TIME-DEPENDENT COEFFICIENTS IN A COX-TYPE REGRESSION MODEL
by
S. A. Murphy
and
P. K. Sen
Department of Statistics
University of North Carolina
Chapel Hill, NC 27514
Abstract: Estimation of a time-varying coefficient in a Cox-type parametrization of the stochastic intensity of a point process is considered. A sieve estimation procedure (Grenander, 1981) is used to estimate the coefficient. A rate of convergence in probability for the sieve estimator is given, and a functional CLT for the integrated sieve estimator is proved.
AMS subject classification: 60G55, 62M09, 62P10
Keywords: Point Process, Cox Regression, Method of Sieves
0. Introduction
Suppose an output or dependent counting process N and an input or independent covariate process X are observed. A model relating N to X which is often used in survival analysis is the Cox Regression Model (Cox, 1972; Andersen and Gill, 1982). This model stipulates that the stochastic intensity of N is

$$\lambda_s = Y_s\, e^{\beta_0 X_s}\, \lambda_0(s).$$

In the above, the regression coefficient $\beta_0$ is an unknown scalar, $\lambda_0$ is an unspecified deterministic function, and $Y$ is a predictable $\{0,1\}$-valued (at-risk) process. Since $\beta_0$ is constant in time, the above model implies that the regression relationship between N and X is stationary. Since this may not be the case, several authors have considered a time-varying regression coefficient (Brown, 1975; Taulbee, 1979; Stablein et al., 1981; Zucker and Karr, 1988).
Brown, Taulbee, and Stablein et al. make simplifying assumptions on the form of $\beta_0$ so as to maintain a finite-dimensional parameter space. Zucker and Karr, using a penalized likelihood technique, allow $\beta_0$ to be infinite dimensional (i.e., a function of time). Their analysis is developed within the survival analysis context; that is, where N can have at most one jump. The method presented here, which also allows $\beta_0$ to be infinite dimensional, utilizes the method of sieves (Grenander, 1981), and in particular a very simple sieve, the histogram sieve. This choice of sieve retains the simplicity of analysis present in methods involving only a finite-dimensional parameterization of the regression coefficient $\beta_0$. In addition, the estimation method presented below is applicable not only in the survival analysis context, but also in the more general context where N is allowed multiple jumps. The histogram sieve was used by Friedman (1982) in the survival analysis context for the purpose of estimating $\lambda_0$. McKeague (1987) and Leskow (1987) also use the histogram sieve for estimation purposes in the multiplicative intensity model of Aalen (1978).
Section 1 contains a description of the statistical model with a list of the assumptions made in the following theorems. Weak consistency (with a rate of convergence) is proved in Section 2. Next, in Section 3, a functional central limit theorem is given for the integrated regression coefficient. Section 4 presents a consistent estimator of the asymptotic variance process, and the last section contains the technical details.
1. Statistical Model

For each $n$, one observes an $n$-component multivariate counting process, $N^n = (N^n(1), \ldots, N^n(n))$, over the time interval $[0,T]$. For example, $N^n(i)$ might count certain life events for individual $i$. $N^n$ is defined on a stochastic base $(\Omega^n, \mathcal{F}^n, \{\mathcal{F}^n_t : t \in [0,T]\})$ with respect to which $N^n$ has stochastic intensity $\lambda^n = (\lambda^n(1), \ldots, \lambda^n(n))$, where

$$\lambda^n_s(i) = Y^n_s(i)\, e^{\beta_0(s) X^n_s(i)}\, \lambda_0(s).$$

In the above, both $\beta_0$ and $\lambda_0$ are deterministic functions on $[0,T]$, $X^n = (X^n(1), \ldots, X^n(n))$ is a vector of locally bounded, predictable stochastic processes, and $Y^n = (Y^n(1), \ldots, Y^n(n))$ is a vector of predictable stochastic processes taking values in $\{0,1\}$. In this paper, $N^n$ having stochastic intensity $\lambda^n$ implies that $M^n = N^n - \int_0^{\cdot} \lambda^n_s\, ds$ is a local square integrable martingale with predictable variation

$$\langle M^n(i), M^n(j) \rangle_t = \int_0^t \lambda^n_s(i)\, ds \ \ \text{for } i = j, \qquad \langle M^n(i), M^n(j) \rangle_t = 0 \ \ \text{for } i \neq j.$$

Since the focus of this paper is on $\beta_0$, inference for $\beta_0$ is based on the logarithm of Cox's partial likelihood (Cox, 1972),

$$\ell_n(\beta) = \sum_{i=1}^{n} \int_0^T \log\!\left[ \frac{e^{\beta(s) X^n_s(i)}}{\sum_{j=1}^n e^{\beta(s) X^n_s(j)}\, Y^n_s(j)} \right] dN^n_s(i).$$
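The log partial likelihood above lends itself to a direct numerical sketch. The following code is not from the paper; it is a minimal illustration under an assumed discrete-time interface, with hypothetical arrays `X` (covariates) and `Y` (at-risk indicators) of shape (n, number of grid points) and jumps recorded as (time, individual) pairs; each jump of $N^n(i)$ contributes one term of the sum.

```python
import numpy as np

def log_partial_likelihood(beta, jumps, X, Y, times):
    """Sketch of l_n(beta) = sum_i int_0^T log[ e^{beta(s) X_s(i)} /
    sum_j e^{beta(s) X_s(j)} Y_s(j) ] dN_s(i).
    beta: callable s -> coefficient value; jumps: list of (time, individual);
    X, Y: arrays of shape (n, len(times)); times: increasing grid on [0, T]."""
    total = 0.0
    for t, i in jumps:
        g = np.searchsorted(times, t)                 # grid index of the jump time
        b = beta(t)
        risk = np.sum(np.exp(b * X[:, g]) * Y[:, g])  # sum_j e^{b X_s(j)} Y_s(j)
        total += b * X[i, g] - np.log(risk)           # log of the ratio at this jump
    return total
```

For instance, with $\beta \equiv 0$ every jump contributes minus the log of the size of the risk set.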
A direct maximization of $\ell_n(\beta)$ over $\beta$ will not produce a meaningful estimate. For example, let $X^n$ be time independent and let each component of $N^n$ have at most one jump; then if $\mathrm{Rank}(X^n) = n$ and the jump of $N^n(i)$ occurs at $\tau_0$, the ratio

$$\frac{e^{\beta(\tau_0) X^n(i)}}{\sum_{j=1}^n e^{\beta(\tau_0) X^n(j)}}$$

can be made as large as desired simply by increasing $\beta(\tau_0)$ (Zucker and Karr, 1988). In this situation, the method of sieves (Grenander, 1981) is often useful. Essentially, an increasing sequence of parameter spaces, say $\{\Theta_n,\ n \ge 1\}$, is given so that within each $\Theta_n$ there exists a maximum likelihood estimate, say $\hat\beta_n$, and $\cup_n \Theta_n$ is dense in $\Theta$, where $\Theta$ is the parameter space of interest.
The histogram sieve is used here:

$$\Theta_n = \left\{ \beta : \beta(s) = \sum_{i=1}^{K_n} b_i\, I\{s \in I^n_i\},\ b_i \in \mathbb{R} \right\}.$$

The $(I^n_1, \ldots, I^n_{K_n})$ are consecutive segments of $[0,T]$. Defining, for each $s \in [0,T]$,

$$S^i_n(\beta, s) = \frac{1}{n} \sum_{j=1}^n e^{\beta(s) X^n_s(j)}\, \big(X^n_s(j)\big)^i\, Y^n_s(j), \qquad i = 0,1,2,3,4,$$

consider the following assumptions:
A. (Asymptotic stability)
1) There exist deterministic functions $S^i(\beta_0, s)$, $i = 0,1,2$, such that
$$\sup_{s \in [0,T]} \big| S^i_n(\beta_0, s) - S^i(\beta_0, s) \big| = o_p(1), \qquad i = 0,1,2;$$
2) $\sup_{s \in [0,T]} \big| S^i_n(\beta_0, s) - S^i(\beta_0, s) \big| = O_p(n^{-1/2})$, $i = 0,1,2$; and
3) there exists $\gamma > 0$ such that
$$\sup_{s \in [0,T]}\ \sup_{b \in \mathbb{R}:\ |b - \beta_0(s)| < \gamma} \big| S^i_n(b, s) - S^i(b, s) \big| = o_p(1), \qquad i = 1,2,3,4.$$

B. (Lindeberg Condition)
1) For all $\epsilon > 0$,
$$\frac{1}{n} \sum_{j=1}^n \int_0^T \big(X^n_s(j)\big)^2\, e^{\beta_0(s) X^n_s(j)}\, Y^n_s(j)\, I\{|X^n_s(j)| > \epsilon \sqrt{n}\}\, \lambda_0(s)\, ds = o_p(1).$$

C. (Asymptotic Regularity)
1) There exist constants $U_1, U_2 > 0$ such that
$$\max\{\lambda_0(s),\ S^i(\beta_0, s),\ i = 0,1,2\} \le U_1 \quad \text{and} \quad S^0(\beta_0, s) \ge U_2$$
for all $s \in [0,T]$; and
2) there exists a constant $L > 0$ such that
$$V(\beta_0, s)\, S^0(\beta_0, s)\, \lambda_0(s) \ge L \quad \text{a.e. Lebesgue on } [0,T].$$

D. (Bias)
1) $\beta_0(s)$ is Lipschitz of order 1 on $[0,T]$.
2) $\beta_0(s)$ has bounded second derivative a.e. Lebesgue on $[0,T]$.
3) $V(\beta_0, s)\, S^0(\beta_0, s)\, \lambda_0(s)$ is continuous in $s$ on $[0,T]$.
4) $V(\beta_0, s)\, S^0(\beta_0, s)\, \lambda_0(s)$ is Lipschitz of order 1 on $[0,T]$.
In the following sections, a member of $\Theta_n$ will be denoted either by its functional form, $\beta(s) = \sum_{i=1}^{K_n} \beta_i I_i(s)$, or by its vector form, $\beta = (\beta_1, \ldots, \beta_{K_n})$. It should be clear from the context which form of $\beta$ is pertinent. The lengths of the $K_n$ intervals, $I^n_1, \ldots, I^n_{K_n}$, will be denoted by $\ell^n = (\ell^n_1, \ldots, \ell^n_{K_n})$, with $\ell_{(1)}$, $\ell_{(K_n)}$, and $\|\ell^n\|$ being the minimum length, the maximum length, and the $\ell_2$ norm, respectively. Other definitions are:

1) $E_n(\beta, s) = S^1_n(\beta, s) / S^0_n(\beta, s)$,
2) $V_n(\beta, s) = S^2_n(\beta, s) / S^0_n(\beta, s) - (E_n(\beta, s))^2$,
3) $E(\beta_0, s) = S^1(\beta_0, s) / S^0(\beta_0, s)$,
4) $V(\beta_0, s) = S^2(\beta_0, s) / S^0(\beta_0, s) - (E(\beta_0, s))^2$, and
5) $\beta^n_0$, the member of $\Theta_n$ maximizing the limiting form of the Kullback-Leibler criterion (2.1) below, on $[0,T]$.

In the following, the superscripts and subscripts $n$ are dropped; only $\beta_0$ and $\lambda_0$ are constant with increasing $n$.
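The quantities $S^i_n$, $E_n$, and $V_n$ at a fixed time point are simple exponentially weighted moments of the covariates over the risk set. The code below is not from the paper; it is a minimal sketch in which `Xs` and `Ys` are hypothetical covariate and at-risk vectors at a single time $s$.

```python
import numpy as np

def S(i, b, Xs, Ys):
    """S^i_n(beta, s) = n^{-1} sum_j e^{beta(s) X_s(j)} X_s(j)^i Y_s(j),
    evaluated at a single time s with b = beta(s)."""
    return np.mean(np.exp(b * Xs) * Xs ** i * Ys)

def E_n(b, Xs, Ys):
    # E_n(beta, s) = S^1_n / S^0_n: exponentially tilted mean covariate
    return S(1, b, Xs, Ys) / S(0, b, Xs, Ys)

def V_n(b, Xs, Ys):
    # V_n(beta, s) = S^2_n / S^0_n - E_n^2: tilted covariate variance (>= 0)
    return S(2, b, Xs, Ys) / S(0, b, Xs, Ys) - E_n(b, Xs, Ys) ** 2
```

At $b = 0$ these reduce to the ordinary empirical moments of the at-risk covariates.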
2. Consistency
One way to prove consistency of a maximum likelihood estimator is to expand the log-likelihood about the true parameter, say $\beta_0$, and then use a fixed point theorem as in Aitchison and Silvey (1958) and Billingsley (1968). However, in the problem considered here, $\beta_0$ is, in general, not a member of $\Theta_n$ for any finite $n$; hence in the following proof, the idea is to expand the log-likelihood about a point in $\Theta_n$, say $\beta^n_0$, which is close to $\beta_0$, instead of expanding about $\beta_0$. This introduces a technical difficulty, as the score function is no longer a martingale but a martingale plus a bias term. To the first order, this bias term can be eliminated by proper choice of $\beta^n_0$, as is given in the previous section. Assumptions D and A2 are then useful in showing that the bias is asymptotically negligible.
Theorem 1. Assume
a) $\lim_n n\|\ell\|^{10} = 0$ (Bias $\to 0$),
b) $\lim_n n\|\ell\|^4 = \infty$ (Variance converges), and
c) A, C, D1;
then for $\hat\beta$ maximizing $\ell_n(\beta)$ in $\Theta_n$, $\|\hat\beta - \beta^n_0\| = O_p\big((n\|\ell\|^4)^{-1/2}\big)$.

PROOF: Recall that $L$ is defined in assumption C2. If, with probability going to 1 (as $n \to \infty$), $\ell_n(\beta) - \ell_n(\beta^n_0) < 0$ for all $\beta$ on the boundary $\partial\Theta_n$ of a ball in $\Theta_n$ about $\beta^n_0$ of radius a (large) constant multiple of $(n\|\ell\|^4)^{-1/2}$, then by lemma 2 of Aitchison and Silvey (1958) there exists $\hat\beta \in \Theta_n$ such that $\frac{\partial}{\partial\beta_i}\ell_n(\beta)\big|_{\beta = \hat\beta} = 0$ for all $i$ and $\|\hat\beta - \beta^n_0\| = O_p\big((n\|\ell\|^4)^{-1/2}\big)$ on a set of probability going to 1. Since $\frac{\partial^2}{\partial\beta_i^2}\ell_n(\beta)$ is nonpositive for each $i$, this proves the conclusion.

Using a Taylor series about the vector $\beta^n_0 = (\beta^n_0(1), \ldots, \beta^n_0(K))$ gives

$$\ell_n(\beta) - \ell_n(\beta^n_0) = \sum_{i=1}^K \frac{\partial}{\partial\beta_i}\ell_n(\beta^n_0)\big(\beta_i - \beta^n_0(i)\big) + \frac12 \sum_{i=1}^K \frac{\partial^2}{\partial\beta_i^2}\ell_n(\beta^*)\big(\beta_i - \beta^n_0(i)\big)^2,$$

where $\|\beta^* - \beta^n_0\| \le \|\beta - \beta^n_0\|$. Consider $\beta \in \partial\Theta_n$. By lemma 1 and the Cauchy-Schwarz inequality, the first (score) term is bounded in absolute value by

$$n\,\ell_{(K)}\Big[\sum_{i=1}^K \big((n\ell_i)^{-1}\tfrac{\partial}{\partial\beta_i}\ell_n(\beta^n_0)\big)^2\Big]^{1/2}\,\|\beta - \beta^n_0\|,$$

while, by lemma 1 and C2, the second (curvature) term is bounded above by

$$-\frac{n\,\ell_{(1)}}{2}\Big[L + o_p(1) + O_p\big((\sqrt{n}\,\|\ell\|^2)^{-1}\big) + O_p(1)\,\|\beta - \beta^n_0\|\Big]\,\|\beta - \beta^n_0\|^2.$$

Since a) implies $\sqrt{n}\,\|\ell\|^5 \to 0$ and b) implies $\|\beta - \beta^n_0\| \to 0$ on $\partial\Theta_n$, the curvature term dominates the score term on $\partial\Theta_n$ once the radius multiple is chosen large. It follows that for every $\epsilon > 0$ there exists $n_\epsilon$ such that for $n > n_\epsilon$,

$$P\big[\ell_n(\beta) - \ell_n(\beta^n_0) < 0 \ \ \forall\, \beta \in \partial\Theta_n\big] \ge 1 - \epsilon,$$

which completes the proof. □
Notes to Theorem 1.

1) Assuming D4 and $\lim_n n\|\ell\|^6 < \infty$ results in
$$\int_0^T \big(\hat\beta(s) - \beta_0(s)\big)^2\, ds = O_p(\|\ell\|^4).$$

2) Since $n\|\ell\|^4\,\|\hat\beta - \beta^n_0\|^2 = O_p(1)$ and $\int_0^T (\hat\beta(s) - \beta^n_0(s))^2\, ds = \sum_i \ell_i (\hat\beta_i - \beta^n_0(i))^2$, one gets $n\|\ell\|^2 \int_0^T (\hat\beta(s) - \beta^n_0(s))^2\, ds = O_p(1)$ whenever $\lim_n \ell_{(K)}/\ell_{(1)} < \infty$. It is natural to question whether this rate can be improved. In general this will not be possible. To see this, let $T = 1$ and $\ell_i = 1/K$ for each $i$ (so $\|\ell\|^2 = 1/K$). It turns out that $\sqrt{n\ell_i}\,\hat\sigma_i^{-1}(\hat\beta_i - \beta_0(i))$, $i = 1, \ldots, K$, behave asymptotically like N(0,1) random variables; this indicates that the approximate distribution of $\sum_{i=1}^K \big(\sqrt{n\ell_i}\,\hat\sigma_i^{-1}(\hat\beta_i - \beta_0(i))\big)^2$ is chi-squared on $K$ degrees of freedom. So one expects that
$$\frac{1}{K}\sum_{i=1}^K \big(\sqrt{n\ell_i}\,\hat\sigma_i^{-1}(\hat\beta_i - \beta_0(i))\big)^2 \stackrel{p}{\to} 1.$$
This can be proven rigorously using lemmas 1 and 3. Since $\hat\sigma_i^2 = O_p(1)$, this gives the rate. Other norms might allow for different rates; for example, using the above intuitive reasoning, it is expected that $\max_{1 \le i \le K} \sqrt{n\,\ell_{(K)}}\,\hat\sigma_i^{-1}\,|\hat\beta_i - \beta_0(i)|$ determines the rate in the sup norm.

3) To understand why the choice of $\beta^n_0$ given above eliminates the bias to a first order, consider the following. Maximizing $\ell_n(\beta)$ is equivalent to maximizing, for $\beta \in \Theta_n$,

(2.1) $\displaystyle \frac{1}{n}\sum_{i=1}^n \int_0^T \big(\beta(s) - \beta_0(s)\big) X_s(i)\, dN_s(i) - \frac{1}{n}\sum_{i=1}^n \int_0^T \log\!\left[\frac{S^0_n(\beta, s)}{S^0_n(\beta_0, s)}\right] dN_s(i)$.

This is "asymptotically" like maximizing (under suitable conditions)

$$\int_0^T \left[\big(\beta(s) - \beta_0(s)\big)\, S^1(\beta_0, s) - S^0(\beta_0, s)\,\log\frac{S^0(\beta, s)}{S^0(\beta_0, s)}\right] \lambda_0(s)\, ds.$$

But the $\beta$ maximizing the RHS of (2.1) is given by $\beta^n_0$. Therefore it is natural to expect that for the maximum partial likelihood estimator $\hat\beta$, the convergence of $\int_0^T (\hat\beta(s) - \beta_0(s))^2\, ds$ to 0 will be of a faster rate than for choices of $\beta \in \Theta_n$ other than $\beta^n_0$.

4) Further consideration of (2.1) lends substance to the use of the $L_2$ norm in proving consistency. Usually in the method of sieves, the Kullback-Leibler information (in this case, (2.1)) determines the norm in which the maximum likelihood estimator converges to $\beta_0$ (see Grenander (1981), Geman and Hwang (1982), and Karr (1987)). In the situation considered here, the $L_2$ norm approximates, to the first order, the Kullback-Leibler information.
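Because a histogram-sieve coefficient is constant on each interval $I_k$, each coordinate $\beta_k$ enters $\ell_n(\beta)$ only through the jumps falling in $I_k$, so the maximizer can be found one interval at a time by solving a one-dimensional score equation. The sketch below is not from the paper; it assumes the same hypothetical discrete-time interface as before (arrays `X`, `Y`; jumps as (time, individual) pairs) and uses Newton's method with score $\sum (X - E_n)$ and information $\sum V_n$.

```python
import numpy as np

def fit_histogram_sieve(jumps, X, Y, times, edges, iters=25):
    """Histogram-sieve maximum partial likelihood sketch: beta is constant on
    [edges[k], edges[k+1]); each b_k solves the score equation
        sum_{jumps t in I_k} (X_t(i_t) - E_n(b_k, t)) = 0
    by Newton's method (score derivative = -sum of V_n over jumps in I_k)."""
    def S(i, b, g):
        return np.mean(np.exp(b * X[:, g]) * X[:, g] ** i * Y[:, g])
    b_hat = np.zeros(len(edges) - 1)
    for k in range(len(edges) - 1):
        in_k = [(t, i) for (t, i) in jumps if edges[k] <= t < edges[k + 1]]
        b = 0.0
        for _ in range(iters):
            score, info = 0.0, 0.0
            for t, i in in_k:
                g = np.searchsorted(times, t)
                s0, s1, s2 = S(0, b, g), S(1, b, g), S(2, b, g)
                e = s1 / s0
                score += X[i, g] - e          # score contribution of this jump
                info += s2 / s0 - e ** 2      # V_n, minus the second derivative
            if info <= 0.0:
                break
            b += score / info                 # one Newton step
        b_hat[k] = b
    return b_hat
```

Note that in the degenerate situation discussed above (a single jump by the individual with the extreme covariate value), the score equation has no finite root and the Newton iteration diverges; that is exactly the phenomenon motivating the sieve.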
3. Asymptotic Normality

In order to conduct inference about the regression coefficient function $\beta_0$, it is useful to consider some sort of weak convergence result for $\hat\beta$. However, in this case and in other situations where the parameter of interest is a function (Karr, 1985; Leskow, 1988; Ramlau-Hansen, 1983), normalized versions of $\hat\beta(t)$ will have asymptotically independent normal limiting distributions. Intuitively, this means that the limiting distribution of $\hat\beta$ is a type of "white noise." This complicates inference using $\hat\beta$ taken as a function, as this excludes a functional central limit theorem. Karr (1985) circumvents this by giving an asymptotic extreme value distribution for a supremum-type statistic. Another possibility is to consider an integrated version of $\hat\beta$, as will be done below. McKeague (1987) also considers an integrated version and then proposes the use of a supremum-type statistic based on the integrated estimator for inference purposes. One might also consider various weighted integrals of $\hat\beta$, i.e.,
$$\int_0^T w_n(x)\big(\hat\beta_n(x) - \beta_0(x)\big)\, dx,$$
as is done in Aalen (1978) and in Gill (1980). In a later paper, issues involving inference will be addressed.
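For a piecewise-constant sieve estimate, the integrated estimator $\int_0^t \hat\beta(s)\,ds$ is an explicit finite sum. A small sketch (assumed interface: `b_hat` holds the per-interval values and `edges` the interval endpoints; neither name comes from the paper):

```python
def integrated_beta(b_hat, edges, t):
    """int_0^t beta_hat(s) ds for beta_hat constant on [edges[k], edges[k+1])."""
    total = 0.0
    for k, b in enumerate(b_hat):
        lo, hi = edges[k], edges[k + 1]
        overlap = max(0.0, min(t, hi) - lo)   # length of I_k intersected with [0, t]
        total += b * overlap
    return total
```

Unlike the raw estimate, this integrated process admits the functional central limit theorem stated next.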
In the following, the existence of a sequence of estimators $\hat\beta_n \in \Theta_n$ is assumed such that $\|\hat\beta_n - \beta^n_0\| = o_p(1)$ as $n \to \infty$.

Theorem 2. Assume
a) $\lim_n n\|\ell\|^8 = 0$ (Bias $\to 0$),
b) $\lim_n n\|\ell\|^4 = \infty$ (Variance converges), and
c) A, B, C, D2, D4, $\lim_n \ell_{(K)}/\ell_{(1)} < \infty$;
then

$$\sqrt{n}\int_0^t \big(\hat\beta_n(s) - \beta_0(s)\big)\, ds \ \stackrel{w}{\Longrightarrow}\ G,$$

where $G$ is a Gaussian martingale with $G_0 = 0$ a.s. and variance function $\langle G \rangle_t = \int_0^t \big[V(\beta_0, s)\, S^0(\beta_0, s)\, \lambda_0(s)\big]^{-1} ds$.
PROOF: Using assumptions D2 and D4, it is easily proved that

$$\sup_{t \in [0,T]} \sqrt{n}\left|\int_0^t \big(\beta^n_0(s) - \beta_0(s)\big)\, ds\right| = O(n^{1/2}\|\ell\|^4),$$

which is $o(1)$ by a). To show that $\sqrt{n}\int_0^t (\hat\beta_n(s) - \beta^n_0(s))\,ds \stackrel{w}{\Rightarrow} G$, consider the following Taylor series:

$$0 = \frac{\partial}{\partial\beta_i}\ell_n(\hat\beta) = \frac{\partial}{\partial\beta_i}\ell_n(\beta^n_0) + \frac{\partial^2}{\partial\beta_i^2}\ell_n(\beta^n_0)\big(\hat\beta_i - \beta^n_0(i)\big) + \frac12\,\frac{\partial^3}{\partial\beta_i^3}\ell_n(\beta^*)\big(\hat\beta_i - \beta^n_0(i)\big)^2,$$

where $\|\beta^* - \beta^n_0\| \le \|\hat\beta - \beta^n_0\|$. Define

$$\hat\sigma_i^{-2} = -(n\ell_i)^{-1}\Big[\frac{\partial^2}{\partial\beta_i^2}\ell_n(\beta^n_0) + \frac12\,\frac{\partial^3}{\partial\beta_i^3}\ell_n(\beta^*)\big(\hat\beta_i - \beta^n_0(i)\big)\Big],$$

and let $\sigma_i^{-2} = \ell_i^{-1}\int_0^T I_i(s)\,V(\beta_0, s)\,S^0(\beta_0, s)\,\lambda_0(s)\,ds$. Lemma 3 implies that $P[\min_{1 \le i \le K} \hat\sigma_i^{-2} > L/2] \to 1$, so it is sufficient to consider what happens on this set only. Therefore, solving for $\hat\beta_i - \beta^n_0(i)$, multiplying by $I_i(s)$, integrating from zero to $t$, and multiplying by $\sqrt{n}$, results in

(3.1) $\displaystyle \sqrt{n}\int_0^t \big(\hat\beta_n(s) - \beta^n_0(s)\big)\, ds = \frac{1}{\sqrt{n}}\int_0^t \sum_{i=1}^K I_i(s)\,\big(\hat\sigma_i^2 - \sigma_i^2\big)\,\ell_i^{-1}\,\frac{\partial}{\partial\beta_i}\ell_n(\beta^n_0)\, ds + \frac{1}{\sqrt{n}}\int_0^t \sum_{i=1}^K I_i(s)\,\sigma_i^2\,\ell_i^{-1}\,\frac{\partial}{\partial\beta_i}\ell_n(\beta^n_0)\, ds$.

To show that the first term on the RHS of (3.1) is $o_p(1)$ in sup norm, apply the Cauchy-Schwarz inequality: its sup norm is bounded by

$$\sqrt{n}\,\ell_{(K)}\Big[\sum_{i=1}^K \big(\hat\sigma_i^2 - \sigma_i^2\big)^2\Big]^{1/2}\Big[\sum_{i=1}^K \Big((n\ell_i)^{-1}\frac{\partial}{\partial\beta_i}\ell_n(\beta^n_0)\Big)^2\Big]^{1/2},$$

which is $o_p(1)$ by lemmas 1 and 3 together with a), b), and $\lim_n \ell_{(K)}/\ell_{(1)} < \infty$.

As for the second term on the RHS of (3.1), writing out the score and averaging over each interval shows that it coincides asymptotically with

$$Z_t = \frac{1}{\sqrt{n}}\int_0^t \Big[\sum_{i=1}^K I_i(s)\,\sigma_i^2\Big]\sum_{j=1}^n \big(X_s(j) - E_n(\beta^n_0, s)\big)\, dN_s(j).$$

Using McKeague's (1987) lemma 4.1, one gets that if $Z \stackrel{w}{\Rightarrow} G$, then the second term on the RHS of (3.1) converges weakly to $G$. Now,

$$Z_t = \frac{1}{\sqrt{n}}\int_0^t \Big[\sum_{i=1}^K I_i(s)\,\sigma_i^2\Big]\sum_{j=1}^n \big(X_s(j) - E_n(\beta^n_0, s)\big)\, dM_s(j) + \sqrt{n}\int_0^t \Big[\sum_{i=1}^K I_i(s)\,\sigma_i^2\Big]\big(E_n(\beta_0, s) - E_n(\beta^n_0, s)\big)\, S^0_n(\beta_0, s)\,\lambda_0(s)\, ds.$$

By lemma 4, the second term of $Z_t$ is $o_p(1)$ in sup norm. As for the first term, the idea is to use the version of Rebolledo's central limit theorem in Andersen and Gill (1982). Call the first term of $Z_t$, $Y_t$. Since

$$\langle Y \rangle_t = \int_0^t \Big[\sum_{i=1}^K I_i(s)\,\sigma_i^2\Big]^2 \frac{1}{n}\sum_{j=1}^n \big(X_s(j) - E_n(\beta^n_0, s)\big)^2 e^{\beta_0(s) X_s(j)}\, Y_s(j)\,\lambda_0(s)\, ds$$

and $\max_{1 \le i \le K} \sup_{s \in I_i} \big|\sigma_i^{-2} - V(\beta_0, s)\,S^0(\beta_0, s)\,\lambda_0(s)\big| \to 0$ (by the continuity of $V(\beta_0, s) S^0(\beta_0, s) \lambda_0(s)$ in $s$), one gets, using A1 and lemma 2, that $\langle Y \rangle_t \stackrel{p}{\to} \int_0^t [V(\beta_0, s) S^0(\beta_0, s) \lambda_0(s)]^{-1}\, ds$ for each $t$.

A Lindeberg condition must be satisfied also. Since $\min_i \sigma_i^{-2} \ge L$ by C2, the weight $[\sum_i I_i(s)\sigma_i^2]^2$ is bounded, and the Lindeberg condition will be satisfied if

(3.2) $\displaystyle \frac{1}{n}\sum_{j=1}^n \int_0^T \big(X_s(j) - E_n(\beta^n_0, s)\big)^2 e^{\beta_0(s) X_s(j)}\, Y_s(j)\,\lambda_0(s)\, I\{s : |X_s(j) - E_n(\beta^n_0, s)| > \epsilon\sqrt{n}\}\, ds = o_p(1) \quad \forall\, \epsilon > 0$.

The LHS of (3.2) is bounded above by

$$\frac{4}{n}\sum_{j=1}^n \int_0^T X_s(j)^2\, e^{\beta_0(s) X_s(j)}\, Y_s(j)\,\lambda_0(s)\, I\Big\{s : |X_s(j)| > \frac{\epsilon\sqrt{n}}{2}\Big\}\, ds$$
$$+\ \frac{4}{n}\sum_{j=1}^n \int_0^T X_s(j)^2\, e^{\beta_0(s) X_s(j)}\, Y_s(j)\,\lambda_0(s)\, I\Big\{s : |E_n(\beta^n_0, s)| > \frac{\epsilon\sqrt{n}}{2}\Big\}\, ds + o_p(1),$$

and since $\sup_s |E_n(\beta^n_0, s)| = O_p(1)$ by A1, C1, and lemma 2, the second of these terms vanishes for $n$ large. So the LHS of (3.2) is $o_p(1)$ (by B and A1). Rebolledo's theorem now gives $Y \stackrel{w}{\Rightarrow} G$, hence $Z \stackrel{w}{\Rightarrow} G$, and the theorem follows. □
4. Consistent Estimator for the Asymptotic Variance Process

Theorem 4.1. Assume
a) $n\|\ell\|^4 \to \infty$, and
b) A1, A3, C, D1, D3, $\lim_n \ell_{(K)}/\ell_{(1)} < \infty$;
then

$$\sup_{0 \le t \le T}\left|\int_0^t \Big[\sum_{i=1}^K I_i(s)\,(\ell_i n)^{-1}\Big(-\frac{\partial^2}{\partial\beta_i^2}\ell_n(\hat\beta)\Big)\Big]^{-1} ds - \int_0^t \big[V(\beta_0, s)\, S^0(\beta_0, s)\,\lambda_0(s)\big]^{-1} ds\right| = o_p(1).$$
PROOF: The difference of the integrands factors as

(4.1) $\displaystyle \Big(V(\beta_0, s) S^0(\beta_0, s)\lambda_0(s) - \sum_{i=1}^K I_i(s)(\ell_i n)^{-1}\big({-\tfrac{\partial^2}{\partial\beta_i^2}}\ell_n(\hat\beta)\big)\Big)\cdot\Big[\sum_{i=1}^K I_i(s)(\ell_i n)^{-1}\big({-\tfrac{\partial^2}{\partial\beta_i^2}}\ell_n(\hat\beta)\big)\Big]^{-1}\big[V(\beta_0, s) S^0(\beta_0, s)\lambda_0(s)\big]^{-1}$.

Consider the first factor on the RHS of (4.1); in sup norm it is bounded by the sum of

$$\sup_{0 \le s \le T}\Big|\sum_{i=1}^K I_i(s)(\ell_i n)^{-1}\Big[\frac{\partial^2}{\partial\beta_i^2}\ell_n(\hat\beta) - \frac{\partial^2}{\partial\beta_i^2}\ell_n(\beta^n_0)\Big]\Big|, \qquad \sup_{0 \le s \le T}\Big|\sum_{i=1}^K I_i(s)\Big[(\ell_i n)^{-1}\frac{\partial^2}{\partial\beta_i^2}\ell_n(\beta^n_0) + \sigma_i^{-2}\Big]\Big|,$$

and $\sup_s \big|\sum_i I_i(s)\,\sigma_i^{-2} - V(\beta_0, s) S^0(\beta_0, s)\lambda_0(s)\big|$, where $\sigma_i^{-2} = \ell_i^{-1}\int_0^T I_i(s)\,V(\beta_0, s) S^0(\beta_0, s)\lambda_0(s)\,ds$ as in lemma 1. The second term above is $o_p(1)$ by lemma 1, and the third term is $o_p(1)$ by the continuity of $V(\beta_0, s) S^0(\beta_0, s)\lambda_0(s)$ (assumption D3). As for the first term, it is $o_p(1)$ by A3, lemma 2, and the fact that $\|\hat\beta - \beta^n_0\| \stackrel{p}{\to} 0$. That the second factor in (4.1) is $O_p(1)$ can be proved by lemma 2, a), and A3. □
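Since $-\partial^2 \ell_n(\hat\beta)/\partial\beta_k^2$ is a sum of $V_n(\hat\beta_k, \cdot)$ over the jumps in $I_k$, the estimator of Theorem 4.1 can be assembled directly. The sketch below is not from the paper; it assumes the same hypothetical discrete-time interface used earlier (`jumps` as (time, individual) pairs; arrays `X`, `Y`).

```python
import numpy as np

def variance_process(b_hat, edges, jumps, X, Y, times, t):
    """Plug-in estimate int_0^t [ sum_k I_k(s) (l_k n)^{-1}
    (-d^2/db_k^2 l_n(b_hat)) ]^{-1} ds of the asymptotic variance process;
    -d^2/db_k^2 l_n(b_hat) = sum over jumps in I_k of V_n(b_hat_k, jump time)."""
    n = X.shape[0]
    total = 0.0
    for k, b in enumerate(b_hat):
        lo, hi = edges[k], edges[k + 1]
        overlap = max(0.0, min(t, hi) - lo)   # length of I_k below t
        if overlap == 0.0:
            continue
        info = 0.0                            # observed information for coordinate k
        for tj, _ in jumps:
            if lo <= tj < hi:
                g = np.searchsorted(times, tj)
                w = np.exp(b * X[:, g]) * Y[:, g]
                e = np.sum(w * X[:, g]) / np.sum(w)
                info += np.sum(w * X[:, g] ** 2) / np.sum(w) - e ** 2
        if info > 0.0:
            total += overlap * ((hi - lo) * n) / info   # [(l_k n)^{-1} info]^{-1}
    return total
```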
5. Appendix
Lemma 1. Assume
a) $\lim_n \ell_{(K)}/\ell_{(1)} < \infty$, and
b) A, C1, D1;
then, with $\sigma_i^{-2} = \ell_i^{-1}\int_0^T I_i(s)\,V(\beta_0, s)\,S^0(\beta_0, s)\,\lambda_0(s)\,ds$,

1) $\displaystyle \sum_{i=1}^K \Big((n\ell_i)^{-1}\sum_{j=1}^n \int_0^T I_i(s)\big[X_s(j) - E_n(\beta^n_0, s)\big]\, dN_s(j)\Big)^2 = O_p\big((n\|\ell\|^4)^{-1}\big) + O_p(\|\ell\|^6) + o_p(1)$,

2) $\displaystyle \max_{1 \le i \le K}\Big|(n\ell_i)^{-1}\frac{\partial^2}{\partial\beta_i^2}\ell_n(\beta^n_0) + \sigma_i^{-2}\Big| = O_p\big((\sqrt{n}\,\|\ell\|^2)^{-1}\big) + o_p(1)$, and

3) $\displaystyle \max_{1 \le i \le K}\ \sup_{\beta^* \in \Theta_n:\ \|\beta^* - \beta^n_0\| < \gamma/2}\Big|(n\ell_i)^{-1}\frac{\partial^3}{\partial\beta_i^3}\ell_n(\beta^*)\Big| = O_p(1)$.

PROOF: For 1), decomposing $dN = dM + \lambda\, ds$ gives

(5.1) $\displaystyle \sum_{i=1}^K \Big((n\ell_i)^{-1}\sum_{j=1}^n \int_0^T I_i(s)\big[X_s(j) - E_n(\beta^n_0, s)\big]\, dN_s(j)\Big)^2 \le 2\sum_{i=1}^K \Big((n\ell_i)^{-1}\sum_{j=1}^n \int_0^T I_i(s)\big[X_s(j) - E_n(\beta^n_0, s)\big]\, dM_s(j)\Big)^2$
$$+\ 2\sum_{i=1}^K \Big(\ell_i^{-1}\int_0^T I_i(s)\big[E_n(\beta_0, s) - E_n(\beta^n_0, s)\big]\, S^0_n(\beta_0, s)\,\lambda_0(s)\, ds\Big)^2.$$

Consider the first term on the RHS of (5.1) and let

$$Z_t = \sum_{i=1}^K \Big((n\ell_i)^{-1}\sum_{j=1}^n \int_0^t I_i(s)\big[X_s(j) - E_n(\beta^n_0, s)\big]\, dM_s(j)\Big)^2.$$

The compensator of $Z$ is

$$C_t = \int_0^t \sum_{i=1}^K I_i(s)\,\ell_i^{-2}\, n^{-1}\big[S^2_n(\beta_0, s) - 2 S^1_n(\beta_0, s) E_n(\beta^n_0, s) + E_n(\beta^n_0, s)^2 S^0_n(\beta_0, s)\big]\lambda_0(s)\, ds.$$

To show that $Z_t$ has the same limit in probability as its compensator $C_t$, it is sufficient (by Lenglart's inequality, Lenglart (1977)) to show that the quadratic variation of $\|\ell\|^4 n (Z - C)$ goes to zero in probability. Denoting the endpoints of interval $I_i$ by $a_i$ and $a_{i+1}$, and defining

$$M^*(a_i, s) = \frac{2}{n}\sum_{j=1}^n \int_{a_i}^s I_i(u)\big[X_u(j) - E_n(\beta^n_0, u)\big]\, dM_u(j),$$

the optional variation of $\|\ell\|^4 n (Z - C)$ is computed as in Kopp (1984, pg. 148); its compensator is bounded, using A3 and C1, by

$$\|\ell\|^2\, O_p(1)\int_0^T \sum_{i=1}^K I_i(s)\, M^*(a_i, s)^2\, ds + n^{-1} O_p(1) + \|\ell\|^2\, O_p(1)\int_0^T \sum_{i=1}^K I_i(s)\, |M^*(a_i, s)|\, ds.$$

Now

$$\max_{1 \le i \le K}\ \sup_{s \in I_i} |M^*(a_i, s)| \le 2 \sup_{s} |M^*(0, s]| + \max_{1 \le i \le K} |M^*(0, a_i]|,$$

and using Lenglart's inequality (1977) with $B > 0$, both are $O_p(n^{-1/2})$ for $B$ and $n$ large (use A3, C1); therefore $\|\ell\|^2 \int_0^T \sum_i I_i(s) |M^*(a_i, s)|\, ds = O_p(n^{-1/2})$. Similarly, $\int_0^t \sum_i I_i(s) M^*(a_i, s)^2\, ds$, centered at its compensator, is a local martingale, so by Lenglart's inequality (1977) it is $O_p(n^{-1})$ for $B$ and $n$ large (use A1, C1, and lemma 2). Therefore the quadratic variation of $\|\ell\|^4 n (Z - C)$ is $o_p(1)$, which, as mentioned earlier, implies that $\|\ell\|^4 n\,\sup_t |Z_t - C_t| = o_p(1)$. Since, by A1 and C1,

$$\|\ell\|^4 n\,\sup_{0 \le t \le T}\Big|C_t - \int_0^t \sum_{i=1}^K I_i(s)\,\ell_i^{-2}\, n^{-1}\, V(\beta_0, s)\, S^0(\beta_0, s)\,\lambda_0(s)\, ds\Big| = o_p(1),$$

this concludes the proof for the first term on the RHS of (5.1).

Consider the second term on the RHS of (5.1). Let

$$\|x\|^2 = \sum_{i=1}^K \Big(\ell_i^{-1}\int_0^T I_i(s)\big(\beta_0(s) - \beta^n_0(s)\big)\big[V_n(\beta_0, s) S^0_n(\beta_0, s) - V(\beta_0, s) S^0(\beta_0, s)\big]\lambda_0(s)\, ds\Big)^2$$

and

$$\|y\|^2 = \sum_{i=1}^K \Big(\ell_i^{-1}\int_0^T I_i(s)\big(\beta_0(s) - \beta^n_0(s)\big)^2 S^0_n(\beta_0, s)\,\lambda_0(s)\, ds\Big)^2.$$

Using a Taylor series for fixed $s$ to expand $E_n(\beta_0, s) - E_n(\beta^n_0, s)$, and since $\sup_s |\beta^n_0(s) - \beta_0(s)| = o(1)$, the second term on the RHS of (5.1) is bounded by $2\|x\|^2 + 2\|y\|^2$ plus asymptotically negligible remainders. It turns out that $\|x\|^2 = O_p\big((n\|\ell\|^2)^{-1}\big)$ and $\|y\|^2 = O(\|\ell\|^6)$, so that the second term on the RHS of (5.1) is equal to $O_p\big((n\|\ell\|^2)^{-1}\big) + O_p(\|\ell\|^6)$.

For 2), note that $-\frac{\partial^2}{\partial\beta_i^2}\ell_n(\beta^n_0) = \sum_j \int_0^T I_i(s)\, V_n(\beta^n_0, s)\, dN_s(j)$, so that, decomposing $dN$ as above,

$$\Big|(n\ell_i)^{-1}\frac{\partial^2}{\partial\beta_i^2}\ell_n(\beta^n_0) + \sigma_i^{-2}\Big| \le (n\ell_i)^{-1}\Big|\int_0^T I_i(s)\, V_n(\beta^n_0, s)\, d\bar M_s\Big| + \ell_i^{-1}\Big|\int_0^T I_i(s)\big[V_n(\beta^n_0, s) S^0_n(\beta_0, s) - V(\beta_0, s) S^0(\beta_0, s)\big]\lambda_0(s)\, ds\Big|,$$

where $\bar M = \sum_j M(j)$. So by lemma 2 and C1,

$$\max_{1 \le i \le K}\Big|(n\ell_i)^{-1}\frac{\partial^2}{\partial\beta_i^2}\ell_n(\beta^n_0) + \sigma_i^{-2}\Big| = O_p\big((\sqrt{n}\,\|\ell\|^2)^{-1}\big) + o_p(1).$$

For 3), the third derivative is a sum over jumps of ratios of the $S^i_n(\beta^*, \cdot)$, $i \le 3$, to $S^0_n(\beta^*, \cdot)$; by assumption A3, C1, and lemma 2, the normalized maximum is $O_p(1)$. □
Lemma 2. Assume A1, A3, C1, and D1; then

1) $\displaystyle \max_{1 \le i \le K}\Big|\frac{1}{\sqrt{n}}\int_0^T I_i(s)\, d\bar M_s\Big| = o_p(1)$, and

2) $\displaystyle \sup_{0 \le s \le T}\big|S^i_n(\beta_0, s) - S^i_n(\beta^n_0, s)\big| = O_p(\ell_{(K)})$, $i = 0,1,2$.

PROOF: 1) Let $B > 0$ and consider

$$\max_{1 \le i \le K}\Big|\frac{1}{\sqrt{n}}\int_0^T I_i(s)\, d\bar M_s\Big| \le 2\sup_{t \in [0,T]}\Big|\frac{1}{\sqrt{n}}\,\bar M_t\Big|.$$

Using the version of Rebolledo's central limit theorem present in Andersen and Gill (1982), it is easily proved that for $Z_t = n^{-1/2}\bar M_t$, $Z$ converges weakly to a Gaussian martingale with variance function $\int_0^t S^0(\beta_0, s)\,\lambda_0(s)\, ds$. An application of the continuous mapping theorem (Theorem 5.1 in Billingsley, 1968) suffices to prove 1).

2) Fix $s$; then using a Taylor series about $\beta^n_0(s)$ results in

$$S^i_n(\beta_0, s) - S^i_n(\beta^n_0, s) = \big(\beta_0(s) - \beta^n_0(s)\big)\, S^{i+1}_n(\beta^*(s), s), \qquad i = 0,1,2,$$

where $|\beta^*(s) - \beta^n_0(s)| \le |\beta_0(s) - \beta^n_0(s)|$. Therefore

$$\sup_{0 \le s \le T}\big|S^i_n(\beta_0, s) - S^i_n(\beta^n_0, s)\big| \le \sup_{0 \le s \le T}\big|\beta_0(s) - \beta^n_0(s)\big|\cdot O_p(1) = O_p(\ell_{(K)}),$$

where the $O_p(1)$ follows by A3 and the bound on $\sup_s |\beta_0(s) - \beta^n_0(s)|$ by D1. □
Lemma 3. Assume A, B1, C1, and $\lim_n \ell_{(K)}/\ell_{(1)} < \infty$; then

$$\sum_{i=1}^K \big(\hat\sigma_i^2 - \sigma_i^2\big)^2 = \|\ell\|^4\Big[O_p\Big(\frac{1}{\|\ell\|^2 n}\Big) + O_p(\|\ell\|^2) + O_p\big(\|\hat\beta - \beta_0\|^2\big)\Big].$$

PROOF: Expanding $\hat\sigma_i^{-2}$ (defined in the proof of Theorem 2) about $\beta^n_0$ and using C2 gives

$$\sum_{i=1}^K \big(\hat\sigma_i^2 - \sigma_i^2\big)^2 \le 4\sum_{i=1}^K \Big((n\ell_i)^{-1}\int_0^T I_i(s)\, V_n(\beta^n_0, s)\, d\bar M_s\Big)^2$$
$$+\ 4\sum_{i=1}^K \Big(\ell_i^{-1}\int_0^T I_i(s)\big[V_n(\beta^n_0, s) S^0_n(\beta_0, s) - V(\beta_0, s) S^0(\beta_0, s)\big]\lambda_0(s)\, ds\Big)^2$$
$$+\ \frac{\ell_{(K)}}{2}\,\|\hat\beta - \beta^n_0\|^2 \max_{1 \le i \le K}\ \sup_{\beta^* \in \Theta_n:\ \|\beta^* - \beta^n_0\| < \gamma/2}\Big|(n\ell_i)^{-1}\frac{\partial^3}{\partial\beta_i^3}\ell_n(\beta^*)\Big|^2.$$

Using Lenglart's inequality (Lenglart, 1977) it is easy to show (using lemma 2) that the first (martingale) term is $\|\ell\|^4\, O_p\big((\|\ell\|^2 n)^{-1}\big)$, and the third term is handled by lemma 1, part 3. All that is left is to prove that

(5.4) $\displaystyle \sum_{i=1}^K \Big(\ell_i^{-1}\int_0^T I_i(s)\big[V_n(\beta^n_0, s) - V_n(\beta_0, s)\big] S^0_n(\beta_0, s)\,\lambda_0(s)\, ds\Big)^2 + \sum_{i=1}^K \Big(\ell_i^{-1}\int_0^T I_i(s)\big[V_n(\beta_0, s) S^0_n(\beta_0, s) - V(\beta_0, s) S^0(\beta_0, s)\big]\lambda_0(s)\, ds\Big)^2$

is of the stated order. Using lemma 2, it is easy to show that the first term in (5.4) is $\|\ell\|^4\, O_p(\|\ell\|^2)$. The second term can be divided up into terms such as

$$\sum_{i=1}^K \Big(\ell_i^{-1}\int_0^T I_i(s)\big|S^j_n(\beta_0, s) - S^j(\beta_0, s)\big|\, ds\Big)^2\, O_p(1), \qquad j = 0,1,2,$$

by lemma 2 and the fact that $\inf_{0 \le s \le T} S^0(\beta_0, s) > 0$. The proof will be concluded if, for $j = 0,1,2$, these terms are $\|\ell\|^4\, o_p(1)$. Each is less than or equal to $K \sup_{0 \le s \le T}\big|S^j_n(\beta_0, s) - S^j(\beta_0, s)\big|^2$; using A2 and $\lim_n \ell_{(K)}/\ell_{(1)} < \infty$ yields the desired result. □
Lemma 4. Assume A, C, D, and $\lim_n \ell_{(K)}/\ell_{(1)} < \infty$; then

$$\sup_{0 \le t \le T}\Big|\sqrt{n}\int_0^t \Big[\sum_{i=1}^K I_i(s)\,\sigma_i^2\Big]\big(E_n(\beta_0, s) - E_n(\beta^n_0, s)\big)\, S^0_n(\beta_0, s)\,\lambda_0(s)\, ds\Big| = o_p(1) + O_p\big(\sqrt{n}\,\|\ell\|^4\big).$$

PROOF: Using a Taylor series on $E_n(\cdot, s)$ about $\beta_0(s)$ at each $s$ results in

$$E_n(\beta_0, s) - E_n(\beta^n_0, s) = \big(\beta_0(s) - \beta^n_0(s)\big)\, V_n(\beta^{**}(s), s),$$

where $|\beta^{**}(s) - \beta_0(s)| \le |\beta_0(s) - \beta^n_0(s)|$, and subsequently, for $|\beta_0(s) - \beta^n_0(s)| < \gamma$,

$$\sup_{0 \le s \le T}\ \sup_{\beta:\ |\beta(s) - \beta_0(s)| < \gamma}\big|V_n(\beta, s)\big| = O_p(1)$$

by A3 (with A1 and C1). The quantity to be bounded therefore splits into

$$\sup_{0 \le t \le T}\Big|\sqrt{n}\int_0^t \big(\beta_0(s) - \beta^n_0(s)\big)\Big[\sum_i I_i(s)\,\sigma_i^2\Big] V(\beta_0, s)\, S^0(\beta_0, s)\,\lambda_0(s)\, ds\Big|$$

plus remainder terms. Using the definition of $\beta^n_0$, it is easy to see that $\sup_t |\sqrt{n}\int_0^t (\beta_0(s) - \beta^n_0(s))\, ds|$ is $O(\sqrt{n}\,\|\ell\|^4)$; since $|\beta_0(s) - \beta^n_0(s)| = O(\ell_{(K)})$ by D1 and $\big|\sigma_i^2\, V(\beta_0, s) S^0(\beta_0, s)\lambda_0(s) - 1\big| = O(\ell_{(K)})$ on $I_i$ by D4, the first of these terms is $O\big(\sqrt{n}\,\|\ell\|^4\big)$. The remaining term is bounded by

$$O_p(1)\,\sqrt{n}\int_0^T \big|\beta_0(s) - \beta^n_0(s)\big|\,\big|V_n(\beta_0, s)\, S^0_n(\beta_0, s) - V(\beta_0, s)\, S^0(\beta_0, s)\big|\,\lambda_0(s)\, ds + O_p\big(\sqrt{n}\,\|\ell\|^4\big)$$
$$= O_p\big(\sqrt{n}\,\|\ell\|^2\big)\int_0^T \big|V_n(\beta_0, s)\, S^0_n(\beta_0, s) - V(\beta_0, s)\, S^0(\beta_0, s)\big|\, ds + O_p\big(\sqrt{n}\,\|\ell\|^4\big)$$

by D4. Using A2 and C1 results in

$$\int_0^T \big|V_n(\beta_0, s)\, S^0_n(\beta_0, s) - V(\beta_0, s)\, S^0(\beta_0, s)\big|\, ds = O_p(n^{-1/2}),$$

which yields the desired result. □