Thavaneswaran, A. and Habib, Muhammad K. (1987). Recursive Parameter Estimation for Semimartingales.

Recursive Parameter Estimation for Semimartingales

by

A. Thavaneswaran
and
Muhammad K. Habib

Department of Biostatistics
University of North Carolina at Chapel Hill

Institute of Statistics Mimeo Series No. 1819
March 1987
Recursive Parameter Estimation for Semimartingales

A. Thavaneswaran and M.K. Habib
Department of Biostatistics
University of North Carolina
Chapel Hill, NC 27514 USA
A recursive estimation algorithm is presented for a semimartingale model based on the theory of optimal estimating functions. Strong consistency and asymptotic normality of the recursive estimate generated by the algorithm are established. This recursive algorithm may be used to handle the problems of missing observations and censored observations.

AMS 1980 Subject Classifications: 60G, 60K99, 62F99

KEYWORDS AND PHRASES: Asymptotic normality, consistency, optimal estimation, recursive estimation, semimartingales
Research supported by the Office of Naval Research under
contract N00014-83-K-0387.
1. Introduction
This paper deals with the problem of recursively estimating a $p$-dimensional parameter, $\theta$, that occurs in the predictable part of a semimartingale model. Semimartingales have been shown to serve as models of observable stochastic processes occurring in many applied areas such as economics and finance (Aase, 1985) and neurophysiology (Habib and Thavaneswaran, 1986).
Off-line estimation procedures using the method of optimal estimation have been analyzed in Thavaneswaran and Thompson (1986), quasi-likelihood has been treated in Hutton and Nelson (1986), and quasi-least-squares has been investigated in Christopeit (1986).
Aase (1981, 1982, 1983) has proposed and studied recursive estimates for several regression-type models in the i.i.d. and non-i.i.d. setups. Following Aase (1982), Thavaneswaran (1986) proposed a recursive estimate for a nonlinear counting process model and studied its asymptotic properties.
In this paper a recursive estimate of $\theta$ based on the theory of optimal estimating functions is derived, and its asymptotic properties are studied for multivariate semimartingales with multidimensional parameters. This approach enables us to study recursive estimation for Itô-Markov models as a special case; to our knowledge, recursive estimation for Itô-Markov models has not been treated earlier. The resulting algorithms are related to Kalman-Bucy type algorithms in special cases. Advantages of using the recursive estimate over its off-line version are also included.
A semimartingale is a stochastic process which can be represented as the sum of a process of bounded variation and a local martingale. In the case of continuous-time processes, a typical example of such a process is a process $(X_t, t \ge 0)$ with independent increments for which $E|X_t|$ is finite and a function of locally bounded variation. The class of semimartingales includes point processes (counting processes, Poisson processes, extended gamma processes, branching processes), Itô processes, Itô-Markov processes, diffusion processes, etc.
In Section 2 the problem is formulated, and in Section 3 Godambe's optimality theorem is proved. Sections 4 and 5 deal with recursive estimates and their asymptotic properties.
2. Notations and Conventions

We assume that all the semimartingales that will appear in the sequel are defined on some fixed complete probability space $(\Omega, F, P)$ for each $P$ in a family $\{P\}$ of probability measures. We also assume a family $F = \{F_t, t \ge 0\}$ of $\sigma$-algebras satisfying the standard conditions: $F_s \subseteq F_t \subseteq F$ for $s \le t$; $F_0$ is augmented by the sets of measure zero of $F$; and $F_t = F_{t+}$, where $F_{t+} = \bigcap_{s > t} F_s$. We denote by $D$ the space of right-continuous functions $X = (X_t, t \ge 0)$ having limits on the left, and use $X = (X_t, F_t)$ to denote an $\{F_t\}$-adapted random process $(X_t)$ with trajectories in the space $D$. Assume that the process $X = (X_t, F_t)$ is a semimartingale for each $P$; that is, for each $P$ it can be represented in the form

$$X_t = V_t + H_t, \qquad (2.1)$$

where $V = (V_t, F_t)$ is a locally bounded variation process and $H = (H_t, F_t)$ a locally square-integrable martingale.
We assume that $V$ is an absolutely continuous process of the form

$$V_t = \int_0^t f_{s,\theta} \, dA_s,$$

where $\theta \in R^p$ and $f$ is some other observed process which is assumed to be predictable. The nonrandom parameter $\theta$ is assumed to be unknown, and it will be estimated on the basis of the observations $X_t$ and $f_t$. Here $A = (A_t, F_t)$ is a real, monotonically increasing, right-continuous process with $A_0 = 0$. Hence the model (2.1) takes the form

$$X_t = \int_0^t f_{s,\theta} \, dA_s + H_{t,\theta}.$$

We further assume that

$$\langle H \rangle_{t,\theta} = \int_0^t b_{s,\theta} \, dA_s,$$

where $\{A_t\}$, $\{b_{t,\theta}\}$ and $\{f_t\}$ are predictable processes with respect to $(F_t)$. This is analogous to a regression model where only the form of the dependence structure among the error terms is specified.
3. Godambe's Optimality Criteria

Following Godambe (1985), consider a parameter $\theta$ to be a function of $P \in \mathcal{P}$ (where $\mathcal{P}$ is a family of probability measures). Let $G(X,\theta) = (G_t(X,\theta), F_t)$ represent a family of processes indexed by $\theta$ such that $E_P \, G_t(X,\theta) = 0$. This corresponds to the unbiasedness property of Godambe's (1960) optimality criterion which, adapted to this situation, says that $G^0$ is optimal in $L$, the class of unbiased estimating functions, if $Q = \Sigma_h - \Sigma_{h^0}$ is nonnegative definite for all $G \in L$ and for all $P$, where

$$h(X) = E\left\{ \left[ \frac{\partial G}{\partial \theta} \right]^{-1} \right\} G(X,\theta), \qquad h^0(X) = E\left\{ \left[ \frac{\partial G^0}{\partial \theta} \right]^{-1} \right\} G^0(X,\theta),$$

and $\Sigma_h$ is the variance-covariance matrix of $h$ under $\theta$.

The following lemma states a sufficient condition (due to M.E. Thompson) for optimality (cf. Thavaneswaran (1985), p. 57).

Lemma: $G^0$ is optimal in $L$ if

$$E\left[ G \, G^{0T} \right] = K \, E\left[ \frac{\partial G}{\partial \theta} \right] \qquad \text{for all } G \in L,$$

where $T$ denotes the transpose and $K$ is a constant matrix.
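The content of the lemma can be illustrated in the finite-dimensional analogue of the regression setting mentioned in Section 2: for $y_i = \theta x_i + \epsilon_i$ with known heteroscedastic variances $\sigma_i^2$, the estimating function with weights $a_i = x_i/\sigma_i^2$ is optimal in the class $G = \sum_i a_i (y_i - \theta x_i)$. The following Monte Carlo sketch (illustrative only; the model and constants are not from the paper) compares it with the unweighted choice $a_i = x_i$:

```python
import numpy as np

# Monte Carlo comparison of two unbiased estimating functions for
# y_i = theta * x_i + eps_i, Var(eps_i) = sigma_i^2 (known, heteroscedastic).
# Solving G(theta) = sum_i a_i (y_i - theta x_i) = 0 gives
# theta_hat = sum(a_i y_i) / sum(a_i x_i).
rng = np.random.default_rng(0)
theta = 2.0
x = np.linspace(0.5, 3.0, 50)
sigma = 0.2 + x                   # error standard deviation grows with x

est_opt, est_raw = [], []
for _ in range(2000):
    y = theta * x + sigma * rng.standard_normal(x.size)
    a_opt = x / sigma**2          # optimal weights a_i = x_i / sigma_i^2
    a_raw = x                     # ordinary (unweighted) least-squares choice
    est_opt.append(np.sum(a_opt * y) / np.sum(a_opt * x))
    est_raw.append(np.sum(a_raw * y) / np.sum(a_raw * x))

var_opt, var_raw = np.var(est_opt), np.var(est_raw)
```

Both estimates are unbiased, but the optimally weighted one has the smaller sampling variance, as the lemma predicts.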
Let $X_t = \int_0^t f_{s,\theta} \, dA_s + H_{t,\theta}$ and $\theta \in \Theta \subseteq R^p$. Then it is natural to look for $p$-dimensional estimating functions of the form

$$G_{t,\theta} = \int_0^t a_{s,\theta} \, dH_{s,\theta},$$

generated by $(a_{s,\theta})$. Then

$$\langle G \rangle_{t,\theta} = E \int_0^t a_{s,\theta} \, d\langle H \rangle_{s,\theta} \, a^T_{s,\theta},$$

where $T$ denotes the transpose, and for

$$G^0_{t,\theta} = \int_0^t a^0_{s,\theta} \, dH_{s,\theta},$$

we have

$$\langle G^0 \rangle_{t,\theta} = E \int_0^t a^0_{s,\theta} \, b_{s,\theta} \, a^{0T}_{s,\theta} \, dA_s.$$

Moreover,

$$E\left[ \frac{\partial G_{t,\theta}}{\partial \theta} \right] = \int_0^t E \, a_{s,\theta} \, \frac{\partial}{\partial \theta}\left( dH_{s,\theta} \right) = -E \int_0^t a_{s,\theta} \, \dot f^T_{s,\theta} \, dA_s,$$

where $\dot{}$ denotes the derivative with respect to $\theta$. Hence the optimal estimating function is given by

$$G^0_{t,\theta} = \int_0^t \dot f^T_{s,\theta} \, b^+_{s,\theta} \, dH_{s,\theta},$$

provided that $b^+_{s,\theta}$, the inverse of $b_{s,\theta}$, exists.

4. Recursive Estimation
The optimal estimator $\hat\theta_t$ satisfies the equation

$$\int_0^t a^0_{s,\hat\theta_s} \left( dX_s - f_{s,\hat\theta_s} \, dA_s \right) = 0.$$

If $G$ is a smooth function of both $\theta$ and $t$, it follows from the implicit function theorem that $\hat\theta_t$ satisfies the equation

$$d\hat\theta_t = k_t \, a^0_{t,\hat\theta_t} \left[ dX_t - f_{t,\hat\theta_t} \, dA_t \right],$$

where

$$k_t^{-1} = \dot G^0_t(\hat\theta_t) = \int_0^t a^0_{s,\hat\theta_s} \, \dot f^T_{s,\hat\theta_s} \, dA_s,$$

provided that $a^0_{s,\theta}$ is independent of $\theta$.

Note: In general $a^0_{s,\theta}$ is independent of $\theta$ for models having linear intensity and having the variance process of the martingale independent of $\theta$. When $a^0$ depends on $\theta$, this corresponds to the approximation obtained by equating the martingale term

$$\int_0^t \left[ \frac{\partial a^0_{s,\theta}}{\partial \theta} \right]_{\theta = \hat\theta_s} dH_{s,\hat\theta_s}$$

to zero (cf. Spreij (1986), p. 284). Furthermore, $a^0_{s,\theta} = \dot f^T_{s,\theta} \, b^+_{s,\theta}$ implies that

$$k_t^{-1} = \int_0^t a^0_{s,\hat\theta_s} \, b_{s,\hat\theta_s} \, a^{0T}_{s,\hat\theta_s} \, dA_s.$$

Thus we have the following recursive algorithm for the multivariate semimartingale and multiparameter case:

$$d\hat\theta_t = k_t \, a^0_{t,\hat\theta_t} \left( dX_t - f_{t,\hat\theta_t} \, dA_t \right)$$

and

$$dk_t = -k_t \, a^0_{t,\hat\theta_t} \, b_{t,\hat\theta_t} \, a^{0T}_{t,\hat\theta_t} \, k_t \, dA_t.$$
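In discrete time the recursive algorithm of this section amounts to a pair of Euler updates over a grid: the gain update $d(k^{-1}) = a^0 b \, a^{0T} dA$ followed by the estimate update $d\hat\theta = k \, a^0 (dX - f \, dA)$. A minimal scalar sketch, with the function names and the sanity-check model chosen for illustration (they are not from the paper):

```python
import numpy as np

def recursive_estimate(theta0, k0, dX, a0, f, b, dA):
    """Scalar Euler discretization of the recursive algorithm:
    d(k^{-1}) = a0 * b * a0 * dA,  d(theta) = k * a0 * (dX - f(theta) dA).
    a0, b, dA are arrays over the time grid; f(i, theta) returns the drift
    coefficient f_{t_i, theta}."""
    theta, kinv = float(theta0), 1.0 / k0
    for i in range(len(dX)):
        kinv += a0[i] * b[i] * a0[i] * dA[i]                      # gain update
        theta += (1.0 / kinv) * a0[i] * (dX[i] - f(i, theta) * dA[i])
    return theta

# Sanity check on the simplest linear model dX_t = theta dt + sigma dW_t,
# where f_{t,theta} = theta, b = sigma^2 and a0 = 1/sigma^2:
rng = np.random.default_rng(1)
theta_true, sigma, dt, n = 1.5, 0.5, 0.01, 20_000
dX = theta_true * dt + sigma * np.sqrt(dt) * rng.standard_normal(n)
theta_hat = recursive_estimate(
    theta0=0.0, k0=1.0, dX=dX,
    a0=np.full(n, 1.0 / sigma**2), f=lambda i, th: th,
    b=np.full(n, sigma**2), dA=np.full(n, dt),
)
```

With a long observation horizon the estimate settles near the true parameter, in line with the consistency results of Section 5.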
Example 4.1: Now we consider the following model as in Aase (1982):

$$dX_s = (G + H\theta) \, ds + \sigma(s) \, dW_s.$$

This corresponds to the case where $f_{s,\theta} = G + H\theta$, $A_s = s$ and $H_{t,\theta} = \int_0^t \sigma(s) \, dW_s$. Hence $a^0_{t,\theta} = H^T (\sigma\sigma^T)^{-1}$, which is independent of $\theta$, and

$$a^0_{t,\theta} \, b_{t,\theta} \, a^{0T}_{t,\theta} = H^T (\sigma\sigma^T)^{-1} H.$$

The recursive estimates are given by

$$d\hat\theta_t = k_t \, H^T (\sigma\sigma^T)^{-1} \left[ dX_t - (G + H\hat\theta_t) \, dt \right],$$

which leads to the same scheme as in Aase (1982), motivated somewhat differently using filtering theory. More interestingly, if $\sigma$ depends on $\theta$ then the likelihood cannot be defined, as the measures induced by $X_t$ and $W_t$ become singular. However, the same algorithm can be used to obtain the recursive optimal estimate.

If we take $\theta_t$, the signal process in the filtering setup, as $\theta_t \equiv \theta$, and assume that $\theta$ has a multivariate normal distribution, then the algorithm obtained here is the same as the nonlinear version of the Kalman filter (Liptser and Shiryayev, 1978).
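The connection with the Kalman filter for a constant signal can be checked directly in the scalar case: starting the recursion from a normal prior $N(\theta_0, k_0)$, the discretized scheme reproduces the conjugate posterior mean of $\theta$ exactly, via the telescoping identity $k_t^{-1}\hat\theta_t = k_0^{-1}\theta_0 + H\sigma^{-2}(X_t - Gt)$. A sketch with illustrative scalar constants (not taken from the paper):

```python
import numpy as np

# Scalar Aase-type model dX_t = (G + H*theta) dt + sigma dW_t, discretized.
rng = np.random.default_rng(2)
G, H, sigma, theta_true = 0.5, 1.2, 0.4, 0.8
dt, n = 0.01, 5000
dX = (G + H * theta_true) * dt + sigma * np.sqrt(dt) * rng.standard_normal(n)

theta0, k0 = 0.0, 1.0                 # prior mean and prior variance for theta
theta, kinv = theta0, 1.0 / k0
for i in range(n):
    kinv += (H**2 / sigma**2) * dt    # d(k^{-1}) = H^T (sigma sigma^T)^{-1} H dt
    theta += (1.0 / kinv) * (H / sigma**2) * (dX[i] - (G + H * theta) * dt)

# Kalman-filter / conjugate-posterior mean for the constant signal theta_t = theta:
T = n * dt
posterior_mean = (theta0 / k0 + (H / sigma**2) * (dX.sum() - G * T)) / (
    1.0 / k0 + H**2 * T / sigma**2
)
```

Up to floating-point rounding, the recursive estimate and the posterior mean coincide, which is the constant-signal Kalman filter identification described above.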
Example 4.2: Now we consider recursive estimation in the following counting process setup, as in Spreij (1986, p. 281):

$$dX_t = f^T(X_{t-}) \theta \, dt + dM_t.$$

This corresponds to the case with $r = 1$, $A_t = t$, $f_{t,\theta} = f^T(X_{t-})\theta$, and $M_t$ a purely discontinuous martingale with predictable variance process $b_{t,\theta} = f^T(X_{t-})\theta$. Hence

$$a^0_{t,\theta} = \frac{f(X_{t-})}{f^T(X_{t-})\theta},$$

and the recursive algorithm turns out to be

$$d\hat\theta_t = k_t \, \frac{f(X_{t-})}{f^T(X_{t-})\hat\theta_{t-}} \left( dX_t - f^T(X_{t-}) \hat\theta_t \, dt \right), \qquad dk_t = -k_t \, \frac{f(X_{t-}) f^T(X_{t-})}{f^T(X_{t-})\hat\theta_{t-}} \, k_t \, dt,$$

which leads to the same algorithm, motivated somewhat differently, in Spreij (1986).

It is of interest to note that the recursive estimate obtained by the method of least squares is not efficient, while the one obtained by the optimal estimation technique turns out to be the approximate maximum likelihood estimate and hence is efficient. Furthermore, the same algorithm may be used as long as $\langle M \rangle_t = \int_0^t f^T(X_{s-})\theta \, ds$, and we do not need to impose any regularity conditions for absolute continuity of the measures induced by $X_t$ and the standard Poisson process. For applications of this algorithm in survival analysis with censored observations and to a random periodic intensity model, together with the connection to the Poisson-Gamma filter, see Thavaneswaran (1986).
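In the simplest instance of this example, $f(X_{t-}) \equiv 1$, the observed process is a Poisson process with rate $\theta$ and the recursive estimate should approximate the maximum likelihood estimate $X_t/t$. Since $a^0 = f/(f^T\theta)$ depends on $\theta$ here, the scheme below is the approximate one discussed in the note of Section 4; the discretization step and constants are illustrative:

```python
import numpy as np

# Poisson process with rate theta_true: dX_t = theta dt + dM_t, b = theta.
rng = np.random.default_rng(3)
theta_true, dt, n = 2.0, 0.001, 500_000               # horizon T = 500
dX = (rng.random(n) < theta_true * dt).astype(float)  # Bernoulli approximation of jumps

theta, kinv = 1.0, 1.0                     # initial guess and initial gain
for i in range(n):
    a0 = 1.0 / theta                       # a0 = f / (f^T theta_hat), f = 1
    kinv += a0 * theta * a0 * dt           # d(k^{-1}) = a0 b a0^T dA, b = theta_hat
    theta += (1.0 / kinv) * a0 * (dX[i] - theta * dt)
    theta = max(theta, 1e-6)               # numerical safeguard: keep the rate positive

mle = dX.sum() / (n * dt)                  # maximum likelihood estimate X_T / T
```

Both the recursive estimate and the MLE settle near the true rate for a long observation window, consistent with the efficiency remark in this example.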
Example 4.3: Next we consider the Itô-Markov model of the form

$$X_t = \int_0^t f(X_s)\theta \, ds + \int_0^t \sigma(X_s) \, dW_s + \int_0^t \int_z C(X_s) \, q(ds, dz),$$

where $f, \sigma, C$ are predictable processes, $W$ is a standard Wiener process, and $q$ is the martingale measure corresponding to a Poisson process $P(ds,dz)$, given by $q(ds,dz) = P(ds,dz) - ds \, \alpha(dz)$, with $\alpha(dz)$ the Lévy measure of $P$.

This corresponds to our setup with $A_t = t$, $f_{t,\theta} = f(X_t)\theta$, and

$$H_t = \int_0^t \sigma(X_s) \, dW_s + \int_0^t \int_z C(X_s) \, q(ds,dz).$$

Hence $a^0_{s,\theta} = f^T(X_s) \, b^+_s$, and the recursive algorithm is given by

$$d\hat\theta_t = k_t \, f^T(X_t) \, b^+_t \left( dX_t - f(X_t)\hat\theta_t \, dt \right), \qquad dk_t = -k_t \, f^T(X_t) \, b^+_t \, f(X_t) \, k_t \, dt.$$

Since $a^0_{s,\theta}$ is independent of $\theta$, this gives an exact algorithm. The fact that the estimator depends on $\sigma$ and $C$ is not a serious restriction because, by Lévy's result for Brownian motion and its extension, the conditional variance $b_t$ can be estimated nonparametrically with variance as small as we please for a given data stretch in continuous time (see Brown and Hewitt (1975)), so it can be assumed to be known.
5. Asymptotic Properties

In this section we establish the strong consistency and asymptotic normality of the recursive estimate for a semimartingale model with linear intensity.

Strong Consistency. Suppose we have

$$dX_t = f(X,t)\,\theta \, dA_t + dH_{t,\theta}.$$

Then the estimate $\hat\theta_t$, with $k_t$ the gain matrix satisfying the recursive form above, can be written as

$$\hat\theta_t = \left[ k_0^{-1} + \int_0^t a^0_{s,\theta}\, b_{s,\theta}\, a^{0T}_{s,\theta}\, dA_s \right]^{-1} \left[ k_0^{-1}\theta_0 + \left( \int_0^t a^0_{s,\theta}\, b_{s,\theta}\, a^{0T}_{s,\theta}\, dA_s \right)\theta + \int_0^t a^0_{s,\theta} \left( dX_s - f(X_s)\,\theta\, dA_s \right) \right],$$

or equivalently,

$$\hat\theta_t - \theta = \left[ k_0^{-1} + \langle M \rangle_t \right]^{-1} \left[ k_0^{-1}(\theta_0 - \theta) + M_t \right],$$

where $M_t = \int_0^t a^0_{s,\theta} \, dH_{s,\theta}$ and $\langle M \rangle_t = \int_0^t a^0_{s,\theta}\, b_{s,\theta}\, a^{0T}_{s,\theta}\, dA_s$.

In general $b_{s,\theta}$ is independent of $\theta$, and $\langle M \rangle_t$ does not depend on $\theta$ through $b$. Hence, for any increasing function $T(t)$ of $t$, we have

$$\hat\theta_t - \theta = \left[ \frac{k_0^{-1}}{T(t)} + \frac{\langle M \rangle_t}{T(t)} \right]^{-1} \left[ \frac{k_0^{-1}(\theta_0 - \theta)}{T(t)} + \frac{M_t}{T(t)} \right]. \qquad (5.1)$$

Theorem 5.2: Let $\Theta$ be an open subset of $R^p$, let $T(t) \to \infty$, and let

$$\liminf_{t \to \infty} \frac{\langle M \rangle_t}{T(t)} \ge \Sigma,$$

a symmetric positive definite matrix. Then $\lim_{t \to \infty} \hat\theta_t = \theta$ a.s.; i.e., $\hat\theta_t$ is a strongly consistent estimate of $\theta$.

Proof. By the hypothesis of the theorem, the denominator of the second term in (5.1) $\to \Sigma$ a.s., and by the martingale strong law of large numbers the numerator $\to 0$ a.s. Hence the result follows.
Asymptotic Normality. In this section we prove the asymptotic normality of the recursive estimate using a martingale central limit theorem.

The recursive estimate of $\theta$ may be written as

$$\hat\theta_t = \theta + \left[ k_0^{-1} + \langle M \rangle_t \right]^{-1} \left[ k_0^{-1}(\theta_0 - \theta) + M_t \right], \qquad (5.2)$$

where $M_t = \int_0^t a^0_s \, dH_s$ and $\langle M \rangle_t = \int_0^t a^0_s \, b_s \, a^{0T}_s \, dA_s$.

Theorem 5.3: Let $M$ be the locally square-integrable martingale defined above. Assume that there exists a function $T : [0,\infty) \to [0,\infty)$ with $T(t) \to \infty$ such that

(i) $\lim_{t \to \infty} T^{-1}(t) \langle M \rangle_t = \Sigma$ in probability, where $\Sigma \in R^{p \times p}$ is a symmetric positive definite nonrandom matrix;

(ii) $\lim_{t \to \infty} T^{-1}(t) \int_0^t a^0_s \ldots$ a.s.

Then

(a) $\langle M \rangle_t^{-1/2} M_t \xrightarrow{L} N(0, I)$, and

(b) $\langle M \rangle_t^{1/2} \left( \hat\theta_t - \theta \right) \xrightarrow{L} N(0, I)$.

Proof: In the one-dimensional case, when $M_t$ has continuous trajectories as in the case of the diffusion process model, (a) is a restatement of Kunita and Watanabe (1967); and when $M_t$ has purely discontinuous trajectories with unit jumps, this corresponds to Metivier's ((1982), p. 200) result. In general (a) follows from Linkov (1982). Applying (i), (ii) and (a) in (5.2) gives (b).
Example 5.4: Now we consider a periodic intensity Itô-Markov model of the form

$$dX_t = \begin{bmatrix} 3 & 3 + 2\sin t \end{bmatrix} \begin{bmatrix} \theta_1 \\ \theta_2 \end{bmatrix} dt + dW_t + (5 + 2\sin t)^{1/2} \, dM^d_t,$$

where $M^d$ is a purely discontinuous martingale with $\langle M^d \rangle_t = t$. Hence $H_t = \int_0^t dW_s + \int_0^t (5 + 2\sin s)^{1/2} \, dM^d_s$ and $b_s = 6 + 2\sin s$, so that, with $\dot f_s = [3, \; 3 + 2\sin s]$,

$$\langle M \rangle_t = \int_0^t \frac{\dot f^T_s \dot f_s}{6 + 2\sin s} \, ds = \int_0^t \frac{1}{6 + 2\sin s} \begin{bmatrix} 9 & 9 + 6\sin s \\ 9 + 6\sin s & 9 + 12\sin s + 4\sin^2 s \end{bmatrix} ds.$$

Using the fact that

$$\lim_{t \to \infty} \frac{1}{t} \int_0^t \frac{dx}{a + b\sin x} = \frac{1}{\sqrt{a^2 - b^2}} \qquad \text{for all } a > b \ge 0,$$

we have, as $t \to \infty$,

$$\frac{1}{t}\langle M \rangle_t \to \begin{bmatrix} \dfrac{9}{\sqrt{32}} & 3 - \dfrac{9}{\sqrt{32}} \\[4pt] 3 - \dfrac{9}{\sqrt{32}} & \dfrac{9}{\sqrt{32}} \end{bmatrix},$$

a symmetric positive definite matrix. Hence assumption (i) of Theorem 5.3 is satisfied. To establish assumption (ii) of Theorem 5.3, it is sufficient to remark that the periodic functions in the integrand of $\langle M \rangle_t$ are bounded. Hence the normality and strong consistency of the estimate follow.
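The limit used in this example can be checked numerically: the average of $1/(a + b\sin x)$ over one period equals $1/\sqrt{a^2 - b^2}$, and the entries of the limiting matrix follow by averaging the integrand of $\langle M \rangle_t$ over $[0, 2\pi]$. A short deterministic check (the grid size is arbitrary):

```python
import numpy as np

# Average of 1/(a + b sin x) over one period vs 1/sqrt(a^2 - b^2), with a=6, b=2.
x = np.linspace(0.0, 2.0 * np.pi, 200_000, endpoint=False)  # uniform grid, one period
d = 6.0 + 2.0 * np.sin(x)
avg = np.mean(1.0 / d)                                 # should be 1/sqrt(32)

# Entries of the limit of (1/t) <M>_t, averaging the matrix integrand:
s11 = np.mean(9.0 / d)                                 # expected 9/sqrt(32)
s12 = np.mean((9.0 + 6.0 * np.sin(x)) / d)             # expected 3 - 9/sqrt(32)
s22 = np.mean((9.0 + 12.0 * np.sin(x) + 4.0 * np.sin(x) ** 2) / d)  # expected 9/sqrt(32)
```

The resulting matrix also has positive determinant, confirming positive definiteness.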
REFERENCES

Aase K.K. (1985) R&D projects analyzed by semimartingale methods. J. Appl. Prob. 22, 288-299.

Aase K.K. (1983) Recursive estimation in nonlinear time series models of autoregressive type. J. R. Statist. Soc. B 45, 228-237.

Aase K.K. (1982) Stochastic continuous-time model reference adaptive systems with decreasing gain. Adv. Appl. Prob. 14, 763-788.

Aase K.K. (1981) Model reference adaptive systems applied to regression theory. Statist. Neerlandica 35, 123-155.

Christopeit N. (1986) Quasi-least-squares estimation in semimartingale regression models. Stochastics 16, 255-278.

Elliott R.J. (1982) Stochastic Calculus and Applications. Springer-Verlag, New York.

Godambe V.P. (1960) An optimum property of regular maximum likelihood equations. Ann. Math. Statist. 31, 1208-1211.

Godambe V.P. and Thompson M.E. (1985) Logic of least squares revisited. Preprint.

Godambe V.P. (1985) The foundations of finite sample estimation in stochastic processes. Biometrika 72, 419-428.

Habib M.K. and Thavaneswaran A. (1986) Optimal estimation for semimartingale neuronal models. Preprint.

Hutton J.E. and Nelson P.I. (1986) Quasi-likelihood estimation for semimartingales. Stoch. Proc. Appl. 22, 245-257.

Kunita H. and Watanabe S. (1967) On square integrable martingales. Nagoya Math. J. 30, 209-245.

Lindsay B. (1985) Using empirical partially Bayes inference for increased efficiency. Ann. Statist. 13, 914-931.

Linkov Yu.N. (1982) Asymptotic properties of statistical estimators and tests for Markov processes. Theor. Prob. and Math. Statist. 25, 83-89.

Liptser R.S. and Shiryayev A.N. (1980) A strong law of large numbers for local martingales. Stochastics 3, 217-228.

Liptser R.S. and Shiryayev A.N. (1978) Statistics of Random Processes, Vol. 2. Springer-Verlag, New York.

Metivier M. (1982) Semimartingales. Walter de Gruyter, New York.

Spreij P. (1986) Recursive parameter estimation for counting processes with linear intensity. Stochastics 18, 277-312.

Thavaneswaran A. and Thompson M.E. (1986) Optimal estimation for semimartingales. J. Appl. Prob. 23, 409-417.

Thavaneswaran A. (1986) Model reference adaptive system estimates for counting processes. Statistica Neerlandica 40, 65-72.