INT. J. CONTROL,
2003, VOL. 76, NO. 12, 1159–1170
Local linearization filters for non-linear continuous-discrete state space models with multiplicative
noise
J. C. JIMENEZ†* and T. OZAKI‡
In this paper, the local linearization method for the approximate computation of the prediction and filtering estimates of
continuous-discrete state space models is extended to the general case of non-linear non-autonomous models with
multiplicative noise. The approximate prediction and filter estimates are obtained by applying the optimal linear filter
to the piecewise linear state space model that emerges from a local linearization of both the non-linear state equation and
the non-linear measurement equation. In addition, the solutions of the differential equations that describe the evolution
of the first two conditional moments between observations are obtained, and an algorithm for their numerical computation is also given. The performance of the LL filters is illustrated by means of numerical experiments.
1. Introduction
The estimation of the state of a continuous stochastic dynamical system from noisy discrete observations
taken on the state is of central importance to solve
diverse scientific and technological problems. The
major contribution to the solution of this estimation
problem is due to Kalman (1960, 1963) and Kalman
and Bucy (1961), who provided a sequential and computationally efficient solution to the linear filtering and prediction problems. However, optimal non-linear estimation is still an active area of research. In particular, the basic need for a good approximation to the exact solution of the non-linear filtering and prediction problems remains, and the subject continues to
receive considerable attention.
In the last 40 years, a large variety of approximate
non-linear filters have been considered (see Schwartz
and Stear 1968, Lainiotis 1974, Liang 1983, Daum
1986, and references therein), e.g. the extended
Kalman, the iterated extended Kalman, the Gaussian,
the modified Gaussian, etc. It is well known that, even though these classical recursive filters are properly defined, they are computationally unstable for diverse types of non-linear problems. This instability often becomes visible as a numerical explosion in the computation of the prediction or filtering estimates at
an instant of time (Ozaki 1993). Recently, the novel projection and the particle filter methods have been extended from continuous to continuous-discrete state space models. The first one was only briefly introduced in Brigo et al. (1999) and further study is still required for its efficient implementation in practice. The second one, introduced in Del Moral et al. (2001), has strong theoretical support, but the filter is computationally expensive in practice because of the intrinsic complexity of the Monte Carlo method. Moreover, the filter could also be computationally unstable, depending on the numerical method used to simulate the state of the model.

Received 1 February 2002. Revised and accepted 1 April 2003.
* Author for correspondence. e-mail: juan_carlos_js@hotmail.com
† Instituto de Cibernetica, Matematica y Fisica, Calle 15, No. 551 entre C y D, Vedado, Habana 10400, Ciudad de La Habana, Cuba.
‡ Department of Prediction and Control, The Institute of Statistical Mathematics, 4-6-7 Minami-Azabu, Minato-ku, Tokyo 106-8569, Japan.
To overcome the computational unstability of the
classical recursive filters, in Ozaki (1993) an alternative
approximate non-linear filter was introduced, not by
approximating continuous-time filter equations as
those cited in Schwartz and Stear (1968), Lainiotis
(1974) and Liang (1983), but by applying the discrete-time filtering equations to an approximate discrete-time
model for the original continuous-time dynamical
system. The approximate discrete time model is
obtained from a piece-wise linear discretization of the
non-linear state equation. The approximate non-linear
filter obtained in this way is called a local linearization
(LL) filter. Following this approach, but using a higher
order piece-wise linear discretization, Shoji (1998) introduced an LL filter that includes the original one as a
particular case. Both filters are restricted to autonomous
models with linear observation equations.
For the class of non-autonomous and non-linear continuous-discrete state space models, an LL filter was proposed by Jimenez (1996). In this case, the LL filter is obtained by approximating the expressions that define the extended Kalman filter for this type of model, the LL discretization of the prediction equation being the essential approximation. This LL filter reduces to
the original one for autonomous models with linear
observation equations.
The high stability of LL filters is the key of their
success in the solution of non-linear filtering problems
for which other filtering algorithms fail (Ozaki 1993,
International Journal of Control ISSN 0020–7179 print/ISSN 1366–5820 online © 2003 Taylor & Francis Ltd
http://www.tandf.co.uk/journals
DOI: 10.1080/0020717031000138214
1994, Jimenez 1996, Ozaki et al. 2000). Besides, it has
been essential for the construction of practical algorithms for the identification (parameter estimation) of
non-linear dynamical systems from noisy discrete observations (Ozaki 1994, Jimenez 1996, Shoji 1998, Ozaki et
al. 2000), which have been successfully applied in the
identification of a neurophysiological model from actual
EEG data (Valdes et al. 1999).
However, these LL filters have been constructed only for the class of dynamical systems with additive noise. Thus, there is a large variety of estimation problems involving stochastic differential equations (SDEs) with multiplicative noise for which the LL filters cannot be used. These include problems in population dynamics, protein kinetics, genetics, neuronal activity, investment finance, option pricing, satellite orbit stability, biological waste treatment, hydrology, etc.
The major difficulty in applying the LL filtering approach to that class of problems is the absence of
an LL scheme for the numerical integration of SDEs
with multiplicative noise, which is the key to the LL
approach explained above.
In this paper, an alternative heuristic for constructing LL filters is introduced, which allows the derivation
of approximate filters for models with multiplicative
noise. The central idea is obtaining an approximate filter
by applying the optimal linear filter to the piecewise
linear state space model that emerges from a local linearization of both the non-linear state equation and the
non-linear measurement equation.
The paper is organized as follows. In § 2, the optimal (minimum variance) linear filter for linear state space models with multiplicative noise is presented, which is essential for the construction of the LL filters in § 3. The solutions of the differential equations that describe the evolution of the LL filter estimates between observations are obtained in § 4, while algorithms for their numerical computation are given in § 5. In the last section, the effectiveness of the LL filters is illustrated by means of numerical experiments.
2. Optimal linear filter for linear state space models

Let the linear state space model be defined by the continuous state equation

    dx(t) = (A(t)x(t) + a(t)) \, dt + \sum_{i=1}^{m} (B_i(t)x(t) + b_i(t)) \, dw^i(t)        (1)

and the discrete observation equation

    z_{t_k} = C(t_k)x(t_k) + \sum_{i=1}^{n} D_i(t_k)x(t_k) \, \xi^i_{t_k} + g_{t_k},   for k = 0, 1, \ldots, N        (2)

where x(t) \in R^d is the state vector at the instant of time t, z_{t_k} \in R^r is the observation vector at the instant of time t_k, w is an m-dimensional Wiener process, and \{\xi_{t_k} : \xi_{t_k} \sim N(0, \Lambda), \Lambda = \mathrm{diag}(\lambda_1, \ldots, \lambda_n), k = 0, \ldots, N\} and \{g_{t_k} : g_{t_k} \sim N(0, \Sigma), k = 0, \ldots, N\} are sequences of i.i.d. random vectors.

Let \hat{x}_{t/\rho} = E(x(t) | Z_\rho) and P_{t/\rho} = E((x(t) - \hat{x}_{t/\rho})(x(t) - \hat{x}_{t/\rho})^\top | Z_\rho) for all \rho \le t, where E(\cdot) denotes mathematical expectation and Z_\rho = \{z_{t_k} : t_k \le \rho\}. Suppose that E(w(t)w^\top(t)) = tI, E(\xi^i_{t_k} g_{t_k}) = h^i(t_k), \hat{x}_{0/0} = E(x(t_0) | Z_{t_0}) < \infty and P_{0/0} = E((x(t_0) - \hat{x}_{0/0})(x(t_0) - \hat{x}_{0/0})^\top | Z_{t_0}) < \infty.

Theorem 1 (Jimenez and Ozaki 2002 a): The optimal (minimum variance) linear filter for the linear model (1)-(2) consists of equations of evolution for the conditional mean \hat{x}_{t/t} and covariance matrix P_{t/t}. Between observations, these satisfy the ordinary differential equations

    d\hat{x}_{t/t} = (A(t)\hat{x}_{t/t} + a(t)) \, dt        (3)

    dP_{t/t} = \{ A(t)P_{t/t} + P_{t/t}A^\top(t) + \sum_{i=1}^{m} ( B_i(t)(P_{t/t} + \hat{x}_{t/t}\hat{x}^\top_{t/t})B^\top_i(t) + B_i(t)\hat{x}_{t/t}b^\top_i(t) + b_i(t)\hat{x}^\top_{t/t}B^\top_i(t) + b_i(t)b^\top_i(t) ) \} \, dt        (4)

for all t \in [t_k, t_{k+1}). At each observation at t_{k+1}, they satisfy the difference equations

    \hat{x}_{t_{k+1}/t_{k+1}} = \hat{x}_{t_{k+1}/t_k} + K_{t_{k+1}} (z_{t_{k+1}} - C(t_{k+1})\hat{x}_{t_{k+1}/t_k})        (5)

    P_{t_{k+1}/t_{k+1}} = P_{t_{k+1}/t_k} - K_{t_{k+1}} C(t_{k+1}) P_{t_{k+1}/t_k}        (6)

where

    K_{t_{k+1}} = P_{t_{k+1}/t_k} C^\top(t_{k+1}) \{ C(t_{k+1}) P_{t_{k+1}/t_k} C^\top(t_{k+1}) + \Sigma + \sum_{i=1}^{n} \lambda_i D_i(t_{k+1})(P_{t_{k+1}/t_k} + \hat{x}_{t_{k+1}/t_k}\hat{x}^\top_{t_{k+1}/t_k})D^\top_i(t_{k+1}) + \sum_{i=1}^{n} D_i(t_{k+1}) \hat{x}_{t_{k+1}/t_k} (h^i(t_{k+1}))^\top + \sum_{i=1}^{n} h^i(t_{k+1}) \hat{x}^\top_{t_{k+1}/t_k} D^\top_i(t_{k+1}) \}^{-1}        (7)

is the filter gain. The predictions \hat{x}_{t/\rho} and P_{t/\rho} are computed, respectively, via expressions (3) and (4) with initial conditions \hat{x}_{\rho/\rho} and P_{\rho/\rho} for \rho < t.

Note that, for linear models (1)-(2) with a(t) \equiv 0 and B_i(t) \equiv D_j(t) \equiv 0 (for all i = 1, \ldots, m and j = 1, \ldots, n), the filter (3)-(6) reduces to the Kalman-Bucy filter for continuous-discrete models with additive noise.
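To make the measurement update concrete, equations (5)-(7) can be written as a short routine. The following is an illustrative sketch, not the authors' implementation; the array names (`xp`, `Pp`, etc.) and the small two-dimensional test values are our own.

```python
import numpy as np

def update(xp, Pp, z, C, D, lam, h, Sigma):
    """One measurement update (5)-(7) of the optimal linear filter.

    xp, Pp : predictions x^_{t_{k+1}/t_k} and P_{t_{k+1}/t_k}
    D, lam, h : lists with the D_i(t_{k+1}), lambda_i and h^i(t_{k+1})
    Sigma : covariance of the additive observation noise g_{t_k}
    """
    S = C @ Pp @ C.T + Sigma
    for Di, li, hi in zip(D, lam, h):
        S = S + li * Di @ (Pp + np.outer(xp, xp)) @ Di.T
        S = S + Di @ np.outer(xp, hi) + np.outer(hi, xp) @ Di.T
    K = Pp @ C.T @ np.linalg.inv(S)        # filter gain (7)
    xf = xp + K @ (z - C @ xp)             # mean update (5)
    Pf = Pp - K @ C @ Pp                   # covariance update (6)
    return xf, Pf

# a small two-dimensional illustration with one multiplicative term
xp = np.array([1.0, 0.5])
Pp = np.array([[0.2, 0.05], [0.05, 0.1]])
C = np.array([[1.0, 0.0]])
D = [np.array([[0.1, 0.0]])]
xf, Pf = update(xp, Pp, np.array([1.2]), C, D, [1.0],
                [np.zeros(1)], np.array([[0.5]]))
```

Note how the multiplicative observation terms enlarge the innovation covariance through the second moment P + x\hat{}x\hat{}^T, which is the only difference from the standard Kalman update.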
3. The LL filters for non-linear state space models

Let the state space model be defined by the continuous state equation

    dx(t) = f(t, x(t)) \, dt + \sum_{i=1}^{m} g_i(t, x(t)) \, dw^i(t),   for t \ge t_0        (8)

and the discrete observation equation

    z_{t_k} = h(t_k, x(t_k)) + \sum_{i=1}^{n} p_i(t_k, x(t_k)) \, \xi^i_{t_k} + e_{t_k},   for k = 0, 1, \ldots, N        (9)

where f, g_i, h and p_i are non-linear functions, \{\xi_{t_k} : \xi_{t_k} \sim N(0, \Lambda), \Lambda = \mathrm{diag}(\lambda_1, \ldots, \lambda_n), k = 0, \ldots, N\} and \{e_{t_k} : e_{t_k} \sim N(0, \Delta), k = 0, \ldots, N\} are sequences of i.i.d. random vectors, and \xi^i_{t_k} and e_{t_k} are uncorrelated for all i and k.

The functions f and g_i in (8) can be linearly approximated in two ways: by means of a truncated Taylor expansion or by a truncated Ito-Taylor expansion. In the first way, for example,

    f(t, x(t)) \approx f(s, u) + \frac{\partial f(s, u)}{\partial s}(t - s) + J_f(s, u)(x(t) - u)        (10)

while in the second one

    f(t, x(t)) \approx f(s, u) + \left( \frac{\partial f(s, u)}{\partial s} + \frac{1}{2} \sum_{k,l=1}^{d} [G(s, u)G^\top(s, u)]^{k,l} \frac{\partial^2 f(s, u)}{\partial u_k \partial u_l} \right)(t - s) + J_f(s, u)(x(t) - u)

where (s, u) \in R \times R^d, J_f(s, u) is the Jacobian of f evaluated at the point (s, u), and G(s, u) is the d \times m matrix with entries [G(s, u)]_{i,j} = g_{ij}(s, u). See Jimenez et al. (1999) for details.

Using these approximations for f and g_i, the solution of the non-linear SDE (8) can be approximated by the solution of the piecewise linear SDE

    dy^\beta(t) = (A^\beta(t_k, \hat{y}_{t_k/t_k}) y^\beta(t) + a^\beta(t, t_k, \hat{y}_{t_k/t_k})) \, dt + \sum_{i=1}^{m} (B^\beta_i(t_k, \hat{y}_{t_k/t_k}) y^\beta(t) + b^\beta_i(t, t_k, \hat{y}_{t_k/t_k})) \, dw^i(t)        (11)

for all t \in [t_k, t_{k+1}), starting at y^\beta(t_0) = \hat{y}_{0/0} = \hat{x}_{0/0}. Here, the symbol \beta indicates the type of approximation used: \beta = 1 for the Taylor approximation and \beta = 1.5 for the Ito-Taylor one. The remaining notations are \hat{y}_{t/\rho} = E(y^\beta(t) | Z_\rho),

    A^\beta(s, u) = J_f(s, u),   B^\beta_i(s, u) = J_{g_i}(s, u)

and a^\beta, b^\beta_i are defined, respectively, by

    a^1(t, s, u) = f(s, u) - J_f(s, u) u + \frac{\partial f(s, u)}{\partial s}(t - s)

    b^1_i(t, s, u) = g_i(s, u) - J_{g_i}(s, u) u + \frac{\partial g_i(s, u)}{\partial s}(t - s)

for \beta = 1, and by

    a^{1.5}(t, s, u) = a^1(t, s, u) + \frac{1}{2} \sum_{i,l=1}^{d} [G(s, u)G^\top(s, u)]^{i,l} \frac{\partial^2 f(s, u)}{\partial u_i \partial u_l}(t - s)

    b^{1.5}_i(t, s, u) = b^1_i(t, s, u) + \frac{1}{2} \sum_{j,l=1}^{d} [G(s, u)G^\top(s, u)]^{j,l} \frac{\partial^2 g_i(s, u)}{\partial u_j \partial u_l}(t - s)

for \beta = 1.5.

Likewise, the functions h and p_i can also be linearly approximated. Specifically, let us consider

    h(s, v) \approx h(s, u) + J_h(s, u)(v - u),   p_i(s, v) \approx p_i(s, u) + J_{p_i}(s, u)(v - u)

where (s, u) \in R \times R^d and J_h(s, u) and J_{p_i}(s, u) are, respectively, the derivatives of h and p_i with respect to v evaluated at the point (s, u). Using this approximation, the non-linear observation equation (9) could be rewritten as

    z_{t_{k+1}} = h(t_{k+1}, \hat{y}_{t_{k+1}/t_k}) + J_h(t_{k+1}, \hat{y}_{t_{k+1}/t_k})(y_{t_{k+1}} - \hat{y}_{t_{k+1}/t_k}) + \sum_{i=1}^{n} ( p_i(t_{k+1}, \hat{y}_{t_{k+1}/t_k}) + J_{p_i}(t_{k+1}, \hat{y}_{t_{k+1}/t_k})(y_{t_{k+1}} - \hat{y}_{t_{k+1}/t_k}) ) \xi^i_{t_{k+1}} + e_{t_{k+1}} + r(t_{k+1}, x_{t_{k+1}}, \hat{y}_{t_{k+1}/t_k}, \xi^i_{t_{k+1}})        (12)

for all k, where y_{t_{k+1}} \equiv y^\beta(t_{k+1}), x_{t_{k+1}} \equiv x(t_{k+1}) and
    r(t_{k+1}, x_{t_{k+1}}, \hat{y}_{t_{k+1}/t_k}, \xi^i_{t_{k+1}}) = h(t_{k+1}, x_{t_{k+1}}) - h(t_{k+1}, \hat{y}_{t_{k+1}/t_k}) - J_h(t_{k+1}, \hat{y}_{t_{k+1}/t_k})(y_{t_{k+1}} - \hat{y}_{t_{k+1}/t_k}) + \sum_{i=1}^{n} ( p_i(t_{k+1}, x_{t_{k+1}}) - p_i(t_{k+1}, \hat{y}_{t_{k+1}/t_k}) - J_{p_i}(t_{k+1}, \hat{y}_{t_{k+1}/t_k})(y_{t_{k+1}} - \hat{y}_{t_{k+1}/t_k}) ) \xi^i_{t_{k+1}}

Moreover, it is not difficult to see that the expression

    z_{t_{k+1}} = h(t_{k+1}, \hat{y}_{t_{k+1}/t_k}) + J_h(t_{k+1}, \hat{y}_{t_{k+1}/t_k})(y_{t_{k+1}} - \hat{y}_{t_{k+1}/t_k}) + \sum_{i=1}^{n} J_{p_i}(t_{k+1}, \hat{y}_{t_{k+1}/t_k}) y_{t_{k+1}} \xi^i_{t_{k+1}} + g_{t_{k+1}} + r(t_{k+1}, x_{t_{k+1}}, \hat{y}_{t_{k+1}/t_k}, \xi^i_{t_{k+1}})        (13)

is, in a weak sense, equivalent to the expression (12), where \{g_{t_{k+1}} : g_{t_{k+1}} \sim N(0, \Sigma(t_{k+1}))\} is a sequence of i.i.d. random vectors with E(\xi^i_{t_k} g_{t_k}) = \vartheta^i(t_k),

    \Sigma(t_{k+1}) = \Delta + \sum_{i=1}^{n} \frac{1}{\lambda_i} \vartheta^i(t_{k+1})(\vartheta^i(t_{k+1}))^\top

and

    \vartheta^i(t_{k+1}) = \lambda_i ( p_i(t_{k+1}, \hat{y}_{t_{k+1}/t_k}) - J_{p_i}(t_{k+1}, \hat{y}_{t_{k+1}/t_k}) \hat{y}_{t_{k+1}/t_k} )

From (13) it is obtained that

    u_{t_{k+1}} = J_h(t_{k+1}, \hat{y}_{t_{k+1}/t_k}) y_{t_{k+1}} + \sum_{i=1}^{n} J_{p_i}(t_{k+1}, \hat{y}_{t_{k+1}/t_k}) y_{t_{k+1}} \xi^i_{t_{k+1}} + g_{t_{k+1}}        (14)

where

    u_{t_{k+1}} = z_{t_{k+1}} - h(t_{k+1}, \hat{y}_{t_{k+1}/t_k}) + J_h(t_{k+1}, \hat{y}_{t_{k+1}/t_k}) \hat{y}_{t_{k+1}/t_k} - r(t_{k+1}, x_{t_{k+1}}, \hat{y}_{t_{k+1}/t_k}, \xi^i_{t_{k+1}})        (15)

Then, the local linearization filter is obtained in each interval [t_k, t_{k+1}] in two steps: (a) by the application of Theorem 1 to the linear state space model defined by the state equation (11) and the observation equation (14), and (b) by the substitution of (15) with r(t_{k+1}, x_{t_{k+1}}, \hat{y}_{t_{k+1}/t_k}, \xi^i_{t_{k+1}}) = 0 in the expression for the filter estimate \hat{y}_{t_{k+1}/t_{k+1}} obtained in the previous step. Therefore, we have the following definition.

Definition 1: The local linearization filter for the non-linear state space model (8)-(9) is defined, between observations, by

    d\hat{y}_{t/t} = (A^\beta(t_k, \hat{y}_{t_k/t_k}) \hat{y}_{t/t} + a^\beta(t, t_k, \hat{y}_{t_k/t_k})) \, dt        (16)

    dP_{t/t} = \{ A^\beta(t_k, \hat{y}_{t_k/t_k}) P_{t/t} + P_{t/t} (A^\beta(t_k, \hat{y}_{t_k/t_k}))^\top + \sum_{i=1}^{m} ( B^\beta_i(t_k, \hat{y}_{t_k/t_k})(P_{t/t} + \hat{y}_{t/t}(\hat{y}_{t/t})^\top)(B^\beta_i(t_k, \hat{y}_{t_k/t_k}))^\top + B^\beta_i(t_k, \hat{y}_{t_k/t_k}) \hat{y}_{t/t} (b^\beta_i(t, t_k, \hat{y}_{t_k/t_k}))^\top + b^\beta_i(t, t_k, \hat{y}_{t_k/t_k})(\hat{y}_{t/t})^\top (B^\beta_i(t_k, \hat{y}_{t_k/t_k}))^\top + b^\beta_i(t, t_k, \hat{y}_{t_k/t_k})(b^\beta_i(t, t_k, \hat{y}_{t_k/t_k}))^\top ) \} \, dt        (17)

for all t \in [t_k, t_{k+1}), and by

    \hat{y}_{t_{k+1}/t_{k+1}} = \hat{y}_{t_{k+1}/t_k} + K_{t_{k+1}} (z_{t_{k+1}} - h(t_{k+1}, \hat{y}_{t_{k+1}/t_k}))        (18)

    P_{t_{k+1}/t_{k+1}} = P_{t_{k+1}/t_k} - K_{t_{k+1}} J_h(t_{k+1}, \hat{y}_{t_{k+1}/t_k}) P_{t_{k+1}/t_k}        (19)

at each observation at t_{k+1}. The predictions \hat{y}_{t/\rho} and P_{t/\rho} are computed, respectively, via expressions (16) and (17) with initial conditions \hat{y}_{\rho/\rho} and P_{\rho/\rho} for \rho < t. Here

    K_{t_{k+1}} = P_{t_{k+1}/t_k} J^\top_h(t_{k+1}, \hat{y}_{t_{k+1}/t_k}) \{ J_h(t_{k+1}, \hat{y}_{t_{k+1}/t_k}) P_{t_{k+1}/t_k} J^\top_h(t_{k+1}, \hat{y}_{t_{k+1}/t_k}) + \Sigma(t_{k+1}) + \sum_{i=1}^{n} \lambda_i J_{p_i}(t_{k+1}, \hat{y}_{t_{k+1}/t_k})(P_{t_{k+1}/t_k} + \hat{y}_{t_{k+1}/t_k}(\hat{y}_{t_{k+1}/t_k})^\top) J^\top_{p_i}(t_{k+1}, \hat{y}_{t_{k+1}/t_k}) + \sum_{i=1}^{n} J_{p_i}(t_{k+1}, \hat{y}_{t_{k+1}/t_k}) \hat{y}_{t_{k+1}/t_k} (\vartheta^i(t_{k+1}))^\top + \sum_{i=1}^{n} \vartheta^i(t_{k+1})(\hat{y}_{t_{k+1}/t_k})^\top J^\top_{p_i}(t_{k+1}, \hat{y}_{t_{k+1}/t_k}) \}^{-1}        (20)

is the filter gain and P_{t/\rho} = E((y^\beta(t) - \hat{y}_{t/\rho})(y^\beta(t) - \hat{y}_{t/\rho})^\top | Z_\rho) for \rho \le t.
For linear state space models with multiplicative noise, the LL filter (16)-(19) reduces to the optimal linear filter (3)-(6), while for linear models with additive noise and a(t) \equiv 0 and B_i(t) \equiv D_j(t) \equiv 0 (for all i = 1, \ldots, m and j = 1, \ldots, n), the LL filter (16)-(19) reduces to the Kalman-Bucy filter.

Besides, for non-linear state space models with additive noise, this filter with \beta = 1 and \beta = 1.5 reduces, respectively, to the LL filters introduced by Ozaki (1993) and by Shoji (1998) for autonomous models with linear observation equations and functions f with non-singular Jacobian. With \beta = 1, the LL filter (16)-(19) reduces to the LL filter introduced by Jimenez (1996) for non-autonomous and non-linear models.
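The whole recursion of Definition 1 can be sketched for a scalar example. This is our own illustration, not the authors' code: it takes a linear observation with additive noise (so the gain (20) loses its p_i terms), and, for brevity, integrates the moment equations (16)-(17) with small Euler steps instead of the exact matrix-exponential formulas developed in sections 4 and 5; the model functions and test data are assumptions for the demo.

```python
import numpy as np

# Scalar sketch of the beta = 1 LL filter (16)-(19) for
#   dx = f(x) dt + g(x) dw,   z_k = x(t_k) + e_k.
f, df = lambda x: -x + np.sin(x), lambda x: -1.0 + np.cos(x)
g, dg = lambda x: 0.2 + 0.3 * x, lambda x: 0.3
R = 0.1                                   # observation noise variance

def ll_filter(zs, dt_obs, x0, P0, n_sub=100):
    x, P, out = x0, P0, []
    dt = dt_obs / n_sub
    for z in zs:
        # linearization frozen at (t_k, y^_{t_k/t_k})
        A, B = df(x), dg(x)
        a = f(x) - A * x                  # a^1 (autonomous f)
        b = g(x) - B * x                  # b^1 (autonomous g)
        for _ in range(n_sub):            # prediction, eqs (16)-(17)
            P = P + (2*A*P + B*B*(P + x*x) + 2*B*x*b + b*b) * dt
            x = x + (A*x + a) * dt
        K = P / (P + R)                   # gain (20) for h(x) = x
        x = x + K * (z - x)               # update (18)
        P = P - K * P                     # update (19)
        out.append((x, P))
    return out

est = ll_filter(zs=np.array([0.9, 0.7, 0.6, 0.4]), dt_obs=0.1,
                x0=1.0, P0=0.1)
```

The key design point of the LL approach is visible in the loop: the coefficients A, B, a, b are frozen at the last filter estimate and only refreshed at observation times, which is what keeps the scheme stable.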
4. Solution of the equations for the first two conditional moments between observations

This section deals with the problem of solving the differential equations (16) and (17) that describe the evolution of the predictions \hat{x}_{t/t_k} and P_{t/t_k}, respectively. Here, the symbols vec, \oplus and \otimes will denote the vectorization operator, the Kronecker sum and the Kronecker product, respectively (see Magnus and Neudecker 1988 for definitions).

Theorem 2: Let A and B_i be constant matrices. Let a(t) = a_0 + a_1 t and b_i(t) = b_{i,0} + b_{i,1} t, where a_0, a_1, b_{i,0} and b_{i,1} are constant vectors. The general solution of the differential equations

    d\hat{x}_{t/t_k} = (A \hat{x}_{t/t_k} + a(t)) \, dt        (21)

    dP_{t/t_k} = \{ A P_{t/t_k} + P_{t/t_k} A^\top + \sum_{i=1}^{m} B_i (P_{t/t_k} + \hat{x}_{t/t_k}\hat{x}^\top_{t/t_k}) B^\top_i + \sum_{i=1}^{m} ( B_i \hat{x}_{t/t_k} b^\top_i(t) + b_i(t) \hat{x}^\top_{t/t_k} B^\top_i ) + \sum_{i=1}^{m} b_i(t) b^\top_i(t) \} \, dt        (22)

is given by

    \hat{x}_{t/t_k} = \hat{x}_{t_k/t_k} + \int_0^{t-t_k} \exp(A(t - t_k - s)) (A \hat{x}_{t_k/t_k} + a(s + t_k)) \, ds        (23)

    vec(P_{t/t_k}) = \exp(\mathcal{A}(t - t_k)) \left( vec(P_{t_k/t_k}) + \sum_{j=1}^{8} \int_0^{t-t_k} \exp(-\mathcal{A}s) B_j \exp(A_j s) C_j \, e_j(s) \, ds \right)        (24)

for all t \ge t_k, where \mathcal{A} = A \oplus A + \sum_{i=1}^{m} B_i \otimes B_i, and A_j, B_j, C_j and e_j are the matrices defined in table 1.

The following lemma will be useful to demonstrate the above theorem.

Lemma 1 (Van Loan 1978): Let A_1, A_2, A_3 and A_4 be square matrices, n_1, n_2, n_3 and n_4 be positive integers, and set m to be their sum. If the m \times m block triangular matrix C is defined by

    C = \begin{bmatrix} A_1 & B_1 & C_1 & D_1 \\ 0 & A_2 & B_2 & C_2 \\ 0 & 0 & A_3 & B_3 \\ 0 & 0 & 0 & A_4 \end{bmatrix}   (blocks of sizes n_1, n_2, n_3, n_4)

then for s \ge 0

    \begin{bmatrix} F_1(s) & G_1(s) & H_1(s) & K_1(s) \\ 0 & F_2(s) & G_2(s) & H_2(s) \\ 0 & 0 & F_3(s) & G_3(s) \\ 0 & 0 & 0 & F_4(s) \end{bmatrix} = \exp(sC)        (25)

where

    F_j(s) = \exp(A_j s),   for j = 1, 2, 3, 4

    G_j(s) = \int_0^s \exp(A_j(s-u)) B_j \exp(A_{j+1}u) \, du,   for j = 1, 2, 3

    H_j(s) = \int_0^s \exp(A_j(s-u)) C_j \exp(A_{j+2}u) \, du + \int_0^s \int_0^u \exp(A_j(s-u)) B_j \exp(A_{j+1}(u-r)) B_{j+1} \exp(A_{j+2}r) \, dr \, du,   for j = 1, 2        (26)

    K_1(s) = \int_0^s \exp(A_1(s-u)) D_1 \exp(A_4 u) \, du + \int_0^s \int_0^u \exp(A_1(s-u)) [ C_1 \exp(A_3(u-r)) B_3 + B_1 \exp(A_2(u-r)) C_2 ] \exp(A_4 r) \, dr \, du + \int_0^s \int_0^u \int_0^r \exp(A_1(s-u)) B_1 \exp(A_2(u-r)) B_2 \exp(A_3(r-w)) B_3 \exp(A_4 w) \, dw \, dr \, du

Proof of Theorem 2: The general solution of the linear equation (21) is very well known (Bellman 1970). It is

    \hat{x}_{t/t_k} = \exp(A(t-t_k)) \left( \hat{x}_{t_k/t_k} + \int_{t_k}^{t} \exp(-A(s-t_k)) a(s) \, ds \right) = \exp(A(t-t_k)) \left( \hat{x}_{t_k/t_k} + \int_0^{t-t_k} \exp(-As) a(s+t_k) \, ds \right)

The expression (23) is obtained by replacing a(s + t_k) in the above expression by a(s + t_k) + A\hat{x}_{t_k/t_k} - A\hat{x}_{t_k/t_k}, and by using the identity

    \int_0^{t-t_k} \exp(-As) \, ds \, A = -(\exp(-A(t-t_k)) - I)

where I is the identity matrix.
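Lemma 1 is what makes the integral expressions above computable: each convolution integral is read off as a block of one matrix exponential. The following is our own quick numerical check of the 2 x 2 block case (the G_1 block), with a homemade scaling-and-squaring exponential so that the snippet stays self-contained; the random test matrices are assumptions for the demo.

```python
import numpy as np

def expm(M, terms=20, squarings=10):
    # plain scaling-and-squaring Taylor approximation of exp(M)
    X = M / 2.0**squarings
    E, T = np.eye(len(M)), np.eye(len(M))
    for k in range(1, terms):
        T = T @ X / k
        E = E + T
    for _ in range(squarings):
        E = E @ E
    return E

rng = np.random.default_rng(0)
d = 3
A1, B1, A2 = (rng.standard_normal((d, d)) for _ in range(3))
s = 0.7

# upper-right block of exp(s [[A1, B1], [0, A2]]) ...
M = np.zeros((2 * d, 2 * d))
M[:d, :d], M[:d, d:], M[d:, d:] = A1, B1, A2
G1 = expm(s * M)[:d, d:]

# ... equals int_0^s exp(A1 (s-u)) B1 exp(A2 u) du (trapezoid rule)
us, du = np.linspace(0.0, s, 2001, retstep=True)
vals = [expm(A1 * (s - u)) @ B1 @ expm(A2 * u) for u in us]
quad = (sum(vals) - 0.5 * (vals[0] + vals[-1])) * du
err = np.max(np.abs(G1 - quad))
```

The same device with three and four diagonal blocks yields the H and K blocks used below.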
Table 1. Values of A_j, B_j, C_j and e_j of the expression (24). Here

    \tilde{C} = \begin{bmatrix} A & a_1 & A\hat{x}_{t_k/t_k} + a(t_k) \\ 0 & 0 & 1 \\ 0 & 0 & 0 \end{bmatrix} \in R^{(d+2)\times(d+2)}

L^\top = [I_d \;\; 0_{d\times 2}] \in R^{d\times(d+2)}, R^\top = [0_{1\times(d+1)} \;\; 1] \in R^{1\times(d+2)} and I_m is the m \times m identity matrix.

    j = 1:  A_1 = \tilde{C} \oplus \tilde{C},   B_1 = \sum_{i=1}^{m} (B_i \otimes B_i)(L^\top \otimes L^\top),   C_1 = vec(RR^\top),   e_1(s) = 1

    j = 2:  A_2 = I_d \otimes \tilde{C},   B_2 = \sum_{i=1}^{m} (B_i \otimes B_i)(I_d \otimes L^\top),   C_2 = vec(R\hat{x}^\top_{t_k/t_k}),   e_2(s) = 1

    j = 3:  A_3 = \tilde{C} \otimes I_d,   B_3 = \sum_{i=1}^{m} (B_i \otimes B_i)(L^\top \otimes I_d),   C_3 = vec(\hat{x}_{t_k/t_k}R^\top),   e_3(s) = 1

    j = 4:  A_4 = \tilde{C} \otimes I_{d+2},   B_4 = \sum_{i=1}^{m} (b_i(t_k) \otimes B_i + B_i \otimes b_i(t_k))(L^\top \otimes R^\top),   C_4 = vec(I_{d+2}),   e_4(s) = 1

    j = 5:  A_5 = 0,   B_5 = \sum_{i=1}^{m} ( (B_i \otimes B_i) vec(\hat{x}_{t_k/t_k}\hat{x}^\top_{t_k/t_k}) + (b_i(t_k) \otimes B_i + B_i \otimes b_i(t_k)) vec(\hat{x}_{t_k/t_k}) + vec(b_i(t_k)b^\top_i(t_k)) ),   C_5 = 1,   e_5(s) = 1

    j = 6:  A_6 = \tilde{C} \otimes I_{d+2},   B_6 = \sum_{i=1}^{m} (b_{i,1} \otimes B_i + B_i \otimes b_{i,1})(L^\top \otimes R^\top),   C_6 = vec(I_{d+2}),   e_6(s) = s

    j = 7:  A_7 = 0,   B_7 = vec\left( \sum_{i=1}^{m} (b_i(t_k)b^\top_{i,1} + b_{i,1}b^\top_i(t_k)) \right) + \sum_{i=1}^{m} (b_{i,1} \otimes B_i + B_i \otimes b_{i,1}) vec(\hat{x}_{t_k/t_k}),   C_7 = 1,   e_7(s) = s

    j = 8:  A_8 = 0,   B_8 = vec\left( \sum_{i=1}^{m} b_{i,1}b^\top_{i,1} \right),   C_8 = 1,   e_8(s) = s^2
Applying the operator vec to the equation (22), the linear equation

    d\,vec(P_{t/t_k}) = (\mathcal{B}(t) + \mathcal{A}\,vec(P_{t/t_k})) \, dt

is obtained, whose solution is given by the expression

    vec(P_{t/t_k}) = \exp(\mathcal{A}(t-t_k)) \left( vec(P_{t_k/t_k}) + \int_0^{t-t_k} \exp(-\mathcal{A}s) \mathcal{B}(s+t_k) \, ds \right)        (27)

where

    \mathcal{A} = A \oplus A + \sum_{i=1}^{m} B_i \otimes B_i

and

    \mathcal{B}(t) = \sum_{i=1}^{m} (B_i \otimes B_i) vec(\hat{x}_{t/t_k}\hat{x}^\top_{t/t_k}) + \sum_{i=1}^{m} vec(b_i(t)b^\top_i(t)) + \sum_{i=1}^{m} (b_i(t) \otimes B_i + B_i \otimes b_i(t)) vec(\hat{x}_{t/t_k})

Rewriting the expression (23) as

    \hat{x}_{t/t_k} = \hat{x}_{t_k/t_k} + \int_0^{t-t_k} \exp(A(t-t_k-s)) \, ds \, (A\hat{x}_{t_k/t_k} + a(t_k)) + \int_0^{t-t_k} \int_0^s \exp(A(t-t_k-s)) \, du \, ds \, a_1

and using Lemma 1, it is obtained that

    \hat{x}_{t/t_k} = \hat{x}_{t_k/t_k} + L^\top \exp((t-t_k)\tilde{C}) R        (28)

where

    \tilde{C} = \begin{bmatrix} A & a_1 & A\hat{x}_{t_k/t_k} + a(t_k) \\ 0 & 0 & 1 \\ 0 & 0 & 0 \end{bmatrix} \in R^{(d+2)\times(d+2)}

L^\top = [I_d \;\; 0_{d\times 2}] \in R^{d\times(d+2)} and R^\top = [0_{1\times(d+1)} \;\; 1] \in R^{1\times(d+2)}.

Substituting (28) in (27), and after some algebraic manipulations, the expression (24) is obtained.  □
5. Numerical computation of the prediction estimates

In the previous section, expressions for the predictions were obtained. However, in general, these expressions are difficult to implement in practice by means of a computer program. In this section, simple algebraic expressions are introduced as a computationally feasible alternative to (23) and (24).

Theorem 3: The expressions (23) and (24) are equivalent to

    \hat{x}_{t/t_k} = \hat{x}_{t_k/t_k} + h(t)        (29)

and

    vec(P_{t/t_k}) = F_1(t) vec(P_{t_k/t_k}) + \sum_{j=1}^{4} G_j(t) C_j + K_5(t) + H_6(t) C_6        (30)

respectively. Here, h is defined by the matrix identity

    \begin{bmatrix} F(t) & g_1(t) & h(t) \\ 0 & 1 & g_2(t) \\ 0 & 0 & 1 \end{bmatrix} = \exp((t-t_k)\tilde{C}),   with \tilde{C} = \begin{bmatrix} A & a_1 & A\hat{x}_{t_k/t_k} + a(t_k) \\ 0 & 0 & 1 \\ 0 & 0 & 0 \end{bmatrix} \in R^{(d+2)\times(d+2)}

F_1 and the G_j by

    \begin{bmatrix} F_j(t) & G_j(t) \\ 0 & F'_j(t) \end{bmatrix} = \exp((t-t_k)J_j),   with J_j = \begin{bmatrix} \mathcal{A} & B_j \\ 0 & A_j \end{bmatrix},   for j = 1, \ldots, 4

H_6 by

    \begin{bmatrix} F_6(t) & G_6(t) & H_6(t) \\ 0 & F'_6(t) & G'_6(t) \\ 0 & 0 & F''_6(t) \end{bmatrix} = \exp((t-t_k)J_6),   with J_6 = \begin{bmatrix} \mathcal{A} & B_6 & 0 \\ 0 & A_6 & I \\ 0 & 0 & A_6 \end{bmatrix}

and K_5 by

    \begin{bmatrix} F_5(t) & G_5(t) & H_5(t) & K_5(t) \\ 0 & F'_5(t) & G'_5(t) & H'_5(t) \\ 0 & 0 & F''_5(t) & G''_5(t) \\ 0 & 0 & 0 & F'''_5(t) \end{bmatrix} = \exp((t-t_k)J_5),   with J_5 = \begin{bmatrix} \mathcal{A} & B_8 & B_7 & B_5 \\ 0 & 0 & 2 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & 0 & 0 \end{bmatrix}

The matrices \mathcal{A}, A_j, B_j and C_j are defined as in Theorem 2.

Proof: The expression (29) is straightforwardly derived by rewriting (23) as

    \hat{x}_{t/t_k} = \hat{x}_{t_k/t_k} + \int_0^{t-t_k} \exp(A(t-t_k-s)) \, ds \, (A\hat{x}_{t_k/t_k} + a(t_k)) + \int_0^{t-t_k} \int_0^s \exp(A(t-t_k-s)) \, du \, ds \, a_1

and using Lemma 1, with s = t - t_k, A_1 = A, B_1 = a_1, B_2 = 1, C_1 = A\hat{x}_{t_k/t_k} + a(t_k) and A_2 = A_3 = 0 in (25).

By setting s = t - t_k, A_1 = \mathcal{A}, B_1 = B_i and A_2 = A_i in Lemma 1, it is obtained that

    \int_0^{t-t_k} \exp(\mathcal{A}(t-t_k-s)) B_i \exp(A_i s) C_i \, ds = G_i(t) C_i

for all i = 1, \ldots, 4. Obviously, \exp((t-t_k)\mathcal{A}) = F_1(t). Likewise, using the matrix identities

    \int_0^{t-t_k} \exp(\mathcal{A}(t-t_k-s)) \, s \, ds = \int_0^{t-t_k} \int_0^s \exp(\mathcal{A}(t-t_k-s)) \, du \, ds

    \int_0^{t-t_k} \exp(\mathcal{A}(t-t_k-s)) \, s^2 \, ds = 2 \int_0^{t-t_k} \int_0^s \int_0^u \exp(\mathcal{A}(t-t_k-s)) \, dr \, du \, ds

    \int_0^{t-t_k} \exp(\mathcal{A}(t-t_k-s)) B_6 \exp(A_6 s) \, s \, ds = \int_0^{t-t_k} \int_0^s \exp(\mathcal{A}(t-t_k-s)) B_6 \exp(A_6 s) \, du \, ds

and Lemma 1, the last two terms of expression (30) are obtained.  □
In this way, the numerical computation of the conditional moments \hat{x}_{t/t_k} and P_{t/t_k} is reduced to the use of a convenient algorithm to compute matrix exponentials, e.g. those based on rational Padé approximations (Golub and Van Loan 1996), the Schur decomposition (Golub and Van Loan 1996) or Krylov subspace methods (Hochbruck and Lubich 1997). For a recent review see Sidje (1998). The selection of one of them will mainly depend on the size and structure of the matrices \tilde{C} and J_i. In many cases, it is enough to use the algorithm developed by Van Loan (1978), which takes advantage of the special structure of these matrices. However, for state space models that involve more than four state equations the matrices J_i become large. In that case, the Krylov subspace methods provide the most efficient and accurate algorithms to compute matrix exponentials. See Jimenez (2002) for more details.
Moreover, in the case of autonomous state space models, simpler expressions than (29)-(30) can be used to evaluate the solution of (16)-(17). That is:

Theorem 4 (Jimenez and Ozaki, 2002 a): For equations (21)-(22) with a(t) = a and b_i(t) = b_i, the expressions (23) and (24) are equivalent to

    \hat{x}_{t/t_k} = \hat{x}_{t_k/t_k} + g(t)        (31)

and

    vec(P_{t/t_k}) = F_1(t) vec(P_{t_k/t_k}) + \sum_{j=1}^{5} G_j(t) C_j        (32)

respectively. Here, the vector g is defined by the matrix identity

    \begin{bmatrix} F(t) & g(t) \\ 0 & 1 \end{bmatrix} = \exp((t-t_k)\tilde{C}),   with \tilde{C} = \begin{bmatrix} A & A\hat{x}_{t_k/t_k} + a \\ 0 & 0 \end{bmatrix} \in R^{(d+1)\times(d+1)}

and the matrices F_1, G_j by

    \begin{bmatrix} F_j(t) & G_j(t) \\ 0 & F'_j(t) \end{bmatrix} = \exp((t-t_k)J_j),   with J_j = \begin{bmatrix} \mathcal{A} & B_j \\ 0 & A_j \end{bmatrix}

and \mathcal{A} = A \oplus A + \sum_{i=1}^{m} B_i \otimes B_i. The matrices A_j, B_j and C_j are defined in Table 2.†
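As a sanity check of formula (31), the following sketch builds \tilde{C}, exponentiates it and compares the resulting prediction with a brute-force Euler integration of the mean equation (21). The matrices and values are our own, and a series exponential keeps the snippet self-contained.

```python
import numpy as np

def expm(M, terms=25, squarings=10):
    # scaling-and-squaring Taylor approximation of exp(M)
    X = M / 2.0**squarings
    E, T = np.eye(len(M)), np.eye(len(M))
    for k in range(1, terms):
        T = T @ X / k
        E = E + T
    for _ in range(squarings):
        E = E @ E
    return E

A = np.array([[-0.5, 1.0], [0.0, -0.2]])
a = np.array([0.1, 0.05])
xh = np.array([1.0, -1.0])          # x^_{t_k/t_k}
t = 0.5                             # t - t_k

# expression (31): g(t) is the upper-right block of exp(t * Ctilde)
Ct = np.zeros((3, 3))
Ct[:2, :2], Ct[:2, 2] = A, A @ xh + a
x_pred = xh + expm(t * Ct)[:2, 2]

# reference: Euler integration of the mean equation (21)
x_eul, dt = xh.copy(), 1.0e-4
for _ in range(int(round(t / dt))):
    x_eul = x_eul + (A @ x_eul + a) * dt
diff = np.max(np.abs(x_pred - x_eul))
```

One matrix exponential thus replaces thousands of integration steps, which is the practical point of Theorems 3 and 4.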
6. Numerical experiments

In this section, the performance of the LL filters introduced above is illustrated by means of some numerical experiments. Specifically, the LL filter estimates \hat{x}_{t_k/t_k} and P_{t_k/t_k} are compared with the exact first two conditional moments of the solution of two non-linear state space models with multiplicative noise.

† The matrices \mathcal{A}, A_1, B_1 and B_5 differ from those that appear in Jimenez and Ozaki (2002 a) because of some print mistakes in that paper.

For \beta = 1, \hat{x}_{t_k/t_k} and P_{t_k/t_k} are computed by the equations (18) and (19), where the predictions \hat{x}_{t_{k+1}/t_k} and P_{t_{k+1}/t_k}, defined by (16) and (17), are computed by means of the expressions (31) and (32).

Example 1: Let the non-linear state space model with multiplicative noise

    dx_1 = (\alpha + \kappa x_1) \, dt + \sigma \sqrt{x_1} \, dw^1        (33)

    dx_2 = \gamma x_2^2 \, dt + \eta x_1^2 \, dw^2        (34)

    z_{t_k} = x_2(t_k) - 0.001 x_2^3(t_k) + (x_2(t_k) - 0.01 x_2^2(t_k)) \xi_{t_k} + e_{t_k}        (35)

where \alpha = 0.0025, \kappa = 0.0175, \sigma = 0.1, \gamma = 0.01, \eta = 0.05, \{\xi_{t_k} : \xi_{t_k} \sim N(0, \tau^2)\} and \{e_{t_k} : e_{t_k} \sim N(0, \rho^2)\} are sequences of i.i.d. random variables, and E(\xi_{t_k} e_{t_k}) = 0 for all k.

Equation (33) has been well studied in the framework of financial modelling by Cox et al. (1985). The exact first two conditional moments of the solution of this equation are also very well known (Bibby and Sorensen 1995).

Figure 1 shows a realization of the solution of the state equations (33)-(34) computed by the explicit Euler-Maruyama scheme (Kloeden and Platen 1995) at the instants of time \tau_j = j\delta, with \delta = 0.001 and j = 0, \ldots, 4000.
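A realization like the one in figure 1 can be generated with an Euler-Maruyama recursion. This is our own sketch of the standard scheme, with the parameter symbols as reconstructed above (the printed Greek letters did not survive in this copy); the random seed and initial values are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
alpha, kappa, sigma = 0.0025, 0.0175, 0.1    # equation (33)
gamma, eta = 0.01, 0.05                      # equation (34)
delta, n = 0.001, 4000                       # step size and number of steps

x1, x2 = np.empty(n + 1), np.empty(n + 1)
x1[0], x2[0] = 0.1, 0.1                      # illustrative initial state
for j in range(n):
    dw1, dw2 = rng.normal(0.0, np.sqrt(delta), size=2)
    # max(x1, 0) guards the square root against tiny negative excursions
    x1[j + 1] = x1[j] + (alpha + kappa * x1[j]) * delta \
        + sigma * np.sqrt(max(x1[j], 0.0)) * dw1
    x2[j + 1] = x2[j] + gamma * x2[j]**2 * delta + eta * x1[j]**2 * dw2
```

The observations (35) are then obtained by sampling `x2` every h/delta steps and adding the measurement noises.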
Figure 2 shows a realization of the observed equation (35) with \tau^2 = 0 and \rho^2 = 1. Figure 3 (top) and (bottom) present, respectively, the first and second exact conditional moments of the variable x_1 together with the estimates \hat{x}_{t_k/t_k} and P_{t_k/t_k} for that variable. \hat{x}_{t_k/t_k} and P_{t_k/t_k} were estimated from the set of observations \{z_{t_k}\}_{k=1,\ldots,N}, where t_k = kh, h = 0.01 and N = 400.

In figure 4 a realization of the observed equation (35) with \tau^2 = 0.001 and \rho^2 = 0 is displayed. Figure 5 (top) and (bottom) show, respectively, the first and second exact conditional moments of the variable x_1 together with the estimates \hat{x}_{t_k/t_k} and P_{t_k/t_k} for that variable.

Figure 6 presents a realization of the observed equation (35) with \tau^2 = 0.001 and \rho^2 = 1. Figure 7 (top) and (bottom) show, respectively, the first and second exact conditional moments of the variable x_1 together with the estimates \hat{x}_{t_k/t_k} and P_{t_k/t_k} for that variable.

Note that, in all cases, there is no significant difference between the exact and the estimated values of the first two conditional moments.

Example 2: Consider the non-linear state space model with multiplicative noise
Table 2. Values of A_j, B_j and C_j of the expression (32). Here

    \tilde{C} = \begin{bmatrix} A & A\hat{x}_{t_k/t_k} + a \\ 0 & 0 \end{bmatrix} \in R^{(d+1)\times(d+1)}

L^\top = [I_d \;\; 0_{d\times 1}] \in R^{d\times(d+1)}, R^\top = [0_{1\times d} \;\; 1] \in R^{1\times(d+1)} and I_m is the m \times m identity matrix.

    j = 1:  A_1 = \tilde{C} \oplus \tilde{C},   B_1 = \sum_{i=1}^{m} (B_i \otimes B_i)(L^\top \otimes L^\top),   C_1 = vec(RR^\top)

    j = 2:  A_2 = I_d \otimes \tilde{C},   B_2 = \sum_{i=1}^{m} (B_i \otimes B_i)(I_d \otimes L^\top),   C_2 = vec(R\hat{x}^\top_{t_k/t_k})

    j = 3:  A_3 = \tilde{C} \otimes I_d,   B_3 = \sum_{i=1}^{m} (B_i \otimes B_i)(L^\top \otimes I_d),   C_3 = vec(\hat{x}_{t_k/t_k}R^\top)

    j = 4:  A_4 = \tilde{C} \otimes I_{d+1},   B_4 = \sum_{i=1}^{m} (b_i \otimes B_i + B_i \otimes b_i)(L^\top \otimes R^\top),   C_4 = vec(I_{d+1})

    j = 5:  A_5 = 0,   B_5 = \sum_{i=1}^{m} ( (B_i \otimes B_i) vec(\hat{x}_{t_k/t_k}\hat{x}^\top_{t_k/t_k}) + (b_i \otimes B_i + B_i \otimes b_i) vec(\hat{x}_{t_k/t_k}) + vec(b_i b^\top_i) ),   C_5 = 1
Figure 1. Realization of the solution of the state equations (33)-(34) computed by the explicit Euler-Maruyama scheme. Top: x_1; Bottom: x_2.

Figure 2. Realization of the observed equation (35) with \tau^2 = 0 and \rho^2 = 1.

Figure 3. First (top) and second (bottom) exact conditional moments of the variable x_1 together with their estimates \hat{x}_{t_k/t_k} (top) and P_{t_k/t_k} (bottom), respectively, given the observations of figure 2.

    dx_1 = \mu \, dt + \exp(x_2/2) \, dw^1        (36)

    dx_2 = (\theta_1 + \theta_2 x_2) \, dt + \sigma \, dw^2        (37)

    z_{t_k} = x_1(t_k)        (38)

Figure 4. Realization of the observed equation (35) with \tau^2 = 0.001 and \rho^2 = 0.
Figure 5. First (top) and second (bottom) exact conditional moments of the variable x_1 together with their estimates \hat{x}_{t_k/t_k} (top) and P_{t_k/t_k} (bottom), respectively, given the observations of figure 4.

Figure 6. Realization of the observed equation (35) with \tau^2 = 0.001 and \rho^2 = 1.

Figure 7. First (top) and second (bottom) exact conditional moments of the variable x_1 together with their estimates \hat{x}_{t_k/t_k} (top) and P_{t_k/t_k} (bottom), respectively, given the observations of figure 6.

Figure 8. Realization of the solution of the state equations (36)-(37) computed by the explicit Euler-Maruyama scheme. Top: x_1; Bottom: x_2.

Figure 9. First (top) and second (bottom) exact conditional moments of the variable x_2 together with the estimates \hat{x}_{t_k/t_k} (top) and P_{t_k/t_k} (bottom) for that variable, given the observations of the variable x_1.
where \mu = 0.0117, \theta_1 = 0.0175, \theta_2 = 0.0211 and \sigma = 0.0110.

The model (36)-(37) has also been well studied in the framework of option pricing modelling (Scott 1987, 1991, Wiggins 1987, Chasney and Scott 1989, Ozaki and Iino 2000). In this model, the exact first two conditional moments of the solution of (37) are very easy to compute.

Figure 8 shows a realization of the solution of the state equations (36)-(37) computed by the explicit Euler-Maruyama scheme at the instants of time \tau_j = j\delta, with \delta = 0.001 and j = 0, \ldots, 1\,460\,000.

Figure 9 presents the first and second exact conditional moments of the variable x_2 together with the estimates \hat{x}_{t_k/t_k} and P_{t_k/t_k} for that variable. \hat{x}_{t_k/t_k} and P_{t_k/t_k} were estimated from a set of observations \{z_{t_k}\}_{k=1,\ldots,N}, where t_k = kh, h = 0.1 and N = 14\,600.
Note that this example simulates the estimation of the non-observed component x_2 of the volatility model (36)-(37), given 10 observations per day of a currency exchange rate x_1 during 4 years.
7. Conclusions

In this paper, the local linearization method for the approximate computation of the prediction and filtering estimates of continuous-discrete state space models was extended to the general case of non-linear non-autonomous models with multiplicative noise. An effective algorithm for the numerical implementation of the prediction and filter estimates was also introduced, and the performance of the method was illustrated by means of simulations.
At this point, it is opportune to point out that the
approximate filtering method introduced here constitutes the main component of the approximate innovation
method that has been recently proposed for the parameter estimation of discretely observed SDEs with
multiplicative noise (Jimenez and Ozaki 2002 b). That
inference method has allowed an adequate estimation
of the non-observed states and the unknown parameters
of continuous-time stochastic volatility models given a
time series of financial data (Ozaki and Jimenez 2002)
and, in that way, a satisfactory monitoring and control
of currencies in exchange markets (Ozaki et al. 2001).
Recently, the LL filtering method has also been successfully applied to the estimation of a non-autonomous hemodynamical model from non-linear observations with missing data (Riera et al. 2003).
Acknowledgements
The authors are very grateful to the Ministry of
Education, Science and Culture of Japan and, also, to
the Mizuho Trust and Banking Co. Ltd for the partial
support of this paper.
References
Bellman, R., 1970, Introduction to Matrix Analysis, 2nd edn
(New York: McGraw-Hill).
Bibby, B. M., and Sorensen, M., 1995, Martingale estimation functions for discretely observed diffusion processes. Bernoulli, 1, 17–39.
Brigo, D., Hanzon, B., and Le Gland, F., 1999,
Approximate nonlinear filtering by projection on exponential manifolds of densities. Bernoulli, 5, 495–534.
Chasney, M., and Scott, L. O., 1989, Pricing European currency options: a comparison of the modified Black–Scholes
model and a random variance model. Journal of Financial
and Quantitative Analysis, 24, 267–284.
1169
Cox, J. C., Ingersoll, J. E., and Ross, S. A., 1985, A theory
of the term structure of interest rates. Econometrica, 53,
285–408.
Daum, F. E., 1986, Exact finite-dimensional nonlinear filters.
IEEE Transactions on Automatic Control, 31, 616–622.
Del Moral, P., Jacod, J., and Protter, P., 2001, The
Monte-Carlo method for filtering with discrete-time
observations. Probability Theory and Related Fields, 120,
346–368.
Golub, G. H., and Van Loan, C. F., 1996, Matrix
Computations 3rd edn (The Johns Hopkins University
Press).
Hochbruck, M., and Lubich, C., 1997, On Krylov subspace
approximations to the matrix exponential operator. SIAM
Journal on Numerical Analysis, 34, 1911–1925.
Jimenez, J. C., 1996, Local linearization method for the
numerical solution of stochastic differential equations and
nonlinear Kalman filters. In Proceedings of the Symposium:
Studies on Data Analysis by Statistical Software, Fuji
Kenshu-Jyo, Japan, Nov. 1995, pp. 291–292.
Jimenez, J. C., 2002, A simple algebraic expression to evaluate
the local linearization schemes for stochastic differential
equations. Applied Mathematics Letters, 15, 775–780.
Jimenez, J. C., and Ozaki, T., 2002 a, Linear estimation of
continuous-discrete linear state space models with multiplicative noise. Systems & Control Letters, 47, 91–101.
Jimenez, J. C., and Ozaki, T., 2002 b, An approximate innovation method for the estimation of diffusion processes from
discrete data. Technical Report 2002-184, Instituto de
Cibernetica, Matematica y Fisica, La Habana.
Jimenez, J. C., Shoji, I., and Ozaki, T., 1999, Simulation of
stochastic differential equations through the Local
Linearization method. A comparative study. Journal of
Statistical Physics, 94, 587–602.
Kalman, R. E., 1960, A new approach to linear filtering and
prediction problems. Transactions of the ASME, Series D:
Journal of Basic Engineering, 82, 35–45.
Kalman, R. E., 1963, New methods in Wiener filtering theory.
In J. L. Bogdanoff and F. Kozin (Eds) Proc. Symp. Eng.
Appl. Random Function Theory and Probability (New York:
Wiley).
Kalman, R. E., and Bucy, R. S., 1961, New results in linear
filtering and prediction theory. Journal of Basic
Engineering, 83, 95–108.
Kloeden, P. E., and Platen, E., 1995, Numerical Solution of
Stochastic Differential Equations, 2nd edn (Berlin: Springer-Verlag).
Lainiotis, D. G., 1974, Estimation: A brief survey.
Information Sciences, 7, 191–202.
Liang, D. F., 1983, Comparison of nonlinear recursive filters
for systems with non-negligible non-linearities. In
C. T. Leondes (Ed.) Control and Dynamic Systems, 20,
341–401.
Magnus, J. R., and Neudecker, H., 1988, Matrix
Differential Calculus with Applications in Statistics and
Econometrics (Chichester: Wiley).
Ozaki, T., 1993, A local linearization approach to nonlinear
filtering. International Journal of Control, 57, 75–96.
Ozaki, T., 1994, The local linearization filter with application
to nonlinear system identification. In Bozdogan (Ed.)
Proceedings of the First US/Japan Conference on the
Frontiers of Statistical Modeling: An Informational
Approach (The Netherlands: Kluwer Academic Publishers),
pp. 217–240.
Ozaki, T., and Iino, M., 2000, An innovation approach to
non-Gaussian time series analysis. In G. Kitagawa and T.
Hogushi (Eds) Proceedings of the International Symposium
on Frontiers of Time Series Analysis, ISM, Tokyo, Japan,
pp. 162–185.
Ozaki, T., and Jimenez, J. C., 2002, An innovation approach
for the estimation, selection and prediction of discretely
observed continuous-time stochastic volatility models.
Research Memorandum No. 855, The Institute of
Statistical Mathematics, Tokyo, Japan.
Ozaki, T., Jimenez, J. C., and Haggan-Ozaki, V., 2000,
Role of the likelihood function in the estimation of chaos
models. Journal of Time Series Analysis, 21, 363–387.
Ozaki, T., Jimenez, J. C., Iino, M., Shi, Z., and
Sugawara, S., 2001, Use of stochastic differential equation
models in financial time series analysis: monitoring and control of currencies in exchange market. In Proceedings of the
Japan–US Joint Seminar on Statistical Time Series Analysis,
Kyoto, Japan, June, 17–24.
Riera, J. J., Watanabe, J., Iwata, K., Miura, N.,
Aubert, E., Ozaki, T., and Kawashima, R., 2003, The
estimation of nonlinear neuronal dynamics from
BOLD signals. Research Memorandum No. 875, The
Institute of Statistical Mathematics, Tokyo, Japan.
Schwartz, L., and Stear, E. B., 1968, A computational comparison of several non-linear filters. IEEE Transactions on
Automatic Control, 13, 83–86.
Scott, L. O., 1987, Option pricing when the variance changes
randomly: theory, estimation and applications. Journal of
Financial and Quantitative Analysis, 22, 419–438.
Scott, L. O., 1991, Random variance option pricing: empirical tests of the model and delta-sigma hedging. Advances in
Futures and Options Research, 5, 113–135.
Shoji, I., 1998, A comparative study of maximum likelihood
estimators for nonlinear dynamical systems. International
Journal of Control, 71, 391–404.
Sidje, R. B., 1998, Expokit: a software package for computing matrix exponentials. ACM Transactions on Mathematical
Software, 24, 130–156.
Valdes, P. A., Jimenez, J. C., Riera, J., Biscay, R., and
Ozaki, T., 1999, Nonlinear EEG analysis based on a neural
mass model. Biological Cybernetics, 81, 415–424.
Van Loan, C. F., 1978, Computing integrals involving the
matrix exponential. IEEE Transactions on Automatic
Control, AC-23, 395–404.
Wiggins, J. B., 1987, Option values under stochastic volatility: theory and empirical estimates. Journal of Financial
Economics, 19, 351–372.