Inference for the jump part of quadratic variation of Itˆo

Inference for the jump part of quadratic variation of
Itô semimartingales
Almut E. D. Veraart∗
CREATES,
School of Economics and Management, University of Aarhus, Building 1322,
DK-8000 Aarhus C, Denmark,
Email: [email protected]
WORK IN PROGRESS
This draft: 23 January 2008
Abstract
When asset prices are modelled by Itô semimartingales, their quadratic variation consists of a continuous and a jump component. This paper is about inference on the jump part
of the quadratic variation, which we estimate by using the difference of realised variance
and realised multipower variation.
The main contribution of this paper is that we provide a bivariate asymptotic limit theory for realised variance and realised multipower variation in the presence of jumps. From
that result, we can then deduce the asymptotic distribution of the estimator of the jump
component of quadratic variation and can make inference on it. Furthermore, we present
consistent estimators for the asymptotic variances of the limit distributions which allows us
to derive a feasible asymptotic theory. Monte Carlo studies reveal a good finite sample performance of the proposed feasible limit theory, and an empirical study shows the relevance
of our result in practice.
1 Introduction
Inference on the variation of asset prices has been studied in great detail in the last decade.
Due to the fact that high frequency asset price data have become widely available, one can
now use nonparametric methods which exploit the specific structure of high frequency data to
learn about the price variation over a given period of time. While logarithmic asset prices have
often been modelled by Brownian semimartingales, the focus of research has recently shifted
towards more general models which allow for jumps in the price process. This paper follows
∗
I am grateful to Neil Shephard and Matthias Winkel for their guidance and support. Financial support by the
Rhodes Trust and by the Center for Research in Econometric Analysis of Time Series, CREATES, funded by the
Danish National Research Foundation, is gratefully acknowledged.
1
1 INTRODUCTION
2
this recent stream of research by assuming that the logarithmic asset price is given by an Itô
semimartingale of the form
dYt = bt dt + σt dWt + dJt ,
which consists of a Brownian semimartingale (bt dt + σt dWt ) and a jump component (dJt ) (the
exact assumptions and regularity conditions will be defined more precisely below).
This paper is about inference on the jump part of the quadratic variation process of the price
process. The quadratic variation (see e.g. Protter (2004)) of the price process is given by
[Y ]t = [Y ]ct + [Y ]dt ,
where
[Y
]ct
=
Z
0
t
σs2 ds
and
[Y ]dt =
X
(∆Js )2
0≤s≤t
denote the continuous and discontinuous (or jump) parts of the quadratic variation, respectively.
While inference on the continuous part of the quadratic variation has been studied in detail in
the literature (see e.g. Barndorff-Nielsen & Shephard (2002)), inference on the discontinuous
part has not been studied explicitly yet. However, an indirect way to gain information on the
jump part of quadratic variation is given by any test for the presence of jumps (e.g. BarndorffNielsen & Shephard (2006), Aı̈t-Sahalia & Jacod (2006) and Jacod & Todorov (2007)). So
Barndorff-Nielsen & Shephard (2006) have studied a related question and introduced several
non–parametric tests for testing the null hypothesis of no jumps versus the alternative of jumps.
In order to make inference on the jump part of quadratic variation, our first steps will follow
the methodology of Barndorff-Nielsen & Shephard (2006), who exploited the fact that jumps
in the asset price are reflected in a jump part of the quadratic variation and vice versa. So their
main idea was to compare two measures of variance: one which is not robust to jumps, a quantity called realised variance (see e.g. Comte & Renault (1998), Barndorff-Nielsen & Shephard
(2002), Andersen, Bollerslev, Diebold & Labys (2001)), that estimates the total variation of
the price process, and one which is robust to jumps, called realised bipower variation (see e.g.
Barndorff-Nielsen & Shephard (2004), Barndorff-Nielsen, Graversen, Jacod, Podolskij & Shephard (2006)), and only estimates the continuous part of the variance. By using the difference or
the ratio of these two quantities, they have found a consistent estimator for the jump part of the
total price variation. Furthermore, they have derived the asymptotic distribution of these test
statistics under the null hypothesis (that there are no jumps). Huang & Tauchen (2005) have
carried out an extensive simulation study based on these asymptotic results; this revealed a very
good finite sample performance of the proposed test statistics.
However, the asymptotic distribution of these test statistics under the alternative hypothesis
(that there are jumps) has not been known yet. So in order to be able to make inference on
the jump part of quadratic variation or to derive the asymptotic distribution of the test statistics
under the alternative hypothesis, we have to find the asymptotic distribution of a consistent estimator of the jump part of the quadratic variation. This is exactly the task we tackle in this paper.
Recently, the concept of bipower variation has been extended to multipower variation (see e.g.
Barndorff-Nielsen, Graversen, Jacod, Podolskij & Shephard (2006), Barndorff-Nielsen, Shephard & Winkel (2006), Woerner (2006)). Jacod (2006) has derived a central limit theorem
for realised multipower variation from realised tripower onwards (but not for realised bipower
2 SETUP
3
variation). We follow this line of research and derive the main result: If one uses multipower
variation of higher powers than two (e.g. tripower or quadpower variation) for estimating the
continuous part of the price variation, similar test statistics as the ones proposed by BarndorffNielsen & Shephard (2006) can be constructed whose distributions can be calculated when there
are jumps. So our key result is that
1
√
∆n
(Realised variance - realised multipower variation) − [Y ]d
→ Mixed Normal 0, constant
Z
σs4 ds + 2
X
s
2
!
2
σs−
+ σs (∆Js )2 ,
where the powers in the multipower variation have to satisfy some constraints as explained later
(see equation (13)).
So in a first stage, we derive infeasible limit results (as given above), which means that the
asymptotic distribution (particularly the asymptotic variance) depends on components of the
price process which we do not observe. In a next step, we replace the unobserved asymptotic
variance by a consistent estimator, which we construct by using a similar framework as the one
studied in Veraart (2007) for realised versions. So in the end, we are able to make inference on
the jump part of the quadratic variation and not only on the integrated variance.
The remaining part of the paper is structured as follows. Section 4.2 introduces the notation
and the main model assumptions. In Section 4.3, we review the most important facts about
realised variance and realised multipower variation. Section 4.4 contains the main contribution
of this paper. First, we sketch some of the important theoretical work by Jacod (2007, 2006) on
univariate asymptotic results for realised variance and realised multipower variation. Then, we
present our main result: the asymptotic distribution of a bivariate process of realised variance
and realised multipower variation in the presence of jumps. From this, we can derive the asymptotic distribution of various jump–test statistics under both the null and alternative hypotheses.
Furthermore, we show how the infeasible limit results can be converted into a feasible limit
theory. In Section 4.5, we carry out a detailed simulation study, which we use for assessing
the finite sample performance of our proposed test statistics. Furthermore, we compare test
statistics based on different powers, and we investigate the trade–off of an efficiency gain for
estimating the continuous part of the variance by multipower variation of low power (tripower,
quadpower) versus a decrease in the finite sample bias by using multipower variation of high
power (10–power, 20–power). An empirical study is then carried out in Section 4.6, where we
study some high frequency equity data and identify jumps in the price process. Finally, Section
4.7 concludes the paper and gives some prospect on future research. The proof of our main
theorem and the tables with the results from the simulation study are given in the Appendices
(Section 4.8.1 and 4.8.2, respectively).
2 Setup
We assume that the logarithmic asset price is given by a real–valued Itô semimartingale Y =
(Yt )t≥0 , which is defined on a probability space (Ω, A, (Ft )t≥0 , P) as given below, where we
use the same assumptions as in Jacod (2007).
2 SETUP
4
Hypothesis (H) Let
Yt = Y0 +
where
Z
t
bs ds +
0
Z
t
0
σs dWs + κ(δ) ⋆ (µ − ν)t + κ′ (δ) ⋆ µt ,
(1)
• W is one–dimensional (Ft )t≥0 –Brownian motion and µ is a Poisson random measure on R+ × R;
• κ a continuous truncation function which is bounded, has compact support and
κ(x) = x in a neighbourhood of 0 and κ′ (x) = x − κ(x);
• ν denotes the compensator of the jump measure µ of X and ν(dt, dx) = dtFt (dx).
• Let δ denote a predictable map from Ω × R+ × R on R. Then Ft (ω, dx) is the image
of the Lebesgue measure on R by the map x 7→ δ(ω, t, x);
• ν(ds, dx) = ds ⊗ dx denotes the predictable compensator of µ.
R
• The processes (bt ) and (1 ∧ x2 )Ft (dx) are locally bounded (Ft )t≥0 –predictable,
and the process (σt ) is càdlàg adapted.
Note that every Itô semimartingale admits a representation as in (1) where µ is a Poisson random
measure and ν(ds, dx) = ds ⊗ dx. Further note that σ and W can be dependent in this general
model framework and, hence, our model accounts for the leverage effect.
Another assumption is concerned with the jump part of the semimartingale.
Hypothesis (K) We assume that (H) is satisfied and that the coefficient δ (see (1)) satisfies
|δ(ω, t, x)| ≤ γkR(x) for all t ≤ Tk (ω), where γk denote some deterministic functions on
R which satisfy (1 ∧ x2 ) ◦ γk (x)dx < ∞ and (Tk ) are stopping times increasing to +∞.
We need also an assumption for the volatility process. In this paper, we shall focus on
volatility processes which satisfy the following conditions.
Hypothesis (L-s) (H) holds and the volatility process σ has the form
Z t
Z t
Z t
e
e ⋆ (µ − ν)t + κ′ (δ)
e ⋆µ ,
σt = σ0 +
bu du +
σ
eu dWu +
σ
eu′ dWu′ + κ(δ)
t
0
0
0
and
• W ′ is another Brownian motion on the space (Ω, A, (Ft )t≥0 , P), which is independent of W ;
• the process (ebt ) is optional and locally bounded;
• the processes (bt ), (e
σt ), (e
σt′ ) are adapted left–continuous with right limits in t, and
locally bounded;
• the functions δ(ω, t, x) and δ̃(ω, t, x) are predictable and left–continuous with right
limits in t. Also, |δ(ω, t, x)| ≤ γk (x) and |δ̃(ω, t, x)|R ≤ γ̃k (x) for all t ≤ Tk (ω),
where γk , γ̃k are deterministic functions on R with φs ◦ γk (x)dx < ∞ (where
0
we
the s comes in — and
R define 0 = 0) — note that this is the condition where
φ2 ◦ γ̃k (x)dx < ∞. We define φs by φs (x) = 1 ∧ |x|s if 0 < s < ∞ and by
φs (x) = 1IR\{0} (x) if s = 0. Furthermore, (Tk ) denotes a sequence of stopping times
increasing to +∞.
3 REALISED VARIANCE AND MULTIPOWER VARIATION
5
So under (H) and (L-s) for any s ∈ [0, 2], we essentially consider a Brownian semimartingale
with drift and jumps. Note that we assume in (L-s) that s ∈ [0, 2]. If s ≤ s′ ≤ 2, then (L-s)⇒
(L-s′ )⇒ (K) ⇒(H). Also note that (L-0) implies that X has locally finitely many jumps and if
X is continuous, then all hypotheses (L-s) are identical for all s ∈ [0, 2] (see Jacod (2007, p.6)).
Finally, we formulate a hypothesis which guarantees that the semimartingale has a Brownian
semimartingale component which is nowhere degenerate.
2
Hypothesis (H’) Hypothesis (H) holds and (σt2 ) and (σt−
) do not vanish.
For our asymptotic theory, we need some further notation, which follows Jacod (2007)’s
framework. Let (Ω′ , A′, P′ ) denote an auxiliary space which supports two Brownian motions
f , two sequences of N (0, 1) random variables, denoted by (Up ) and (Up )′ and, further, a
W and W
sequence of random variables (ξp ) which are uniformly distributed on [0, 1]. All these processes
are assumed to be mutually independent. Now we extend our original probability space and we
write
e = Ω × Ω′ ,
Ω
Ae = A ⊗ A′ ,
e = P ⊗ P′ .
P
f, Up , . . .
One can now extend, in the obvious way, the variables Yt , bt , . . . defined on Ω and W , W
′
e denote the expectation
defined on Ω to the product space (without change of notation). Let E
e
with respect to P. Further, let (Tp ) denote stopping times which are an enumeration of the
jump times of Y . Finally, we write (Fet ) for the smallest right–continuous filtration of Ae which
contains (Ft ) and with respect to which W is adapted and, further, such that Up , Up′ and ξp are
FeTp –measurable for all p.
e which also holds for
f are (Fet )t≥0 -Brownian motions under P,
Straightforwardly, W and W
W and W ′ . Further µ is a Poisson measure with compensator ν for the bigger filtration.
3 Review of Realised Variance and Realised Multipower Variation
After having introduced the admittedly quite tedious notation for the continuous–time price
process, we now turn our attention to its discrete–time observations.
Let us assume that we observe the process Y over an interval [0, t] at times i∆n for ∆n > 0
and i = 0, . . . , [t/∆n ]. So for its discretely observed increments, we write
∆ni Y = Yi∆n − Y(i−1)∆n
for i = 1, . . . [t/∆n ].
In practice, these increments are used to construct estimators for the variance or integrated
variance. For example, it is well–known that the realised variance, which is the sum of the
squared increments, estimates the quadratic variation of the underlying process consistently,
i.e.
[t/∆n ]
X
ucp
n
RVt =
(∆ni Y )2 −→ [Y ]t , as n → ∞,
i=1
where the convergence is uniformly on compacts in probability (ucp) (see Protter (2004, p. 57),
Andersen, Bollerslev, Diebold & Ebens (2001) and Barndorff-Nielsen & Shephard (2002)).
3 REALISED VARIANCE AND MULTIPOWER VARIATION
6
Besides, one can use the realised bipower variation (as defined by Barndorff-Nielsen &
Shephard (2004, 2006)) for estimating the continuous
part of the quadratic variation of Itô
p
semimartingales (see Jacod (2006)). So for µ1 = 2/π, one obtains
[t/∆n ]−1
µ−2
1
X
i=1
|∆ni Y
Z
ucp
| ∆ni+1 Y −→ [Y c ]t =
t
0
σs2 ds,
as n → ∞.
This concept can be further generalised to realised multipower variation (see e.g. BarndorffNielsen, Graversen, Jacod, Podolskij & Shephard (2006) for a treatment of realised multipower
variation in the absence of jumps and Woerner (2006) and Jacod (2006) for the corresponding
results in the presence of jumps). Let r = (r1 , . . . , rI ) be a multi–index with ri > 0. Further,
we write |r| = r1 + · · · + rI and r+ = max1≤i≤I ri and r− = min1≤i≤I ri . Let
µr = E|U|r ,
Then
∆n1−|r|/2 µ−1
r
[t/∆n ]−I I
X Y
i=1
where µr =
QI
j=1 µrj .
j=1
for U ∼ N(0, 1).
ucp
|∆ni+j−1 Y |rj →
Z
0
t
|σu ||r|du, as n → ∞,
Now we define
RMP V
(r)nt
[t/∆n ]−I I
X Y
[t/∆n ]
1−|r|/2 −1
=
∆n
µr
|∆ni+j−1 Y |rj ,
[t/∆n ] − I
i=1
j=1
where the factor [t/∆n ]/([t/∆n ] − I) accounts for the fact that there are just ([t/∆n ] − I)
terms in the sum rather than [t/∆n ] summands as in the realised variance. By multiplying
the realised multipower variation by the factor above, we make it more easily comparable to
realised variance. In particular, when studying the difference of those two quantities, we do not
end up with a finite sample bias which is just caused by the fact that we are comparing two
similar sums with a different number of summands. Clearly,
Z t
n ucp
RMP V (r)t →
|σu ||r| du, as n → ∞.
0
1−|r|/2
Note that if |r| = 2, then the factor ∆n
= 1 and, hence, it disappears. In particular, we are
interested in realised multipower variations with equal power ri . So we define for k, I ∈ N:
RMP V
(k; I)nt
[t/∆n ]−I I
X Y
[t/∆n ]
1−k/2 −I
=
|∆ni+j−1 Y |k/I .
∆n
µk/I
[t/∆n ] − I
i=1
j=1
Then
RMP V
ucp
(k; I)nt −→
Z
0
t
|σu |k du, as n → ∞.
In particular we are interested in the case k = 2 when
RMP V
(2; I)nt
Z t
[t/∆n ]−I I
X Y
[t/∆n ]
−I
n
2/I ucp
=
µ
|∆
Y | −→
σu2 du, as n → ∞.
[t/∆n ] − I 2/I i=1 j=1 i+j−1
0
3 REALISED VARIANCE AND MULTIPOWER VARIATION
Then, clearly,
ucp
RVtn − RMP V (2; I)nt −→ [Y ]dt ,
7
as n → ∞.
So, the difference of realised variance and realised multipower variation is a consistent estimator
for the jump part of the total variation.
Estimating the jump component of the total variation is hence a fairly easy task. However,
things get significantly more complicated when we want to make inference on the jump component, which requires establishing an appropriate asymptotic theory. Let us first review some
univariate asymptotic results for realised variance and realised multipower variation, which have
been proven under the assumption that the price process has no jumps and, hence, is just given
by a Brownian semimartingale.
From Barndorff-Nielsen & Shephard (2002, 2007b), we know that we obtain the following
central limit result for realised variance in the absence of jumps. As n → ∞,
Z t
Z t
Z t
1
stably in law √
2
4
n
2
√
σu dBu ∼ MN 0, 2
σs ds
RVt −
σs ds
−→
2
∆n
0
0
0
stably in law as a process (for the definition of stable convergence as a process see e.g. Jacod & Shiryaev (2003)) where the two letters MN stand for mixed normal distribution. From
Barndorff-Nielsen, Graversen, Jacod, Podolskij & Shephard (2006), we get the following result for realised multipower variation in the absence of jumps. Under (L-s) for s < 1 and
s
< r− < r+ < 1 and as n → ∞,
2−s
1
√
∆n
Z t
Z t
stably in law −1 p
n
|r|
eu
RMP V (r)t −
|σs | ds
−→
µr
A(r)
|σu ||r| dB
0
0
Z t
−2
2|r|
∼ MN 0, µr A(r)
σs ds ,
0
stably in law, where
A(r) =
I
Y
i=1
µ2ri − (2I − 1)
I
Y
µ2ri
i=1
+2
I−1 Y
i
X
µ rj
i=1 j=1
I
Y
µ rj
j=I−i+1
I−i
Y
µrj +rj+i ,
j=1
where an empty product is set to 1. Note that an analogous result also holds in the presence
of jumps as shown by Woerner (2006) and Jacod (2006). So in our special case of multipower
variation, we get as n → ∞,
Z t
Z t
q
1
stably in law
n
2
2 −I
eu
√
ωI µI/2
σu2 dB
RMP V (2; I)t −
σs ds
→
∆n
0
0
Z t
2 −2I
4
∼ MN 0, ωI µI/2
σs ds ,
0
where
ωI2
=
µI4/I
+ (1 −
2I)µ2I
2/I
+2
I−1
X
j=1
2j
µI−j
4/I µ2/I .
4 CENTRAL LIMIT THEOREMS IN THE PRESENCE OF JUMPS
8
e which appear in the limit processes, are indeNote that both the Brownian motions B and B,
pendent of σ and W .
In the flavour of Barndorff-Nielsen & Shephard (2006), we can now construct test statistics
based on the difference of realised variance and realised multipower variation. So we get
Z t
p Z t
1
2
4
n
n stably in law
√ (RVt − RMP V (2; I)t ) →
θI
σs dW̄s ∼ N 0, θI
σs ds ,
(2)
∆n
0
0
where
2
θI = µ−2I
2/I ωI − 2.
This results builds the base for a linear test statistic to test for jumps in the price process.
From Slutsky’s lemma one can derive a ratio test statistic:


Rt 4
1
RMP V (2; I)nt
stably in law
 θI σ ds 
√
−1
→ N 0, R 0 s 2  .
n
t 2
RVt
∆n
σ ds
0 s
(3)
And, finally, we can apply the delta method and derive the corresponding result for the log–
transformation:


Rt 4
σ ds 
1
stably in law

√ (log (RVtn ) − log (RMP V (2; I)nt ))
(4)
−→
N 0, θI R 0 s 2  ,
t 2
∆n
σ ds
0
s
which can be used for constructing a log–linear test statistic.
However, these limit results only hold under the null hypothesis that there are no jumps.
The main contribution of this paper is that we derive the asymptotic distribution of these test
statistic also under the alternative hypothesis that there are jumps. More generally, we study
the asymptotic properties of the bivariate process of realised variance and realised multipower
variation.
4 Central Limit Theorems in the Presence of Jumps
Let Y be our general real–valued semimartingale as defined above, which has both a Brownian
semimartingale and a jump component. We are interested in studying the asymptotic properties
of the bivariate process
1
RVtn − [Y ]∆n [t/∆n ]
√
.
(5)
n
c
∆n RMP V (2; I)t − [Y ]t
From Jacod (2007, 2006), we already know the univariate limit results for both components.
Realised variance: Assume that (L-2) is satisfied. Then as n → ∞ we get from Jacod (2007,
Theorem 2.11 (ii))
stably in law (1)
1
(2)
√
RVtn − [Y ]∆n [t/∆n ]
−→
Lt + Lt ,
∆n
(6)
4 CENTRAL LIMIT THEOREMS IN THE PRESENCE OF JUMPS
9
where the convergence is stably in law as a process. The limiting process is given by
(1)
(2)
Lt + Lt , where
√ Z t 2
(1)
Lt = 2
(7)
σu dW u ,
0
p
X
p
(2)
Lt = 2
∆YTp
(8)
ξp Up σTp− + 1 − ξp Up′ σTp .
p: Tp ≤t
Furthermore, Jacod (2007) makes the following remarks:
• Stable convergence in law only holds when the discretised process [Y ]∆n [t/∆n ] is
1
used in (6). However, √∆
(RVtn − [Y ]t ) converges finite–dimensionally stably in
n
law to the limit described above (see Jacod (2007, Remark 2.14)). But the latter
result will be sufficient for us since we are interested in making inference on the
jump part of the quadratic variation at a fixed time t.
• The processes (7) and (8) define semimartingales on the extended space.
• Conditionally on A, L(1) and L(2) are independent and L(1) is a martingale with
Gaussian law, and if Y and σ do not jump together, L(2) is also a martingale with
Gaussian law. Their variances are given by ((Jacod 2007, p. 8))
Z t
2 (1)
e
A = 2
σu4 du,
(9)
E
Lt
0
2 X
2 2
(2)
2
e
(10)
σTp + σTp− .
∆YTp
E
Lt
A = 2
p: Tp ≤t
So conditionally on A, the asymptotic variance of the bias between realised variance
and quadratic variation is given by
Z t
X
2 2
(11)
σTp + σT2p− .
∆YTp
2
σu4 du + 2
0
p: TP ≤t
• When there are no jumps, the limit is given by (7), which is a well–known result, e.g.
Jacod (1994), Jacod & Protter (1998) and Barndorff-Nielsen & Shephard (2002).
Realised multipower variation: The asymptotic distribution of multipower variations in the
presence of jumps has first been derived by Woerner (2006). A later study by Jacod (2006,
Theorem 6.2) contains the following result. Assume that (L-s) holds for some s < 1 and
s
that we have (H’). Furthermore let r be a multi–index such that 2−s
< r− ≤ r+ < 1.
Then, as n → ∞,
Z t
Z t
1
stably in law −1 p
n
|r|
fu ,
√
A(r)
|σu ||r|dW
RMP V (r)t −
|σu | du
−→
µr
∆n
0
0
stably in law as a process.
In the next section, we combine these two results and derive a bivariate limit result, which is the
main contribution of this paper.
4 CENTRAL LIMIT THEOREMS IN THE PRESENCE OF JUMPS
10
Remark We suppose that it is possible to derive a central limit theorem for realised bipower
variation in the presence of jumps. However, the central limit theorem for realised bipower
variation will differ from the ones for realised tripower, realised quadpower etc.. As mentioned
in Barndorff-Nielsen, Shephard & Winkel (2006, Section 3.1), the limit process will exhibit
a jump component in addition to the Brownian semimartingale component. So we expect to
obtain a similar central limit result to that for realised variance. This aspect will be studied in
more detail in future research.
4.1 Main Result
Let (Yt )t≥0 denote a one–dimensional semimartingale.
s
Theorem 4.1 Assume (L-s) for some s < 1, (H’) and let r be a multi–index such 2−s
< r− ≤
r+ < 1. Then
1
RVtn − [Y ]∆n [t/∆n ]
Rt
√
n
|r|
∆n µ−1
r RMP V (r)t − 0 |σu | du
√ Rt 2
!
p
p
P
′
σ
dW
2
+
2
ξ
U
σ
+
1
−
ξ
U
σ
∆Y
stably in law
u
p p Tp−
p p Tp
Tp
Tp ≤t
0 u
√ R t p: |r|
−→
,
√ Rt
|r| f
2 0 σu dW u + θr 0 |σu | dWu
where the convergence is stable in law as a process and θr = (µ−1
r
p
A(r))2 − 2.
If σ and Y do not jump together, the first component is the sum of two independent martingales
which have, conditional on A, Gaussian law. Note that in that case σTp− = σTp since Tp are the
jump times of Y .
Remark The one–dimensional limit result for the multipower variation holds as soon as (L-s)
s
< r− ≤ r+ < 1. In order to obtain the limit result for
for some s < 1, (H’) hold and 2−s
the realised variance, we need the assumption (L-2) which is clearly implied by (L-s) for some
s < 1.
Corollary 4.2 Assume (L-s) for some s < 1, (H’) and that Y and σ have no common jumps.
For I ∈ N with 2 < I < 2s (2 − s), we obtain:
RVtn − [Y ]∆n [t/∆n ]
Rt
RMV P (2; I)nt − 0 σu2 du
√ P
√ Rt 2
p
p
!
2 0 σu dW u + 2 p: Tp ≤t ∆YTp σTp
ξp Up + 1 − ξp Up′
stably in law
√ Rt 2
, (12)
−→
√ Rt
fu
2 0 σu dW u + θI 0 σu2 dW
1
√
∆n
where (12) has, conditionally on A, Gaussian law with zero mean and variance
!
2
Rt
Rt
P
2 0 σu4 du + 4 p: Tp ≤t ∆YTp σT2p
2 0 σu4 du
Rt
Rt
,
2 0 σu4 du
(2 + θI ) 0 σu4 du
−2I 2
where θI = µ2/I
ωI − 2.
4 CENTRAL LIMIT THEOREMS IN THE PRESENCE OF JUMPS
11
The following corollary contains the result which is of most importance in applications and can
be regarded as key result of this paper.
Corollary 4.3 Assume (L-s) for some s < 1, (H’) and that Y and σ have no common jumps.
For I ∈ N with 2 < I < 2s (2 − s) we obtain:
1
stably in law
√ (RVtn − RMP V (2; I)nt − [Y ]d∆n [t/∆n ] )
−→
Lt ,
∆n
(13)
where Lt has, conditionally on A, Gaussian law with zero mean and variance given by
Z t
X
2
∆YTp σT2p .
θI
σu4 du + 4
0
p: TP ≤t
Proof Let c = (1, −1)′ . Then
1
RVtn − [Y ]∆n [t/∆n ]
′ 1
R
c√
= √ ((RVtn − RMV P (2; I)nt ) − [Y ]d∆n [t/∆n ] ).
t 2
n
∆n RMV P (2; I)t − 0 σu du
∆n
The expression above converges
stably in a law as a process to a process which, conditionally
1
on A, is N 0, (1, −1)Mt −1 –distributed, where
Mt =
2
Rt
σ 4 du
0 u
!
2 2
Rt 4
P
2 0 σu du
+ 4 p: TP ≤t ∆YTp σTp
Rt 4
Rt
.
2 0 σu du
(2 + θI ) 0 σu4 du
4.2 Distribution of the Test Statistics under the Alternative Hypothesis
Now we have all results for constructing feasible test statistics and for deriving their distributions under both the null and the alternative hypothesis (assuming that Y and σ have no common
jumps and that the assumptions of Theorem 4.1 are satisfied).
• For the linear test (see (2)), we obtain the following limit result under the alternative
hypothesis, i.e. in the presence of jumps:
1
√ (RVtn − RMP V (2; I)nt − [Y ]d∆n [t/∆n ] )
∆n
converges stably in law as a process to a process, which, conditionally on A, has Gaussian
law with zero mean and variance
Z t
X
θI
σs4 ds + 4
σs2 (∆Ys )2 .
0
0≤s≤t
4 CENTRAL LIMIT THEOREMS IN THE PRESENCE OF JUMPS
12
• For the ratio test (see (3)), we obtain the following limit result under the alternative hypothesis:
!
[Y ]d∆n [t/∆n ]
1
RMP V (2; I)nt
√
−1+
RVtn
RVtn
∆n
converges stably in law as a process to a process, which, conditionally on A, has Gaussian
law with zero mean and variance
Rt
P
θI 0 σs4 ds + 4 0≤s≤t σs2 (∆Ys )2
R
2 .
P
t 2
2
σ
ds
+
(∆Y
)
s
0≤s≤t
0 s
• And from the bivariate delta method (for a bivariate function g(x, y) = log(x) − log(y)),
we deduce the distribution of (4) under the alternative hypothesis:
1
√ (log (RVtn ) − log (RMP V (2; I)nt ) − (log [Y ]∆n [t/∆n ] − log [Y ]c∆n [t/∆n ] )
∆n
converges stably in law as a process to a process, which, conditionally on A, has Gaussian
law with zero mean and variance
Z t
4 X 2
4
(2 + θI )
2
σu4 du +
−
+
σs (∆Ys )2 .
2
c
c 2
2
[Y ]t
[Y ]t [Y ]t
([Y ]t )
[Y ]t 0≤s≤t
0
4.3 Feasible Standard Errors
In order to derive feasible test statistics, we need estimators for the asymptotic variances. From
Barndorff-Nielsen & Shephard (2002) and Jacod (2006), we know that the continuous part of the
asymptotic variance can be consistently estimated by special cases of the realised multipower
variation — even in the presence of jumps. For I ≥ 3
Z t
1
n ucp
RMP V (4; I)t −→
σs4 ds.
∆n
0
So, how can we estimate the jump part of the asymptotic variance? From Veraart (2007), we
know that one can use an estimator which is based on the difference of a generalised version of
realised variance and realised multipower variation. In particular, we write Kn for a sequence
which satisfies
Kn → ∞
and
∆n Kn → 0
as n → ∞.
We can extend a result by Lee & Mykland (2006) and write
2(−)
σ̂(i−1)∆n =
2(+)
σ̂(i−1)∆n
µ−2
1
(Kn − 2) ∆n
i−2
X
j=i−Kn +1
n n
∆j X ∆j+1 X ,
i+K
n −1
X
n n
µ−2
1
∆ X ∆ X .
=
j+1
j
(Kn − 2) ∆n j=i+2
4 CENTRAL LIMIT THEOREMS IN THE PRESENCE OF JUMPS
13
for the locally averaged realised bipower variation. From Veraart (2007), we know that
Z t
X 2(−)
X
2(+)
2 P
n
2
σ̂(i−1)∆n + σ̂(i−1)∆n (∆i Y ) −→ 2
σs4 ds +
σs−
+ σs2 (∆Ys )2 ,
t/∆n
0
i=1
0≤s≤t
as n → ∞ and, therefore,
X 2(−)
1
2(+)
σ̂(i−1)∆n + σ̂(i−1)∆n (∆ni Y )2 − c2
c1
RMP V (4; I)nt
∆n
i=1
Z t
X
P
2
−→ (2c1 − c2 )
σs4 ds + c1
σs−
+ σs2 (∆Ys )2 ,
t/∆n
0
0≤s≤t
for constants c1 , c2 with 2c1 ≥ c2 . And in order to make sure that the variance estimator is
always positive, we use

 t/∆
Xn 2(−)
1
2(+)
n
RMP V (4; I)nt ,
σ̂(i−1)∆n + σ̂(i−1)∆n (∆ni Y )2 − c2
Ât (c1 , c2 ) = max c1

∆
n
i=1
1
n
(2c1 − c2 ) RMP V (4; I)t .
∆n
So e.g. we obtain for n → ∞:
Ânt (2, 4
− θI ) → θI
Z
t
0
σs4 ds + 2
X
0≤s≤t
2
σs−
+ σs2 (∆Ys )2 .
Clearly, in the absence of common jumps of Y and σ, we could also use the slightly simpler
estimator of the asymptotic variance given by


 t/∆

Xn 2(−)
1
1
max 2c1
σ̂(i−1)∆n (∆ni Y )2 − c2
RMP V (4; I)nt , (2c1 − c2 ) RMP V (4; I)nt .


∆n
∆n
i=1
Now we can define feasible standard errors for the linear test statistic, the ratio test statistic
and the log–linear test statistic under the alternative hypothesis, that there are jumps. Let I, I˜
denote positive integers which are greater or equal to 3.
Feasible standard error for linear test:
˜ n)
1 (RVtn − RMP V (2; I)
t
√
q
.
∆n
n
 (4, 4 − θ )
I
t
Feasible standard error for ratio test:
1
√
∆n
˜n
RMP V (2; I)
t
−1
RVtn
!
q
RVtn
Ânt (4, 4
.
− θI )
5 SIMULATION STUDY
14
Feasible standard error for log–linear test:
n
˜n
1
˜ n ) RVtqRMP V (2; I)t ,
√ (log (RVtn ) − log RMP V (2; I)
t
∆n
Ân (c , c )
t
where
1
2
c1 = 4(RMP V (2; I)nt )2 ,
c2 = 4(RMP V (2; I)nt )2 − (2(RMP V (2; I)nt )2 − 4RVtn RMP V (2; I)nt + (2 + θI )(RVtn )2 ).
5 Simulation Study
In this section, we will study the finite sample performance of our test statistics by carrying out
a detailed simulation study.
So far, we have seen that we can use any realised multipower variation RMP V (2; I)nt with
I ≥ 3 for constructing a test for jumps because from the tripower onwards all multipower
s
variations (which satisfy 2−s
< 2/I < 1 for sufficiently small s) are robust toward jumps. So,
which multipower variation shall we choose to construct our test statistic?
Basically, we are confronted with the following trade–off: We know that in the absence of
jumps, realised variance is the most efficient consistent estimator of integrated variance. Using
higher multipower variation in such a model setting results in an efficiency loss. Recall that for
U ∼ N(0, 1) we have:
√
r Γ 1 (r + 1)
2
2
√
µr = E|U|r =
,
π
I−1
X
2j
2
I
2I
ωI = µ4/I + (1 − 2I)µ2/I + 2
µI−j
4/I µ2/I ,
θI =
µ−I
2/I
q
ωI2
2
j=1
2
− 2 = µ−2I
2/I ωI − 2.
So when we look at the values of θI for various I in Table 1, we see that θI increases for
increasing I, which describes the loss in efficiency.
I
θI
1
2
3
4
5
6
0 0.608 1.061 1.377 1.605 1.776
∞
2.934
Table 1: Different values for θI
It is interesting to see that θI actually converges to a finite number
lim θI =
I→∞
π2
− 2,
2
so the loss in efficiency is bounded, which can also be seen in Figure 1.
5 SIMULATION STUDY
15
2.5
2
1.5
Theta
1
0.5
0
2
4
6
8
10
12
14
16
18
20
I
Figure 1: Different values for θI
Given these results, we might be tempted to focus on tripower variation only for estimating
integrated variance in the presence of jumps since it is the multipower variation of lowest power
(hence the most efficient one) which is robust towards jumps when we study its asymptotic
distribution.
However, there is also another issue which is worth studying a bit further. That is the
problem of the finite sample bias when we consider the difference of realised variance and
realised multipower variation.
5.1 Finite Sample Bias — A Jump–Diffusion Model
So we know that in theory one can use the difference between the realised variance and any
multipower variation of power 2/I for I = 3, 4, . . . . From a statistical point of view, we might
want to use tripower variation since it is the most efficient statistic. However, in this section,
we study the finite sample bias between realised variance and multipower variation and show
that it seems to get smaller when I increases, which means that using tripower variation would
result in the biggest possible finite sample bias one can find.
5 SIMULATION STUDY
16
Before we focus on more advanced models in our simulation study, we study a very basic
model in this section: a Brownian motion with jumps. Let
Yt = Wt +
Nt
X
ci ,
i=1
where W is a Brownian motion, Nt is a Poisson process with intensity 1 and i.i.d. ci ∼ N(0, σp2 )
and t ∈ [0, 1]. In order to simplify the exposition, we set t = 1. Let Iin = ((i − 1)∆n , i∆n ] for
i = 1, . . . , M, where M = [1/∆n ]. Hence, for j = 1, 2, . . . we have
P(There are j jumps in Iin ) = P (Ni∆n − N(i−1)∆n = j) =
e−λ∆n (λ∆n )j
.
j!
So the mean of realised variance, realised multipower variation and the sum of the squared
jumps can be easily derived.
Realised variance: For the realised variance, we get
!
M
X
1
2
2
n
n
2
n
,
E (RV1 ) = E
(∆i Y ) = ME (∆i Y ) = 1 + λσp ∆n
∆
n
i=1
since
E (∆ni Y )2 =
J
X
j=0
=
J
X
j=0
P(Ni∆n − N(i−1)∆n = j) E ∆ni W +
j
X
ci
i=1
!2
e−λ∆n (λ∆n )j
(∆n + jσp2 ) = 1 + λσp2 ∆n .
j!
Realised multipower variation: Let I = 3, 4, . . . . From the independence and stationarity of
the increments, we get for an i ∈ {1, . . . , M − I}:
!
M
−I Y
I−1
X
M
E (RMP V (2; I)n1 ) = E
|∆n ′ Y |2/I
µ−I
M − I 2/I i=1 i′ =0 i+i
I
2/I
n
= Mµ−I
2/I E |∆i Y |
!I
X
∞
−λ∆n
j
1
e
(λ∆n )
1/I
=
∆n + jσp2
,
∆n
j!
j=0
since
∞
X
2/I 2/I
n
n
=
E |∆i Y | Ni∆n − N(i−1)∆n = j P(Ni∆n − N(i−1)∆n = j)
E |∆i Y |
j=0
=
∞
X
e−λ∆n (λ∆n )j
j=0
j!

2/I 
j
∞
X
1/I
 X e−λ∆n (λ∆n )j
n

cj ′ =
E ∆i W +
µ2/I ∆n + jσp2
.
j!
′
j =1
j=0
5 SIMULATION STUDY
17
Sum of squared jumps:
E
N1
X
c2i
i=1
!
=
∞
X
J=0
jσp2
e−λ λj
= σp2 λ.
j!
Bias: So we obtain
!!
N1
X
1
E √
RV1n − RMP V (2; I)n1 −
c2i
∆n
i=1


!I
X
∞
−λ∆n
j
1
e
(λ∆n )
1
1 
1/I
−
∆n + jσp2
− λσp2 
1 + λσp2 ∆n
=√
∆n
∆n
j!
∆n
j=0


!I
1/I
∞
2
−λ∆
j
n
Xe
jσp
1 
(λ∆n )
1
1
√
−
∆n
1+
− λσp2  .
1 + λσp2 ∆n
∆
∆
j!
∆
∆n
n
n
n
j=0
Clearly, the bias converges to 0 when ∆n → 0. However, if we fix ∆n , then the bias converges
to the following expression
P



j
2
∞ (λ∆n ) log(1+jσp /∆n )
1 
1
1
j=0
j!
 − λσp2  ,
√
−
∆n exp 
1 + λσp2 ∆n
∆n
∆n
exp(λ∆n )
∆n
when I → ∞. In the following, we provide two tables and plots of the finite sample bias of
the linear test under the hypothesis that there are jumps for various values of I and for different
jump sizes. For the jump intensity we have chosen λ = 0.6, which results in one jump on [0, 1]
and λ = 1.6, which results in two jumps in [0, 1].
λ = 0.6
M\I
39
78
390
1560
23400
3
-0.33
(-0.20)
-0.35
(-0.21)
-0.38
(-0.22)
-0.33
(-0.20)
-0.21
(-0.14)
4
-0.3
(-0.18)
-0.31
(-0.19)
-0.33
(-0.18)
-0.25
(-0.15)
-0.12
(-0.09)
5
-0.28
(-0.18)
-0.29
(-0.18)
-0.3
(-0.16)
-0.22
(-0.13)
-0.09
(-0.07)
λ = 1.6
6
-0.27
(-0.17)
-0.27
(-0.17)
-0.28
(-0.15)
-0.2
(-0.12)
-0.07
(-0.06)
3
-0.7
(-0.54)
-0.71
(-0.58)
-0.71
(-0.59)
-0.66
(-0.53)
-0.47
(-0.38)
4
-0.65
(-0.50)
-0.64
(-0.53)
-0.57
(-0.49)
-0.51
(-0.41)
-0.3
(-0.24)
5
-0.62
(-0.48)
-0.6
(-0.50)
-0.51
(-0.44)
-0.44
(-0.35)
-0.24
(-0.19)
6
-0.59
(-0.47)
-0.58
(-0.48)
-0.47
(-0.41)
-0.4
(-0.32)
-0.2
(-0.16)
Table 2: Estimated bias (theoretical bias) for σp2 = 0.1. The estimation results are based on
5000 replications. Recall that M = 1/∆n and that I denotes the power in RMP V (2; I)n1 .
In Table 2 and Figure 2, we study the finite sample bias when the jump size distribution
is drawn from a N(0, 0.1) distribution, whereas, in Table 3 and Figure 3, we allow for bigger
5 SIMULATION STUDY
18
–0.06
–0.08
–0.1
–0.12
–0.14
–0.16
–0.18
0
3
4
0.2
5
0.4
6
i
7
0.6
8
Delta
0.8
9
10
1
Figure 2: Finite sample bias for λ = 0.6 and σp2 = 0.1.
jumps and simulate the jump size from a N(0, 1) distribution. It is striking that the finite sample
bias is quite big — particularly when we allow for many/big jumps and when M is still quite
small. However, we also observe that by using multipower variation of higher powers, the bias
seems to shrink significantly.
5.2 Assessing the Finite Sample Performance of Various Test Statistics
Now we turn our attention to slightly more advanced models and assess the finite sample performance of the linear, ratio and log–linear test statistic both in the absence and in the presence
of jumps.
5.2.1 Simulation Design
Our simulation design follows the one described in Huang & Tauchen (2005). Note that in
this simulation framework, all the parameters are chosen in such a way that they correspond
5 SIMULATION STUDY
19
λ = 0.6
M\I
39
78
390
1560
23400
3
4
-1.27
-1.08
(-0.72) ( -0.59)
-1.17
-0.94
(-0.68) (-0.54)
-0.99
-0.75
( -0.58) (-0.42)
-0.83
-0.56
( -0.48) (-0.32)
-0.51
-0.26
( -0.32) (-0.17)
λ = 1.6
5
6
-0.98
-0.92
(-0.53) ( -0.50)
-0.82
-0.76
( -0.48) (-0.44)
-0.64
-0.57
(-0.35) (-0.31)
-0.45
-0.39
(-0.25) (-0.21)
-0.18
-0.13
( -0.12) (-0.10)
3
4
-2.59
-2.19
(-2.02) ( -1.67)
-2.44
-1.97
(-1.89 ) (-1.51)
-1.96
-1.44
(-1.57 ) (-1.13 )
-1.59
-1.04
( -1.30 ) ( -0.86 )
-1.04
-0.55
(-0.86 ) ( -0.47 )
5
-1.98
(-1.50)
-1.75
(-1.32)
-1.2
( -0.94 )
-0.82
( -0.68)
-0.38
(-0.33)
6
-1.85
( -1.40)
-1.62
(-1.22)
-1.07
(-0.84)
-0.71
(-0.58)
-0.3
( -0.27)
Table 3: Estimated bias (theoretical bias) for σp2 = 1. The estimation results are based on 5000
replications. Recall that M = 1/∆n and that I denotes the power in RMP V (2; I)n1 .
to parameter values which one can find in real data (see the references in Huang & Tauchen
(2005)). We simulate asset price data from three different models:
Constant volatility jump diffusion
dYt = µdt + exp(β0 + β1 v)dWtY + dLJt ,
Stochastic volatility jump diffusion
dYt = µdt + exp(β0 + β1 vt )dWtY + dLJt ,
dvt = αv vt dt + dWtv ,
where W Y , W v are standard Brownian motions with Corr(dW Y , dW v ) = ρ, vt is the
stochastic volatility factor, LJt compound Poisson process with constant jump intensity λ
2
and jump size distribution N(0, σjmp
).
Two–factor stochastic volatility model
(1)
(2)
dYt = µdt + s– exp(β0 + β1 vt + β2 vt )dWtY ,
(1)
(1)
(2)
(2)
(1)
dvt = αv(1) vt dt + dWtv ,
(2)
(2)
dvt = αv(2) vt dt + (1 + βv(2) vt )dWtv ,
(1)
(2)
where vt is a Gaussian process, vt exhibits a feedback term in the diffusion function,
s– exp is the usual exponential function with a polynomial function splined in at very high
values of its argument to ensure that for βv(2) 6= 0 the growthconditions (fora solution
(1)
= ρ1 and
to exist and the Euler scheme to work) are satisfied and Corr dW Y , dW v
(2)
= ρ2 .
Corr dW Y , dW v
5 SIMULATION STUDY
20
–1.4
–1.6
–1.8
–2
0
3
4
0.2
5
0.4
6
i
7
0.6
8
Delta
0.8
9
10
1
Figure 3: Finite sample bias for λ = 1.6 and σp2 = 1.
Following Huang & Tauchen (2005), we choose as basic unit of time for our simulation one
day. We then simulate the diffusion parts in the stochastic volatility models based on the Euler
scheme, where we choose the increment of one second per tick on the Euler clock. For simulating the jump component, we follow exactly Huang & Tauchen (2005)’s approach by simulating
the exponentially distributed jump times and then the normally distributed jump sizes (from
2
N(0, σjmp
)).
Our parameter choices are given in Table 4 and are identical to the choices in Huang &
Tauchen (2005).
5.2.2 Simulation Results
Now we turn our attention to the simulation results based on the simulation design described in
Huang & Tauchen (2005) where jumps occur randomly and their arrival times are exponentially
distributed with parameter λ. We have simulated jumps at two different arrival rates. For the
low value of λ we have not observed more than two jumps at any day in our 5000–day sample
and for the high value of λ there were never more than three jumps a day, and in both cases
6 EMPIRICAL STUDY
Parameter
µ
β0
β1
β2
βv(2)
v
ρ , ρ1
ρ2
av , av(1)
av(2)
p
λ
21
Constant volatility Stochastic volatility
Two–factor
jump diffusion
jump diffusion
stochastic volatility model
0.03
0
-1.2
0.125
0.04
—
1.5
—
0.25
1
—
—
-0.62
- 0.3
—
- 0.3
—
-0.1
- 0.00137
—
-1.386
{0.1, 0.2, 0.5, 0.7}
—
{0.0114, 0.118}
—
Table 4: Choice of parameters for the constant volatility model with finite activity jumps,
for the one–factor stochastic volatility model with finite activity jumps and for the two–factor
stochastic volatility model.
there were many days where no jumps occurred at all. The simulation results are given in
the Tables 5 – 9 in the Appendix (Section 4.8.2). For both the constant and the one–factor
stochastic volatility model, the finite sample performance is really good. Also when we look
at low values for M, e.g. the case for M = 39 (which corresponds to computing the realised
variance and realised multipower variation based on 10–minute returns) the results are not too
bad and the log–linear test seems to perform particularly well already at that low frequency.
The performance of the linear and ratio test statistic occurs then to be a bit poorer when we
look at the two–factor stochastic volatility model. But again, the log–linear test leads to good
results. And clearly, when M increases, we see that the finite sample bias decreases and that
our estimates for the asymptotic variance get even better.
So given these results, which test statistic do we want to use when we study real data? When
we analyse data of relatively low frequency, it might be worth using the log–test and probably
quadpower rather than tripower variation. However, if we want to test for jumps in data of a
very high frequency, the performance of all three test statistics based on various powers in the
multipower variation gets quite similar.
6 Empirical Study
In our empirical study, we focus on equity high frequency quote data. Here we have chosen
IBM, General Motors (GM) and Shell (RDSA) intra–day TAQ data, available at WRDS, from
1 September 2005 to 31 August 2006. Figure 4 provides the plots of the three time series of
logarithmic daily prices and the corresponding daily realised variances.
Before analysing the data, we have cleaned the data. Following methods used by Hansen
& Lunde (2006b), we concentrate on quote data from one stock exchange only. Here we have
chosen the NYSE. We only consider quotes where both the bid–size and the ask–size are greater
(a)
(c)
(e)
50
100
150
200
Days
50
100
150
200
Days
50
100
150
Days
200
IBM realised variances
2
4
22
250
RDSA realised variances GM realised variances
1
2
3
20
40
RDSA log−prices
410
420
430
GM log−prices
300
325
350
IBM log−prices
430
440
450
6 EMPIRICAL STUDY
250
250
(b)
0
0
0
(d)
(f)
50
100
150
200
250
150
200
250
150
200
250
Days
50
100
Days
50
100
Days
Figure 4: (a)Log–prices for IBM for September 2005 to August 2006; (b) daily realised variances for IBM; (c) log–prices for GM for September 2005 to August 2006; (d) daily realised
variances for GM; (e) log–prices for RDSA for September 2005 to August 2006; (f) daily realised variances for RDSA.
than 0 and which are quoted in a normal trading environment (quote condition = 12 in the TAQ
database). Since data at the beginning and at the end of a trading day differ quite a lot from the
quotes during the day, we concentrate on data from 9.35 am until 15.55pm only. Furthermore,
we have focused on the bid–prices only. In order to construct a time series of five minute returns
of the log–bid–prices we use the previous tick sampling method. After we have cleaned the data,
we have a data set consisting of 249 business days with 76 five minute returns per day, hence
18,924 returns.
We have computed the difference of realised variance and realised quadpower for each day
in order to estimate the jump part of the quadratic variation. Besides we have calculated its
upper and lower 95% confidence bounds. The corresponding plot is given in Figure 5.
We observe the following. For the IBM data, we find that on 18 days (out of 249 business
days in the sample) the 0 is not in the confidence intervals, which indicates that there might
have been a jump or even several jumps on these days. This corresponds to 7.2% of the days.
For GM, we observe 41 days and, for RDSA, 36 days, where the 0 is not within the confidence
bounds, which corresponds to 16.4% and 14.4% of the days, respectively.
6 EMPIRICAL STUDY
IBM: RV − RMPV
−2
0
2
23
20
40
60
80
100
120 140
Days
160
180
200
220
240
0
20
40
60
80
100
120 140
Days
160
180
200
220
240
0
20
40
60
80
100
120 140
Days
160
180
200
220
240
RDSA: RV − RMPV
0.0
2.5
GM: RV − RMPV
0
25
50
75
0
Figure 5: Estimated jump part of the quadratic variation with confidence bounds for IBM, GM
and RDSA data.
7 CONCLUSION
24
7 Conclusion
In this paper, we have derived the joint distribution of realised variance and realised multipower
in the presence of jumps. Hence we were able to derive the asymptotic distribution of various
test statistics both under the null hypothesis of no jumps and under the alternative hypothesis.
In particular, we can now make inference on the jump part of quadratic variation using our key
result given in equation (13), which says that under some regularity assumptions and if Y and
σ have no common jumps then
1
√ (RVtn − RMP V (2; I)nt − [Y ]d∆n [t/∆n ] )
∆n
Z t
X
stably in law
−→
MN 0, θI
σu4 du + 2
0
p: TP ≤t
∆YTp
2 σT2p− + σT2p
!
,
where I ∈ N with 2 < I < 2s (2 − s) and s < 1.
Furthermore, we have shown how these (infeasible) limit results can be converted into feasible limit theorem, which can be used in practice.
We have carried out a detailed simulation study where we have chosen the parameters in
such a way that the resulting data look very much like data we might find in practice. In our simulation study we have compared the final sample performance of the linear, ratio and log–linear
test statistic for various multipower variations. We found that the finite sample performance is
good.
We have applied our theoretical results to some high frequency equity data and have been
able to identify days where the jump component of the quadratic variation seems to be significantly bigger than 0, which indicates that there might have been one or more jumps on these
days.
In future work, it will be interesting to study particularly two questions in more detail. First,
how do the results change when we allow for market microstructure noise in the model? How
robust are our test statistics and how does the asymptotic distribution change? Second, how
do these results extend to a multivariate framework? Very recent work by Barndorff-Nielsen &
Shephard (2007a) and Jacod & Todorov (2007) has already addressed the question of testing
for common and disjoint jumps of multivariate price processes. So it would be very interesting
to see whether it would be possible to extend the results from this paper to a multivariate model
setting.
A Proofs
Proof of Theorem 4.1 The univariate results follow from Jacod (2007, Theorem 2.11 (ii)) and
Jacod (2006, Theorem 6.2). In order to derive the multivariate central limit result, we use a
modified version of Jacod (2007, Theorem 2.12) which can account for multipower variation
rather than power variation only. For the proof of the theorem, we essentially have to prove
three lemmas (Lemma A.1 – Lemma A.3), which we will do in the following. But first of all,
let us state one further stronger assumption which can be relaxed afterwards.
A PROOFS
25
Hypothesis (SH) The hypothesis (H) holds and the processes (bt ), (ct ) and (Ft (φ2 )) are bounded by a non–random constant and the jumps of Y are also bounded by a constant.
We refer to Barndorff-Nielsen, Graversen, Jacod, Podolskij & Shephard (2006) and Jacod
(2007, Section 4 and 5) for more details about how the validity of the corresponding limit
results under stronger hypothesis leads to their validity under (H).
Remark Barndorff-Nielsen, Graversen, Jacod & Shephard (2006, Theorem 2 (in particular,
Example 7)) contains the following bivariate limit theorem for realised variance and realised
bipower variation for a Brownian semimartingale Y , i.e. in the absence of jumps, as n → ∞,
!
√ Rt 2
n
1
2 0 σu dW u
RVt − [Y ]t
stably in law
√ Rt 2
√ Rt
√
−→
n
fu ,
RMP
V
((1,
1))
−
[Y
]
2 0 σu dW u + θ2 0 σu2 dW
∆n
t
t
f.
stably in law for independent Brownian motions W and W
By using exactly the same reasoning, we can show that in the continuous semimartingale
framework we obtain, as n → ∞,
!
√ Rt 2
1
2 0 σu dW u
RVtn − [Y ]t
stably in law
√ Rt 2
√ Rt
√
−→
n
fu ,
2 0 σu dW u + θI 0 σu2 dW
∆n RMP V (2; I)t − [Y ]t
f.
stably in law for independent Brownian motions W and W
In order to prove our main theorem, we need some further notation, which we will introduce
in the following. Let Ytn denote a one–dimensional semimartingale. We consider functions
gj : R → Mdj ,dj+1 for j = 1, . . . , I, where Mdj ,dj+1 denotes a dj × dj+1 –dimensional matrix
with real–valued entries. Note that we are in particular interested in the following choice of
functions gj for j = 1, . . . , I and I ≥ 3. Let d1 = · · · = dI = 2, dI+1 = 1 and
2
y
0
1
0
1
g1 (y) =
, gi (y) =
, gI (y) =
, (14)
2/I
2/I
2/I
0 µ−1
0 µ−1
µ−1
2/I |y|
2/I |y|
2/I |y|
for i = 2, . . . I − 1. In the following, we will always set the second component
Q
∆n
(2)
Y
I
i+i′ −1
√
′
g
= 0 whenever i > [t/∆n ] − I + 1. Then,
′
i
i =1
∆n

1 
√
∆n
∆n
[t/∆n ] I
X Y
i=1
gi′
i′ =1
′
Further, we define βii =
∆ni+i′ −1 Y
√
∆n

 = √1
∆n
!
P[t/∆n ] n 2
(∆
Y
)
i=1
2/I
QI i n
P[t/∆n ]−I+1
µ−I
i′ =1 ∆i+i′ −1 Y
i=1
2/I
1
RVtn
.
=√
n
∆n RMP V (2; I)t
√ 1 σ(i−1)∆ ∆n ′ W
n
i+i −1
∆n
ρni (gi′ )
=
Z
for i′ = 1, . . . , I and
gi′ (x)fσ(i−1)∆n (x)dx,
A PROOFS
26
(componentwise for the diagonal matrices and vectors defined above), where fσ(i−1)∆n is the
2
)–distributed random variable. So, finally, we define the following
density of a N(0, σ(i−1)∆
n
random vector:
( I
)
I
′ Y
Xn ] Y
p [t/∆
n
n
U t = U (g1 , . . . , gI )t = ∆n
gi′ βii −
(15)
ρni (gi′ ) ,
i=1
i′ =1
i′ =1
Q
I
i′ =1
′
βii
(2)
which is in Md1 ,dI+1 . Note here that we will set the second component
gi′
−
Q
(2)
I
n
= 0 whenever i > [t/∆n ] − I + 1. The following Lemma states a central
i′ =1 ρi (gi′ )
limit theorem for the random vector (15).
Lemma A.1 Assume that (SH) holds and let g1 , . . . , gI denote continuous even functions of
at most polynomial growth with gi : Rd → Mdi ,di+1 for i = 1, . . . I as defined in (14). So, in
n
n
particular, we have d1 = 2 and dI+1 = 1. Let U = U (g1 , . . . gI ) denote the stochastic process
defined in (15) with components

!(j)
!(j) 
[t/∆n ] 
I
I

X
Y
Y
p
n
(j)
i′
n
U t (g1 , . . . , gI ) = ∆n
gi′ βi
−
ρi (gi′ )
,
 ′

′
i=1
i =1
i =1
n
for j = 1, 2. Then the d1 –dimensional process U converges stably in law to a limit process U
with components
2 Z t
X
(j)
k
Σj,k
Ut =
u dW u , j = 1, 2,
k=1
0
where the 2 × 2–dimensional process Σ, defined by
√ 2
2σu
0
√
√ 2
Σu =
2σu2
θI σu
(16)
is (Ft )–optional.
Proof Since we are only dealing with Brownian semimartingales in this lemma, the result follows directly along the lines of Barndorff-Nielsen, Graversen, Jacod, Podolskij & Shephard
(2006, Proposition 5.2) or by extending the proof of Jacod (2007, Lemma 5.7), which we will
sketch in the following.
One can easily show by induction on I that
!
!
j−1
I
I
I
I
′ Y
′ Y
X
Y
Y
gi′ βii −
gi′ βii
ρni (gi′ ) =
gj βij − ρni (gj )
ρni (gi′ ) ,
i′ =1
i′ =1
j=1
i′ =1
i′ =j+1
where an empty product is set to 1. This term is not measurable with respect to Fi∆n , which
we need in order to be able to apply Jacod & Shiryaev (2003, Theorem IX.7.19 and Theorem
IX.7.28). So we use the same methods which have been applied in the proof of BarndorffNielsen, Graversen, Jacod, Podolskij & Shephard (2006, Proposition 5.2). I.e. we shift the
A PROOFS
27
terms back in time to make them measurable w.r.t. Fi∆n . We do not shift the first term in the
sum, but we shift the second term by one, the third term by two etc. and, finally, the Ith term by
I − 1. By doing that we get a new random variable
!
!
j−1
I
I
′
Y
Y
p X
i
−(j−1)
ζin = ∆n
gi′ βi
gj βi1 − ρni−(j−1) (gj )
ρni−(j−1) (gi′ ) , (17)
j=1
i′ =1
i′ =j+1
which is clearly measurable with respect to Fi∆n . As in the proof of Barndorff-Nielsen, Graversen, Jacod, Podolskij & Shephard (2006, Proposition 5.2) one can easily show that
n
U t (g1 , . . . , gI )
[t/∆n ]−I+1
−
X
ucp
ζin −→ 0, as n → ∞.
i=I
Let Eni−1 (·) = E ·|F(i−1)∆n . Trivially, we get Eni−1 (ζin ) = 0 and Eni−1 (||ζin ||4) ≤ K∆2n (for a
constant K > 0). Analogously to the proof of Barndorff-Nielsen, Graversen, Jacod, Podolskij
& Shephard (2006, Proposition 5.2), we obtain in particular that
[t/∆n ]
X
Eni−1
i=1
[t/∆n ]
X
i=1
ζij;nζik;n
ucp
−→
Z
t
0
(Σu Σ∗u )j,k du,
ucp
Eni−1 ζij;n∆ni N −→ 0, if N = W or N ∈ N ,
(18)
(19)
as n → ∞, where N is the set of all bounded (Ft )–martingales which are orthogonal to W .
Now the result follows from Jacod & Shiryaev (2003, Theorem IX.7.19 and Theorem IX.7.28).
Now we study the more general case where we allow for jumps in the price process Y .
We start by introducing some notation (which is the same as in Jacod (2007)) and some more
assumptions.
Hypothesis (SK) Assumptions (K) and (SH) are satisfied and the functions γk = γ are bounded
and do not depend on k.
Let ǫ > 0 fixed. We define a process N by N = 1IE ∗ µ, where E = {x : γ(x) > ǫ}. Hence
N is a Poisson process with parameter the Lebesgue measure of E, say λ.
R
Remark Note thatR under (SK) we have R (1∧γ 2 (x))dx
R < ∞ and supx γ(x) ≤ K for a K ≥ 0.
2
Therefore Rwe get R γ (x)1I{γ 2 (x)≤1} (x)dx < ∞ and R 1I{γ 2 (x)>1} (x)dx < ∞. So altogether,
we obtain R γ 2 (x)dx < ∞, since
Z
Z
Z
2
2
γ (x)dx =
γ (x)1I{γ 2 (x)≤1} (x)dx + γ 2 (x)1I{γ 2 (x)>1} (x)dx
R
RZ
ZR
≤
γ 2 (x)1I{γ 2 (x)≤1} (x)dx + K 1I{γ 2 (x)>1} (x)dx < ∞.
R
R
A PROOFS
28
Therefore, we can deduce that λ is indeed finite:
Z
Z
γ(x)2
λ=
1I{x:γ(x)>ǫ} (x)dx ≤
dx < ∞.
ǫ2
R
R
Depending on ǫ, we define the following quantities:
• S1 , S2 , . . . are the successive jump times of N,
• I(n, p) = i, S− (n, p) = (i − 1)∆n , S+ (n, p) = i∆n on {(i − 1)∆n < Sp ≤ i∆n },
1
√1
• α− (n, p) = √∆
,
−
W
W
W
−
W
,
α
(n,
p)
=
S
(n,p)
S
S
(n,p)
S
+
p
p
−
+
∆n
n
• Rp = ∆YSp ,
• Y (ǫ)t = Yt −
′
P
p:Sp ≤t
Rp ,
• Rpn = ∆ni Y (ǫ) on the set {(i − 1)∆n < Sp ≤ i∆n },
p
p
′
′
• Rp = ξp Up σSp− + 1 − ξp Up σSp ,
• Ωn (T, ǫ) = {ω : each interval [0, T ] ∩ ((i − 1)∆n , i∆n ] contains at most one Sp (ω);
|∆ni Y (ǫ)(ω)| ≤ 2ǫ, ∀i ≤ T /∆n }.
n
Lemma
A.2
Under
(SK),
the
sequences
U
,
(α
(n,
p),
α
(n,
p))
converge stably in law
−
+
p≥1
p
p
′
to U ,
ξp Up , 1 − ξp Up p≥1 as n → ∞.
Proof Most parts of this proof are identical to the corresponding proof by Jacod (2007, p. 31–
32). However, in Step 2, we have adjusted the proof to allow for multipower variation.
Step 1: We have to prove that for all bounded A–measurable random variables Ψ and all
bounded Lipschitz functions Φ on the Skorohod space of d–dimensional functions on
R+ endowed with a distance for the Skorohod
and all q ≥ 1 and all continuous
Qtopology,
q
2
bounded functions fp on R , and with An = p=1 fp (α− (n, p), α+ (n, p)) then
E ΨΦ Ū
n
An
q
p
p
Y
′
→ Ẽ ΨΦ(Ū )
Ẽ fp ( ξp Up , 1 − ξp Up ) ,
p=1
as n → ∞. (20)
Replacing Ψ by E ( Ψ| G) on both sides, it is sufficient to prove the limit result (20) for
a Ψ which is measurable with respect to the separable σ–field G generated by both the
measure µ and the processes b, σ, W and Y .
Step 2: Let µ′ and µ′′ ( ν ′ and ν ′′ , respectively) denote the restrictions of µ (and ν, respectively)
to R+ × E c and to R+ × E. Further, let (F ′) denote the smallest filtration containing (Ft )
′
such that µ′′ is F0 -measurable. Clearly W is a Wiener process and µ′ is a Poisson random
′
measure with compensator ν ′ relative to (Ft ), but also relative to (Ft ).
Now we define a set of intervals surrounding the jump times of the Poisson process N. Let
m ∈ N be any positive integer, then we define Spm− = (Sp −1/m)+ , Spm+ = Sp +1/m and
A PROOFS
29
′
Bm = ∪p≥1 (Spm− , Spm+ ]. Since the indicator function 1IBm (ω, t) is F0 ⊗ R+ –measurable,
Rt
′
we can define the stochastic integral W (m)t = 0 1IBm (u)dWu . Now let (Ft m ) denote
′
′
the smallest filtration containing (Ft ) such that W (m) is (F0m )–measurable. Further, we
define the set Γn (m, t) = {i ∈ N : i ≤ [t/∆n ] and Bm ∩((i−1)∆n , i∆n ] = ∅}. Similarly
′n
to Jacod (2007), we define two bivariate processes U (m), where we just sum over the
integers which are not “close” to the jump times, and U (m), with components:

!j
!j 
I
I
X
Y
Y
p
′n
′

U (m)jt = ∆n
gi′ βii
−
ρni (gi′ )  ,
i∈Γn (m,t)
U (m)jt
=
2 Z
X
j ′ =1
t
0
′
i′ =1
i′ =1
j′
c (u) dW ,
Σj,j
u
u 1IBm
where Σ is defined by (16) and j = 1, 2. Once again, note that both integrals are well–
defined since W is a Brownian motion w.r.t. the smallest filtration containing (Ft′ ) and
ucp
′
F0m at time 0. Clearly, Bm → ∪p {Sp } for m → ∞ and, hence, U (m) −→ U as m → ∞.
Note that
\[
\Gamma_n(m,t)^c = \big\{i : i \le [t/\Delta_n],\ B_m \cap ((i-1)\Delta_n, i\Delta_n] \ne \emptyset\big\}
\subseteq \Big\{i : i \le [t/\Delta_n],\ \exists p : |i\Delta_n - S_p| \le \tfrac{2}{m}\Big\}.
\]
Note that in the following, the constant $K$ can change from line to line, but will not depend on $n$, $t$ and $m$ (but will depend on $\epsilon$).
Since the conditional expectation of $\zeta^{j;n}_i$ is zero if we condition on the past before $(i-1)\Delta_n$ and on the sequence of stopping times $S_p$, which are independent of $W$, i.e. $E\big(\zeta^{j;n}_i \,\big|\, \mathcal{F}'^m_{(i-1)\Delta_n}\big) = 0$, we conclude that $\bar U^n(g_1,\ldots,g_I)^j - \bar U'^n(m)^j$ is indeed a martingale with respect to $(\mathcal{F}'^m_t)$. By applying Doob's inequality, we obtain the following:
\[
E\Big(\sup_{s\le t}\big|\bar U^n(g_1,\ldots,g_I)^j_s - \bar U'^n(m)^j_s\big|^2\Big) \le 4\Delta_n\, E\Bigg(\sum_{p=1}^{\infty}\sum_{i=1}^{[t/\Delta_n]} \big|\zeta^{j;n}_i\big|^2\,\mathbf{1}_{\{|i\Delta_n - S_p| \le 2/m\}}\Bigg).
\]
Since all functions $g_i$ (for $i = 1, \ldots, I$) are of at most polynomial growth, there exist constants $\tilde p_1, \ldots, \tilde p_I$ such that (by induction on $I$)
\[
E\Big(\sup_{s\le t}\big|\bar U^n(g_1,\ldots,g_I)^j_s - \bar U'^n(m)^j_s\big|^2\Big) \le K\Delta_n\, E\Bigg(\sum_{p=1}^{\infty}\ \sum_{1\le i\le[t/\Delta_n]:\ |i\Delta_n - S_p|\le 2/m}\ \prod_{i'=1}^{I}\Big(1 + \big|\beta_i^{i',n}\big|^{\tilde p_{i'}}\Big)\Bigg).
\]
Since $\sigma$ is bounded and, for fixed $p$,
\[
\#\Big\{i : i \le [t/\Delta_n],\ |i\Delta_n - S_p| \le \frac{2}{m}\Big\} \le \frac{4}{m\Delta_n},
\]
we obtain from (SH) that
\[
E\Big(\sup_{s\le t}\big|\bar U^n(g_1,\ldots,g_I)^j_s - \bar U'^n(m)^j_s\big|^2\Big) \le \frac{K}{m}\, E\Bigg(\sum_{p=1}^{\infty} \mathbf{1}_{\{S_p \le t+1\}}\Bigg)
= \frac{K}{m} \sum_{p=1}^{\infty} P(S_p \le t + 1) = \frac{K}{m} \sum_{p=1}^{\infty} P(N_{t+1} \ge p) = \frac{K}{m}\,\lambda(t+1).
\]
It is now sufficient to prove that, for each fixed $m$ and each $\mathcal{G}$–measurable and bounded $\Psi$, as $n \to \infty$,
\[
E\Big(\Psi\,\Phi\big(\bar U'^n(m)\big)\,A_n\Big) \to \tilde E\Big(\Psi\,\Phi\big(\bar U(m)\big)\Big)\,\prod_{p=1}^{q} \tilde E\Big(f_p\big(\sqrt{\xi_p}\,U_p,\ \sqrt{1-\xi_p}\,U'_p\big)\Big), \tag{21}
\]
since $\Phi$ is Lipschitz and bounded.
Step 3: This part of the proof follows directly from Jacod (2007, page 32). Now we fix $m$ and define a regular version $Q = Q_\omega(\cdot)$ of the probability measure $P$ on $(\Omega, \mathcal{G})$, conditional on $\mathcal{F}'^m_0$, and similarly $\tilde Q = Q \times P'$.

When $i \in \Gamma_n(m,t)$, then $\Delta^n_i W/\sqrt{\Delta_n}$ is independent of $\mathcal{F}'^m_0$ and hence also standard normally distributed under each $Q_\omega$. So we can state (18) and (19), where we replace $E^n_{i-1}$ by $E_{Q_\omega}\big(\cdot \mid \mathcal{F}'^m_{(i-1)\Delta_n}\big)$. Since $B^c_m$ is a locally finite union of intervals, we get
\[
\sum_{i\in\Gamma_n(m,t)} E_{Q_\omega}\Big(\zeta^{j;n}_i\,\zeta^{j';n}_i \,\Big|\, \mathcal{F}'^m_{(i-1)\Delta_n}\Big) \to \int_0^t \big(\Sigma_u \Sigma^*_u\big)^{j,j'}\,\mathbf{1}_{B^c_m}(u)\,du,
\]
for $j, j' = 1, 2$ as $n \to \infty$. Therefore, we obtain for $n \to \infty$:
\[
E_{Q_\omega}\Big(\Psi\,\Phi\big(\bar U'^n(m)\big)\Big) \to \tilde E_{\tilde Q_\omega}\Big(\Psi\,\Phi\big(\bar U(m)\big)\Big), \tag{22}
\]
so $\bar U'^n(m) \to \bar U(m)$ stably in law under the measure $Q_\omega$.
Step 4: Clearly, $A_n$ is $\mathcal{F}'^m_0$–measurable. Therefore, we can express the left hand side of (21) as
\[
E\Big(\Psi\,\Phi\big(\bar U'^n(m)\big)\,A_n\Big) = E\Big(A_n\, E_{Q_\cdot}\big(\Psi\,\Phi(\bar U'^n(m))\big)\Big)
= \tilde E\Big(A_n\, \tilde E_{\tilde Q_\cdot}\big(\Psi\,\Phi(\bar U(m))\big)\Big) + \tilde E\Big(A_n\,\Big[E_{Q_\cdot}\big(\Psi\,\Phi(\bar U'^n(m))\big) - \tilde E_{\tilde Q_\cdot}\big(\Psi\,\Phi(\bar U(m))\big)\Big]\Big).
\]
Note that all quantities in the formula above are bounded, so by (22) the second summand on the right hand side converges to 0, and $\Psi' = \tilde E_{\tilde Q_\cdot}\big(\Psi\,\Phi(\bar U(m))\big)$ is also a bounded and $\mathcal{F}'^m_0$–measurable variable. So, proving (21) is equivalent to proving
\[
\tilde E\big(\Psi'\, A_n\big) \to \tilde E(\Psi')\,\prod_{p=1}^{q} \tilde E\Big(f_p\big(\sqrt{\xi_p}\,U_p,\ \sqrt{1-\xi_p}\,U'_p\big)\Big),
\]
for all $\Psi'$ bounded and $\mathcal{F}'^m_0$–measurable, which is implied by
\[
\big(\alpha_-(n,p),\ \alpha_+(n,p)\big)_{p\ge1} \xrightarrow{\ \text{stably in law}\ } \big(\sqrt{\xi_p}\,U_p,\ \sqrt{1-\xi_p}\,U'_p\big)_{p\ge1}, \qquad \text{as } n \to \infty.
\]
However, this convergence result follows directly from Jacod & Protter (1998, Lemma 6.2). So the result follows.
Finally, we generalise the results from Lemma A.2 and obtain the final auxiliary limit result
which we need for the proof of our main theorem.
Lemma A.3 Under the assumptions of Lemma A.2, the sequences $\Big(\bar U^n, \big(R'^n_p/\sqrt{\Delta_n}\big)_{p\ge1}\Big)$ converge stably in law to $\Big(\bar U, \big(R'_p\big)_{p\ge1}\Big)$, as $n \to \infty$.
Proof This proof goes along the lines of the proof of Jacod (2007, Lemma 5.9). However, for completeness we sketch Jacod (2007)'s proof here.

Since $\sigma$ is càdlàg and by construction of $R'_p$, we can deduce from Lemma A.2 that it is sufficient to prove
\[
w^n_p = R'^n_p/\sqrt{\Delta_n} - \sigma_{S_-(n,p)}\,\alpha_-(n,p) - \sigma_{S_p}\,\alpha_+(n,p) \stackrel{P}{\to} 0, \tag{23}
\]
for any $p \ge 1$. Let $\mu'$ and $(\mathcal{F}'_t)$ be defined as in the previous proof. From
\[
Y_t = Y_0 + \int_0^t b'_s\,ds + \int_0^t \sigma_s\,dW_s + \delta \star (\mu - \nu)_t,
\]
where $b'_t = b_t + \int \kappa'(\delta(t,x))\,dx$, we obtain
\[
Y(\epsilon)_t = Y_0 + \int_0^t b'(\epsilon)_s\,ds + \int_0^t \sigma_s\,dW_s + \delta \star (\mu' - \nu')_t,
\]
where $b'(\epsilon)_t = b'_t - \int_E \delta(t,x)\,dx$, and the stochastic integral can be taken relative to both $(\mathcal{F}_t)$ and $(\mathcal{F}'_t)$. Further, $Y(\epsilon)$ satisfies (SH) for the filtration $(\mathcal{F}'_t)$. Based on the definition $Y' = Y - Y^c - Y_0$, we have $Y'(\epsilon) = Y(\epsilon) - Y^c - Y_0$. Then
\[
w^n_p = \frac{1}{\sqrt{\Delta_n}}\left(\Delta^n_{I(n,p)} Y'(\epsilon) + \int_{S_-(n,p)}^{S_p} \big(\sigma_u - \sigma_{S_-(n,p)}\big)\,dW_u + \int_{S_p}^{S_+(n,p)} \big(\sigma_u - \sigma_{S_p}\big)\,dW_u\right).
\]
Then it can be shown that, for $\epsilon = \Delta_n^{1/4}$,
\[
E\big((w^n_p)^2\big) \le K\sqrt{\Delta_n} + K E\left(\frac{1}{\Delta_n}\int_{S_-(n,p)}^{S_+(n,p)} du \int_{E^c \cap \{x : |\delta(u,x)| \le \Delta_n^{1/4}\}} \delta(u,x)^2\,dx\right)
+ K E\left(\frac{1}{\Delta_n}\int_{S_-(n,p)}^{S_p} \big(\sigma_u - \sigma_{S_-(n,p)}\big)^2\,du + \frac{1}{\Delta_n}\int_{S_p}^{S_+(n,p)} \big(\sigma_u - \sigma_{S_p}\big)^2\,du\right).
\]
Finally, since $|\delta| \le \gamma$ and $\int \gamma(x)^2\,dx < \infty$ and since $\sigma$ is càdlàg and bounded, the expression above converges to 0 as $n \to \infty$ (by Lebesgue's theorem). So $w^n_p \stackrel{P}{\to} 0$.
Now we can combine the results from the three Lemmas above to deduce the result of Theorem 4.1 analogously to the proof of Jacod (2007, Theorem 2.12). That is, note that Lemma A.1 is multidimensional. The one–dimensional results have been deduced from the corresponding components of Lemma A.1 by Jacod (2007, Theorem 2.11 (ii)) for the realised variance, and for the realised multipower variation by Barndorff-Nielsen, Graversen, Jacod, Podolskij & Shephard (2006, p. 10–11) in the absence of jumps and by Jacod (2006, Theorem 6.2) in the presence of jumps. Hence the way in which these results are deduced from Lemma A.1 can be carried over separately for each component in the multidimensional case, and Theorem 4.1 holds.
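For reference, the statistics studied in Theorem 4.1 are straightforward to compute from a sample of intraday returns. The sketch below is a minimal Python illustration, assuming the usual normalisation of realised multipower variation with $I$ equal powers $2/I$ (so that the powers sum to two) and the absolute moments $\mu_p = E|U|^p$ of a standard normal variable $U$; the exact conventions are those of the main text, and the function names and the simulated returns in the example are our own.

    import numpy as np
    from scipy.special import gamma

    def mu(p):
        # p-th absolute moment of a standard normal variable, E|U|^p.
        return 2.0 ** (p / 2.0) * gamma((p + 1.0) / 2.0) / np.sqrt(np.pi)

    def realised_variance(r):
        # Sum of squared intraday returns: estimates the full quadratic variation.
        return np.sum(r ** 2)

    def multipower_variation(r, I=3):
        # Realised multipower variation with I equal powers 2/I; since the powers sum to
        # two, no additional Delta_n scaling is needed. For I >= 2 each power is below 2,
        # which is what makes the statistic robust to finite-activity jumps.
        p = 2.0 / I
        blocks = np.lib.stride_tricks.sliding_window_view(np.abs(r), I)
        return np.sum(np.prod(blocks ** p, axis=1)) / mu(p) ** I

    def jump_variation(r, I=3):
        # Difference of realised variance and realised multipower variation:
        # the raw estimator of the jump part of quadratic variation.
        return realised_variance(r) - multipower_variation(r, I)

    # Example with hypothetical data: M = 390 intraday log-returns without jumps.
    r = np.random.default_rng(1).normal(scale=np.sqrt(1.0 / 390.0), size=390)
    print(realised_variance(r), multipower_variation(r, I=3), jump_variation(r, I=3))

The feasible linear, ratio and log-linear test statistics whose finite sample behaviour is reported in the tables below are based on these quantities.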
B Tables
             |        Linear test        |        Ratio test         |      Log-linear test
M (K)      I |   Mean    S.D.    Cove.   |   Mean    S.D.    Cove.   |   Mean    S.D.    Cove.
39 (7)     3 |   0       0.92    0.975   |   0       0.92    0.975   |  -0.02    0.92    0.975
           4 |   0.02    0.95    0.967   |  -0.02    0.95    0.967   |   0       0.95    0.966
          10 |   0.07    1.11    0.924   |  -0.07    1.11    0.924   |  -0.01    1.05    0.939
78 (9)     3 |  -0.02    0.9     0.977   |   0.02    0.9     0.977   |  -0.04    0.91    0.973
           4 |   0       0.93    0.971   |   0       0.93    0.971   |  -0.03    0.94    0.967
          10 |   0.02    1.02    0.946   |  -0.02    1.02    0.946   |  -0.03    1.01    0.951
390 (20)   3 |   0       0.91    0.97    |   0       0.91    0.97    |  -0.01    0.92    0.967
           4 |   0.01    0.93    0.967   |  -0.01    0.93    0.967   |   0       0.93    0.966
          10 |   0.03    0.97    0.955   |  -0.03    0.97    0.955   |   0       0.97    0.958
1560 (40)  3 |   0       0.95    0.961   |   0       0.95    0.961   |   0       0.96    0.959
           4 |   0.01    0.97    0.958   |  -0.01    0.97    0.958   |   0       0.97    0.957
          10 |   0.01    0.99    0.952   |  -0.01    0.99    0.951   |   0       0.99    0.951
23400 (153) 3 |  0.02    0.97    0.955   |  -0.02    0.97    0.955   |   0.01    0.97    0.955
           4 |   0.02    0.98    0.953   |  -0.02    0.98    0.953   |   0.02    0.98    0.953
          10 |   0.02    0.99    0.947   |  -0.02    0.99    0.947   |   0.02    0.99    0.947

Table 5: Constant volatility model with λ = 0.014, σp = 1.5. We simulate data for 5000 days
and compute the mean, standard deviation and coverage of the feasible linear test statistic, the
feasible ratio test statistic and the feasible log-linear test statistic.
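To indicate how such a Monte Carlo experiment can be organised, the following Python sketch simulates days of intraday returns from a constant volatility model with compound Poisson jumps (normally distributed jump sizes with standard deviation σp) and collects the mean, standard deviation and empirical 95% coverage of a studentised statistic. The function test_statistic is only a placeholder for one of the feasible statistics of the paper, and the diffusion volatility value in the default arguments is hypothetical; λ = 0.014, σp = 1.5, M = 390 and 5000 days are taken from the table above.

    import numpy as np
    from scipy.stats import norm

    def simulate_day(M, sigma, lam, sigma_p, rng):
        # One day of M intraday returns: constant-volatility diffusion plus compound Poisson jumps.
        returns = sigma * np.sqrt(1.0 / M) * rng.normal(size=M)
        for t in rng.uniform(0.0, 1.0, size=rng.poisson(lam)):
            returns[min(int(t * M), M - 1)] += sigma_p * rng.normal()
        return returns

    def monte_carlo(test_statistic, days=5000, M=390, sigma=1.0, lam=0.014, sigma_p=1.5, seed=1):
        # Mean, standard deviation and empirical 95% coverage of a studentised statistic.
        rng = np.random.default_rng(seed)
        stats = np.array([test_statistic(simulate_day(M, sigma, lam, sigma_p, rng))
                          for _ in range(days)])
        coverage = np.mean(np.abs(stats) <= norm.ppf(0.975))
        return stats.mean(), stats.std(), coverage

Tables 5–9 report exactly these three summary quantities for different sampling frequencies M and values of I.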
             |        Linear test        |        Ratio test         |      Log-linear test
M (K)      I |   Mean    S.D.    Cove.   |   Mean    S.D.    Cove.   |   Mean    S.D.    Cove.
39 (7)     3 |  -0.06    0.96    0.965   |   0.06    0.96    0.965   |  -0.1     0.98    0.961
           4 |  -0.02    0.98    0.959   |   0.02    0.98    0.959   |  -0.06    0.98    0.962
          10 |   0.03    1.1     0.925   |  -0.03    1.1     0.925   |  -0.05    1.06    0.94
78 (9)     3 |  -0.05    0.95    0.971   |   0.05    0.95    0.971   |  -0.09    0.95    0.965
           4 |  -0.01    0.95    0.969   |   0.01    0.95    0.969   |  -0.05    0.94    0.97
          10 |   0.03    1.01    0.948   |  -0.03    1.01    0.948   |  -0.02    0.99    0.955
390 (20)   3 |  -0.06    0.95    0.961   |   0.06    0.95    0.961   |  -0.1     0.96    0.953
           4 |  -0.03    0.95    0.961   |   0.03    0.95    0.961   |  -0.06    0.95    0.958
          10 |   0       0.97    0.951   |   0       0.97    0.951   |  -0.03    0.97    0.953
1560 (40)  3 |  -0.06    0.97    0.96    |   0.06    0.97    0.96    |  -0.09    0.99    0.953
           4 |  -0.04    0.97    0.957   |   0.04    0.97    0.957   |  -0.06    0.98    0.953
          10 |  -0.02    0.98    0.955   |   0.02    0.98    0.955   |  -0.04    0.98    0.957
23400 (153) 3 | -0.01    0.98    0.953   |   0.01    0.98    0.953   |  -0.04    0.98    0.953
           4 |  -0.01    0.98    0.956   |   0.01    0.98    0.956   |  -0.02    0.98    0.957
          10 |   0       0.99    0.952   |   0       0.99    0.952   |  -0.01    0.99    0.953

Table 6: Constant volatility model with λ = 0.118, σp = 1.5. We simulate data for 5000 days
and compute the mean, standard deviation and coverage of the feasible linear test statistic, the
feasible ratio test statistic and the feasible log-linear test statistic.
             |        Linear test        |        Ratio test         |      Log-linear test
M (K)      I |   Mean    S.D.    Cove.   |   Mean    S.D.    Cove.   |   Mean    S.D.    Cove.
39 (7)     3 |   0.01    0.93    0.973   |  -0.01    0.93    0.973   |  -0.01    0.93    0.972
           4 |   0.04    0.96    0.965   |  -0.04    0.96    0.965   |   0       0.96    0.966
          10 |   0.07    1.09    0.925   |  -0.07    1.09    0.925   |   0       1.04    0.942
78 (9)     3 |  -0.01    0.88    0.978   |   0.01    0.88    0.978   |  -0.03    0.89    0.976
           4 |   0       0.92    0.972   |   0       0.92    0.972   |  -0.02    0.92    0.97
          10 |   0.03    1.01    0.949   |  -0.03    1.01    0.949   |  -0.02    0.99    0.949
390 (20)   3 |  -0.02    0.92    0.972   |   0.02    0.92    0.972   |  -0.04    0.92    0.967
           4 |  -0.01    0.94    0.964   |   0.01    0.94    0.964   |  -0.03    0.94    0.964
          10 |   0       0.97    0.955   |   0       0.97    0.955   |  -0.02    0.97    0.957
1560 (40)  3 |   0       0.96    0.959   |   0       0.96    0.959   |  -0.02    0.96    0.958
           4 |   0       0.97    0.957   |   0       0.97    0.957   |  -0.02    0.97    0.957
          10 |   0       1       0.947   |   0       1       0.947   |  -0.02    1       0.946
23400 (153) 3 |  0       0.96    0.958   |   0       0.96    0.958   |   0       0.97    0.957
           4 |   0       0.97    0.955   |   0       0.97    0.955   |   0       0.97    0.955
          10 |   0       0.99    0.95    |   0       0.99    0.95    |   0       0.99    0.949

Table 7: Stochastic volatility model with λ = 0.014, σp = 1.5. We simulate data for 5000 days
and compute the mean, standard deviation and coverage of the feasible linear test statistic, the
feasible ratio test statistic and the feasible log-linear test statistic.
             |        Linear test        |        Ratio test         |      Log-linear test
M (K)      I |   Mean    S.D.    Cove.   |   Mean    S.D.    Cove.   |   Mean    S.D.    Cove.
39 (7)     3 |  -0.09    0.96    0.966   |   0.09    0.96    0.966   |  -0.14    0.99    0.957
           4 |  -0.05    0.98    0.966   |   0.05    0.98    0.966   |  -0.1     0.99    0.961
          10 |   0       1.11    0.927   |   0       1.11    0.927   |  -0.08    1.06    0.938
78 (9)     3 |  -0.08    0.93    0.969   |   0.08    0.93    0.969   |  -0.12    0.96    0.962
           4 |  -0.05    0.95    0.966   |   0.05    0.95    0.966   |  -0.09    0.96    0.963
          10 |   0       1.02    0.945   |   0       1.02    0.945   |  -0.07    1       0.946
390 (20)   3 |  -0.05    0.96    0.962   |   0.05    0.96    0.962   |  -0.1     0.98    0.955
           4 |  -0.02    0.96    0.963   |   0.02    0.96    0.963   |  -0.05    0.96    0.959
          10 |   0.01    0.98    0.958   |  -0.01    0.98    0.958   |  -0.02    0.96    0.959
1560 (40)  3 |  -0.05    0.96    0.957   |   0.05    0.96    0.957   |  -0.08    0.98    0.952
           4 |  -0.02    0.96    0.958   |   0.02    0.96    0.958   |  -0.04    0.96    0.957
          10 |  -0.01    0.98    0.952   |   0.01    0.98    0.952   |  -0.02    0.98    0.955
23400 (153) 3 | -0.02    0.99    0.953   |   0.02    0.99    0.953   |  -0.04    1       0.949
           4 |   0       0.99    0.953   |   0       0.99    0.953   |  -0.01    0.99    0.954
          10 |   0       0.98    0.954   |   0       0.98    0.954   |   0       0.98    0.954

Table 8: Stochastic volatility model with λ = 0.118, σp = 1.5. We simulate data for 5000 days
and compute the mean, standard deviation and coverage of the feasible linear test statistic, the
feasible ratio test statistic and the feasible log-linear test statistic.
             |        Linear test        |        Ratio test         |      Log-linear test
M (K)      I |   Mean    S.D.    Cove.   |   Mean    S.D.    Cove.   |   Mean    S.D.    Cove.
39 (7)     3 |   0.26    1.05    0.943   |  -0.26    1.05    0.943   |   0.21    0.98    0.95
           4 |   0.35    1.09    0.933   |  -0.35    1.09    0.933   |   0.28    0.99    0.945
          10 |   0.63    1.18    0.878   |  -0.63    1.18    0.878   |   0.43    0.97    0.947
78 (9)     3 |   0.16    0.96    0.963   |  -0.16    0.96    0.963   |   0.13    0.94    0.966
           4 |   0.23    0.99    0.955   |  -0.23    0.99    0.955   |   0.19    0.96    0.962
          10 |   0.47    1.06    0.912   |  -0.47    1.06    0.912   |   0.36    0.95    0.956
390 (20)   3 |   0.06    0.91    0.975   |  -0.06    0.91    0.975   |   0.04    0.92    0.975
           4 |   0.09    0.94    0.969   |  -0.09    0.94    0.969   |   0.07    0.94    0.97
          10 |   0.23    0.98    0.948   |  -0.23    0.98    0.948   |   0.18    0.96    0.961
1560 (40)  3 |   0.04    0.94    0.968   |  -0.04    0.94    0.968   |   0.03    0.94    0.967
           4 |   0.06    0.95    0.961   |  -0.06    0.95    0.961   |   0.04    0.95    0.961
          10 |   0.12    0.99    0.95    |  -0.12    0.99    0.95    |   0.1     0.99    0.953
23400 (153) 3 |  0.02    0.98    0.955   |  -0.02    0.98    0.955   |   0.02    0.98    0.954
           4 |   0.03    0.99    0.953   |  -0.03    0.99    0.953   |   0.03    0.99    0.952
          10 |   0.05    0.99    0.948   |  -0.05    0.99    0.948   |   0.04    0.99    0.949

Table 9: Two–factor stochastic volatility model. We simulate data for 5000 days and compute
the mean, standard deviation and coverage of the feasible linear test statistic, the feasible ratio
test statistic and the feasible log-linear test statistic.
References
Aït-Sahalia, Y. & Jacod, J. (2006), Testing for jumps in a discretely observed process. Unpublished paper: Department of Economics, Princeton University.
Andersen, T. G., Bollerslev, T., Diebold, F. X. & Ebens, H. (2001), ‘The distribution of realized
stock return volatility’, Journal of Financial Economics 61, 43–76.
Andersen, T. G., Bollerslev, T., Diebold, F. X. & Labys, P. (2001), ‘The distribution of realized exchange rate volatility’, Journal of the American Statistical Association 96, 42–55. Correction
published in 2003, Volume 98, Page 501.
Barndorff-Nielsen, O. E., Graversen, S. E., Jacod, J., Podolskij, M. & Shephard, N. (2006), A
central limit theorem for realised power and bipower variations of continuous semimartingales, in Y. Kabanov & R. Liptser, eds, ‘From Stochastic Analysis to Mathematical Finance,
Festschrift for Albert Shiryaev’, Springer.
Barndorff-Nielsen, O. E., Graversen, S. E., Jacod, J. & Shephard, N. (2006), ‘Limit theorems
for bipower variation in financial econometrics’, Econometric Theory 22, 677–719.
Barndorff-Nielsen, O. E. & Shephard, N. (2002), ‘Econometric analysis of realised volatility
and its use in estimating stochastic volatility models’, Journal of the Royal Statistical Society
B 64, 253–280.
Barndorff-Nielsen, O. E. & Shephard, N. (2004), ‘Power and bipower variation with stochastic
volatility and jumps (with discussion)’, Journal of Financial Econometrics 2, 1–48.
Barndorff-Nielsen, O. E. & Shephard, N. (2006), ‘Econometrics of testing for jumps in financial
economics using bipower variation’, Journal of Financial Econometrics 4, 1–30.
Barndorff-Nielsen, O. E. & Shephard, N. (2007a), Measuring the impact of jumps in multivariate price processes using bipower covariation. Unpublished paper: Nuffield College, Oxford.
Barndorff-Nielsen, O. E. & Shephard, N. (2007b), Variation, jumps, market frictions and high
frequency data in financial econometrics, in R. Blundell, W. K. Newey & T. Persson, eds, ‘Advances in Economics and Econometrics. Theory and Applications, Ninth World Congress’,
Econometric Society Monographs, Cambridge University Press.
Barndorff-Nielsen, O. E., Shephard, N. & Winkel, M. (2006), ‘Limit theorems for multipower
variation in the presence of jumps’, Stochastic Processes and Their Applications 116, 796–
806.
Comte, F. & Renault, E. (1998), ‘Long memory in continuous–time stochastic volatility models’, Mathematical Finance 8, 291–323.
Hansen, P. R. & Lunde, A. (2006a), ‘Realized variance and market microstructure noise’, Journal of Business and Economic Statistics 24, 127–218.
Hansen, P. R. & Lunde, A. (2006b), ‘Rejoinder (to comments on realized variance and market
microstructure noise)’, Journal of Business and Economic Statistics.
Huang, X. & Tauchen, G. (2005), ‘The relative contribution of jumps to total price variance’,
Journal of Financial Econometrics 3(4), 456–499.
Jacod, J. (1994), Limit of random measures associated with the increments of a Brownian semimartingale. Preprint number 120, Laboratoire de Probabilités, Université Pierre et Marie
Curie, Paris.
Jacod, J. (2006), Asymptotic properties of realized power variations and related functionals of
semimartingales: Multipower variation. Université P. et M. Curie (Paris 6): Unpublished
paper.
Jacod, J. (2007), ‘Asymptotic properties of realized power variations and related functionals of
semimartingales’, Stochastic Processes and their Applications. Forthcoming.
Jacod, J. & Protter, P. (1998), ‘Asymptotic error distribution for the Euler method for stochastic
differential equations’, The Annals of Probability 26(1), 267–307.
Jacod, J. & Shiryaev, A. N. (2003), Limit Theorems for Stochastic Processes, second edn,
Springer, Berlin.
Jacod, J. & Todorov, V. (2007), Testing for common arrivals of jumps for discretely observed
multidimensional processes. Unpublished working paper.
Lee, S. S. & Mykland, P. A. (2006), Jumps in financial markets: A new nonparametric test and
jump dynamics. Technical report 566, Department of Statistics, The University of Chicago.
Protter, P. E. (2004), Stochastic Integration and Differential Equations, second edn, Springer,
London.
Veraart, A. E. D. (2007), Feasible inference for realised variance in the presence of jumps.
Oxford Financial Research Centre: Working Paper Series in Financial Economics: 2007-FE02.
Woerner, J. H. C. (2006), Power and multipower variation: inference for high frequency data,
in A. N. Shiryaev, M. R. Grossinho, P. Oliveira & M. Esquivel, eds, ‘Stochastic Finance’,
Springer, pp. 343–364.