THE INTEGRATED SQUARE-ROOT PROCESS
Daniel Dufresne, University of Montreal
Abstract
This paper studies some of the properties of the square-root process, particularly
those of the integral of the process over time. After summarizing the properties
of the square-root process, the Laplace transform of the integral of the square-root
process is derived. Three methods for the computation of the moments of this integral
are given, as well as some properties of the density of the integral. The relationship
between the Laplace transforms of a variable and of its reciprocal is studied. An
application to the generalized inverse Gaussian distribution is given.
Keywords: Square-Root Process; Stochastic Volatility; Volatility
Swaps; Generalized Inverse Gaussian Distribution
Journal of Economic Literature Classification: C63
1991 Mathematics Subject Classification: 60G99
1. Introduction
The square-root process is the unique strong solution of the stochastic differential
equation (SDE in the sequel)
    dX_t = (αX_t + β)dt + γ√(X_t) dW_t,    X_0 = x̄ ≥ 0,    (1.1)
where α ∈ R, β > 0, γ > 0, x̄ ≥ 0, and W is standard Brownian motion. Other names used
for the same process are squared Gaussian (since the square of some Gaussian processes
can be shown to satisfy (1.1) — a comment attributed to L.C.G. Rogers), Feller (because
of the early work of William Feller on the process), CIR (since Cox, Ingersoll & Ross
(1985) used (1.1) as a model for the spot interest rate, and gave the formulas for risk-free
bonds under this assumption), and Heston model (since Heston (1993) studied the pricing
of options when the squared volatility of the underlying security satisfies (1.1)).
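For readers who want to experiment, the dynamics (1.1) are easy to simulate with a standard full-truncation Euler scheme. The sketch below is not from the paper: the function name, the discretization choices and the parameter values in the test are illustrative only, and the scheme is biased for any finite step size.

```python
import math
import random

def simulate_sqrt_process(t, n_steps, alpha, beta, gamma, xbar, rng=random):
    """Full-truncation Euler scheme for dX = (alpha*X + beta)dt + gamma*sqrt(X)dW.

    Returns (X_t, Y_t), where Y_t is a trapezoidal approximation of the
    integral of X over [0, t].  This is only a sketch: X is clipped at 0
    inside the coefficients, and the scheme is first-order at best.
    """
    dt = t / n_steps
    x, y = xbar, 0.0
    for _ in range(n_steps):
        xp = max(x, 0.0)                      # clip negative excursions
        x_new = x + (alpha * xp + beta) * dt \
                  + gamma * math.sqrt(xp * dt) * rng.gauss(0.0, 1.0)
        y += 0.5 * (xp + max(x_new, 0.0)) * dt  # trapezoid rule for Y
        x = x_new
    return max(x, 0.0), y
```

With α < 0, the sample mean of X_t should be close to E X_t = −β/α + (β/α + x̄)e^{αt} (derived in Section 2), which gives a quick check of the discretization.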
The integral of X arises in both volatility and interest rate models. If the stock price
follows the process
    dS_t = μS_t dt + √(X_t) S_t dV_t,
This version: 23-7-2002
where (V, W ) is two-dimensional Brownian motion (possibly correlated), and X is the
solution of (1.1), then the integral
    Y_t = ∫_0^t X_s ds
is the cumulative variance of the stock returns up to time t. This integrated variance process
occurs in volatility swaps (Carr, 2001), and also in other derivatives (see below). The reason
for this research is the quest for explicit formulas for options with payoffs involving Yt .
The results in this paper have already been used by the author for this purpose; this will
be the subject of a subsequent paper.
The process Yt obviously arises in bond pricing, when spot interest rates follow the
CIR model. This is because the price of a risk-free zero-coupon bond maturing at time t
equals
    E^Q e^{−Y_t},
where Q is a risk-neutral measure. Another example would be options based on the average
spot interest rate, for instance a call on the average of spot rates Xt over [0, T ], with strike
R0 :
    e^{−∫_0^T X_t dt} ( (1/T) ∫_0^T X_t dt − R_0 )_+ = (1/T) e^{−Y_T} (Y_T − T R_0)_+ .
The price of such an option would be the expectation of this discounted payoff with respect
to the risk neutral measure, which requires knowledge of the distribution of YT .
The moments and distribution of Yt may also be of interest in the statistical estimation
of the parameters α, β, γ in (1.1).
Here is a summary of the paper. Section 2 shows that the moments of Xt are solutions of a sequence of ordinary differential equations, and that these moments allow the
determination of the moment generating function and of the density of the process. The
moments of Xt are required in the sequel, and the technique used to obtain them is used
again in Section 3. Moreover, Section 2 yields an improvement of a result due to Dufresne
(1989), regarding the moments of a process related to Asian options and the Courtadon
interest rate model. The density of Xt in (1.1) is of course well known, but the derivation
given in Section 2 is not common (the author has not seen it in the literature), and clearly
shows that the distribution of the process is the convolution of two distributions, one compound Poisson/Exponential and the other gamma. (The distribution of Xt has been called
“non-central chi-square,” apparently because it reduces to the distribution of the square
of a normal variable with non-zero mean when α = 0.) The better-known way of finding the moment
generating function and density of Xt is via the relationship with Bessel processes, and
this is also indicated in Section 2.
Section 3 shows how the moments of the integral of the square-root process may
be obtained, either by solving linear differential equations, or more directly by a recursive
procedure; the results are apparently new. Section 4 is a derivation of the Laplace transform
of the integral of the square-root process, based on the formula for the price of a zero-coupon bond in the CIR framework (the latter was based on results previously known in
probability theory). The Laplace transform yields a third way of calculating the moments.
Section 5 extends a result which is used in Section 4, and is of independent interest in
Probability Theory. Formulas are given which relate the Laplace transform of a random
variable U > 0 to the Laplace transform of 1/U . They extend one well-known formula
for Laplace transforms. An application to the generalized inverse Gaussian distribution is
given.
2. A sequence of ODEs; the moments and distribution of Xt
When the square-root process (1.1) is used as a financial model, it is usually made
“mean-reverting” by letting α < 0. In the sequel no such restriction is imposed, except
that Theorem 2.3 assumes α ≠ 0.
The moments of X_t are easy to obtain recursively, by applying Ito’s formula and then
taking expectations. A general formula for the moments of X_t will now be derived. The
technique is similar to the one used in Dufresne (1989, 1990) for the integral of geometric
Brownian motion, and yields an extension of one result in Dufresne (1989) (Corollary 2.2
below). Next, the Laplace transform of the density of Xt is derived, based on the moments
only. This derivation is different from the usual ones, which are either based on PDEs, or
on the relationship with Bessel processes. Our derivation is based on the assumption that
all the moments of X are finite. The moments are calculated and then summed to get the
moment generating function (MGF). Finally, the density is exhibited, and is shown to be
the convolution of a Gamma and a compound Poisson/exponential.
(N.B. The MGF of a probability measure m is the mapping
    r ↦ ∫ e^{rx} m(dx).
The concept is thus the same as the Laplace transform, but a + instead of a − in front of
the argument.)
Applying Ito’s formula with f(x) = x^k to (1.1), we get

    dX^k = (a_k X^k + b_k X^{k−1}) dt + γk X^{k−1/2} dW_t    (2.1)

with

    a_k = αk,    b_k = βk + (1/2)γ²k(k − 1).

Eq. (2.1) is the same as

    X_t^k = x̄^k + ∫_0^t (a_k X_s^k + b_k X_s^{k−1}) ds + ∫_0^t γk X_s^{k−1/2} dW_s.
Here we let the initial value of the process be x̄ ≥ 0, and we take as given that E X_t^n < ∞
for all n ≥ 0. The consequence is that the Ito integral above has expected value 0. Defining
m_k(t) = E X_t^k, k = 0, 1, . . ., we can then differentiate with respect to t on each side to obtain

    m_k′(t) = a_k m_k(t) + b_k m_{k−1}(t),    k ≥ 1, t > 0    (2.2)
    m_k(0) = x̄^k,    k ≥ 0.    (2.3)
Eq. (2.2) is of the same type as Eq.(18) in Dufresne (1989), and can be solved in the same
way. The main difference here is that the initial value of mk (t) is not 0.
Theorem 2.1. Let a_0 = 0 and suppose the numbers {a_0, . . . , a_K} are distinct. Then the
solution of (2.2), subject to (2.3) and m_0(t) ≡ 1, is

    m_k(t) = Σ_{j=0}^{k} d_{kj} e^{a_j t},    k = 0, . . . , K,    (2.4)

where

    d_{kj} = Σ_{i=0}^{j} x̄^i ( Π_{m=i+1}^{k} b_m ) Π_{ℓ=i, ℓ≠j}^{k} 1/(a_j − a_ℓ),    j = 0, . . . , k.    (2.5)
Proof. Define a product over an empty set of indices as 1. Then formulas (2.4) and (2.5)
are correct for k = 0. For k = 1, (2.2) implies

    m_1(t) = m_1(0)e^{a_1 t} + b_1 e^{a_1 t} ∫_0^t e^{−a_1 s} ds = x̄ e^{a_1 t} + b_1 (e^{a_1 t} − 1)/a_1 = −b_1/a_1 + (b_1/a_1 + x̄) e^{a_1 t}.
For j = 0 and j = 1, m_j(t) is a linear combination of e^{a_ℓ t}, ℓ ≤ j. If this claim is correct for
j = 0, . . . , k − 1, then mk (t) is the solution of an inhomogeneous ordinary differential
equation with a forcing term equal to a combination of exponentials:
    m_k′(t) − a_k m_k(t) = Σ_{j=0}^{k−1} C_j e^{a_j t}
    ⟹ m_k(t) = m_k(0)e^{a_k t} + e^{a_k t} ∫_0^t e^{−a_k s} Σ_{j=0}^{k−1} C_j e^{a_j s} ds.    (2.6)

Since the {a_0, . . . , a_K} are distinct, m_k(t) has to be of the form

    d_{k0} + d_{k1} e^{a_1 t} + · · · + d_{kk} e^{a_k t}.
By induction, this is true for k = 0, . . . , K. As a function of x̄, mk (t) is a polynomial
of degree k, and the coefficient of x̄k in mk (t) is eak t (because x̄k is the initial condition
mk (0)).
Insert (2.4) into (2.2) to obtain
    Σ_{j=0}^{k} a_j d_{kj} e^{a_j t} = Σ_{j=0}^{k} a_k d_{kj} e^{a_j t} + Σ_{j=0}^{k−1} b_k d_{k−1,j} e^{a_j t},

which implies

    d_{kj} = [b_k/(a_j − a_k)] d_{k−1,j} = [b_k/(a_j − a_k)] · · · [b_{j+1}/(a_j − a_{j+1})] d_{jj},    0 ≤ j < k.    (2.7)

Since m_j(t) is a polynomial of degree j in x̄, the last equation implies that d_{kj} is also a
polynomial of degree j in x̄, which we write as

    d_{kj} = Σ_{i=0}^{j} d_{kji} x̄^i.
By identifying the coefficients of x̄^i in (2.7), we get

    d_{kji} = [b_k/(a_j − a_k)] · · · [b_{j+1}/(a_j − a_{j+1})] d_{jji},    0 ≤ i ≤ j < k.

We also know that d_{kkk} = 1, since the leading coefficient of m_k(t), as a polynomial in x̄,
is e^{a_k t} (see (2.6)). The missing constants are thus d_{kki}, i = 0, . . . , k − 1, k ≥ 1. All this
shows that the problem is the same for each power of x̄: for any i = 0, 1, . . ., we have

    d_{iii} = 1    (2.8a)
    d_{kji} = [b_k/(a_j − a_k)] d_{k−1,j,i},    i ≤ j < k    (2.8b)
    Σ_{j=i}^{k} d_{kji} = 0,    0 ≤ i < k.    (2.8c)
(The last equality comes from the initial condition m_k(0) = x̄^k.) Hence, the problem of
finding {d_{kj0}; 0 ≤ j ≤ k} is the same as finding {d_{kj}; 0 ≤ j ≤ k} when x̄ = 0.
The latter was done in Dufresne (1989), for the special case b_k = k. The same arguments
work, however, for a general sequence {b_k; k ≥ 1}. The main argument is that

    Σ_{j=0}^{k} Π_{ℓ=0, ℓ≠j}^{k} 1/(a_j − a_ℓ) = 0,    k ≥ 1,    (2.9)
for any distinct numbers a_0, . . . , a_k. We recall the proof given in Dufresne (1989). Lagrange’s formula for the partial fractions decomposition of a rational function is

    P(x)/Q(x) = Σ_{j=0}^{k} P(a_j) / [(x − a_j) Q′(a_j)],
for polynomials P and Q, the latter with k + 1 distinct zeros {a_0, . . . , a_k}, the former of
degree less than or equal to k. Let

    Q(x) = Π_{j=0}^{k} (x − a_j),    P(x) = (y − x)/Q(y).

Then

    P(x) = Σ_{j=0}^{k} [(y − a_j)/Q(y)] · Q(x) / [(x − a_j) Q′(a_j)].

Letting y → x yields (2.9).
For i = j = 0, we find (from (2.8a, b))

    d_{k00} = ( Π_{m=1}^{k} b_m ) Π_{ℓ=0, ℓ≠0}^{k} 1/(a_0 − a_ℓ),    k ≥ 0.

Next, d_{110} = −d_{100} = b_1/(a_1 − a_0) (from (2.8c)), which implies

    d_{k10} = [b_k · · · b_2 / ((a_1 − a_k) · · · (a_1 − a_2))] d_{110} = ( Π_{m=1}^{k} b_m ) Π_{ℓ=0, ℓ≠1}^{k} 1/(a_1 − a_ℓ),    k ≥ 1.
To prove the result in general, represent the {d_{kj0}} on a grid, with k horizontal and j
vertical. For some integer J > 0, suppose that

    d_{kj0} = ( Π_{m=1}^{k} b_m ) Π_{ℓ=0, ℓ≠j}^{k} 1/(a_j − a_ℓ),    k ≥ j,  j = 0, . . . , J − 1,    (2.10)

that is, assume the result holds for the lines j = 0, . . . , J − 1. Then, by (2.8c) and (2.9),
(2.10) also holds for k = j = J. From this and (2.8b), we find, for k > J,

    d_{kJ0} = [b_k · · · b_{J+1} / ((a_J − a_k) · · · (a_J − a_{J+1}))] d_{JJ0} = ( Π_{m=1}^{k} b_m ) Π_{ℓ=0, ℓ≠J}^{k} 1/(a_J − a_ℓ).

By induction, (2.10) applies for all 0 ≤ j ≤ k. For the coefficients of x̄^i, i ≥ 1, the problem
is the same, except that it “starts” on line i, so that, in the above arguments, (j, k) has to
be replaced with (j + i, k + i).
Remark. Similar formulas hold when some of the {a_0, . . . , a_K} are identical. The only
difference is that terms of the form t^ℓ e^{a_j t} make their appearance, where ℓ is an integer
which depends on the number of times the same a_k appears in {a_0, . . . , a_K}. For instance,
suppose a_0 = 0, a_1 = a_2 ≠ 0. The solution of (2.2) is then

    m_2(t) = x̄² e^{a_1 t} + b_2 (x̄ + b_1/a_1) t e^{a_1 t} − (b_1 b_2 / a_1²)(e^{a_1 t} − 1).

The t e^{a_1 t} reappears in m_j(t), j ≥ 3. The solutions of the differential equations nevertheless
depend continuously on the parameters a_0, a_1, . . ., so that the solutions when some of them
are identical can be found by taking limits in the solutions given in the theorem.
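Formula (2.5) is straightforward to evaluate numerically. The sketch below (hypothetical code, not from the paper) computes m_k(t) for the square-root process, with a_k = αk and b_k = βk + γ²k(k−1)/2 as in (2.1); it assumes α ≠ 0, so that the a_j are distinct.

```python
import math

def moment_X(k, t, alpha, beta, gamma, xbar):
    """m_k(t) = E X_t^k via (2.4)-(2.5), assuming the a_j = alpha*j are distinct."""
    a = lambda j: alpha * j
    b = lambda j: beta * j + 0.5 * gamma**2 * j * (j - 1)
    total = 0.0
    for j in range(k + 1):
        d_kj = 0.0
        for i in range(j + 1):
            term = xbar**i
            for m in range(i + 1, k + 1):      # product of b_{i+1} ... b_k
                term *= b(m)
            for l in range(i, k + 1):          # product over l != j of 1/(a_j - a_l)
                if l != j:
                    term /= a(j) - a(l)
            d_kj += term
        total += d_kj * math.exp(a(j) * t)
    return total
```

A crude forward-Euler integration of the system (2.2)-(2.3) reproduces the same values, which is an easy way to check an implementation.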
As a first application of Theorem 2.1, we generalize Proposition 2 of Dufresne (1989).
Suppose {S_t} satisfies

    dS_t = (ηS_t + ζ)dt + σS_t dW_t,    S_0 = x̄.    (2.11)

Then the moments m_k(t) = E S_t^k, k = 0, 1, . . ., satisfy Eqs. (2.2)-(2.3), with

    a_k = kη + k(k − 1)σ²/2,    b_k = kζ.    (2.12)
Corollary 2.2. If the {ak } are all distinct, the moments of the process in (2.11) are given
by (2.4), with {ak } as in (2.12) and
by (2.4), with {a_k} as in (2.12) and

    d_{kj} = k! Σ_{i=0}^{j} (x̄^i ζ^{k−i} / i!) Π_{ℓ=i, ℓ≠j}^{k} 1/(a_j − a_ℓ),    j = 0, . . . , k.
Remark. The process S in (2.11) has been called the Courtadon interest rate model
(Courtadon, 1982). The moments of the spot rate are thus given by Corollary 2.2. The
limit distribution of St as t tends to infinity (inverse Gamma if η < σ 2 /2, ∞ otherwise)
is derived in Dufresne (1990). In that paper, S_t represents the market value of an initial
amount S0 and of a continuous payment stream of ζ units, all invested in a security
represented by a geometric Brownian motion.
Theorem 2.3. Suppose α ≠ 0. If X_t satisfies (1.1), then its moments are

    E X_t^k = Σ_{j=0}^{k} θ_{kj} e^{αjt},    k = 0, 1, . . . ,

where

    θ_{kj} = Σ_{i=0}^{j} x̄^i [k! (−1)^{k−j} ū^{k−i} (v̄)_k] / [i! (j − i)! (k − j)! (v̄)_i],    0 ≤ j ≤ k,

    ū = γ²/(2α),    v̄ = 2β/γ²,    (y)_0 = 1,    (y)_k = y(y + 1) · · · (y + k − 1),  k ≥ 1.
Proof. We have

    Π_{m=1}^{k} b_m = k! Π_{m=1}^{k} [β + γ²(m − 1)/2] = k! (γ²/2)^k (2β/γ²)(2β/γ² + 1) · · · (2β/γ² + k − 1) = k! (γ²/2)^k (v̄)_k.

For i ≤ j,

    Π_{ℓ=i, ℓ≠j}^{k} (a_j − a_ℓ) = α^{k−i} Π_{ℓ=i}^{j−1} (j − ℓ) Π_{ℓ=j+1}^{k} (j − ℓ) = (−1)^{k−j} α^{k−i} (j − i)! (k − j)!.

Hence

    ( Π_{m=i+1}^{k} b_m ) Π_{ℓ=i, ℓ≠j}^{k} 1/(a_j − a_ℓ) = (k!/i!) (γ²/(2α))^{k−i} [(v̄)_k/(v̄)_i] (−1)^{k−j} / [(j − i)! (k − j)!].
The MGF of Xt will now be obtained by summing the moments of Xt . We begin by
assuming that the following sum converges:
    Σ_{k=0}^{∞} (s^k/k!) E X_t^k = Σ_{j=0}^{∞} e^{a_j t} Σ_{k=j}^{∞} (s^k/k!) θ_{kj}
        = Σ_{k=0}^{∞} Σ_{j=0}^{k} Σ_{i=0}^{j} x̄^i [s^k (−1)^{k−j} ū^{k−i} (v̄)_k e^{a_j t}] / [i! (j − i)! (k − j)! (v̄)_i].    (2.13)
This sum will be evaluated by summing first over k, then over j, and finally over i. This
procedure needs to be justified, as the summands are not all of the same sign. The justification is given below. We will use the formula
    (1 − y)^{−c} = Σ_{n=0}^{∞} (c)_n y^n / n!,    (2.14)
which is valid for c ∈ R, |y| < 1.
For fixed i ≤ j and small enough |s|, we have
    Σ_{k=j}^{∞} x̄^i [s^k (−1)^{k−j} ū^{k−i} (v̄)_k e^{a_j t}] / [i! (j − i)! (k − j)! (v̄)_i] = x̄^i [(v̄)_j s^j ū^{j−i} e^{a_j t} / ((v̄)_i i! (j − i)!)] (1 + sū)^{−v̄−j}.
Next,

    Σ_{j=i}^{∞} x̄^i [(v̄)_j s^j ū^{j−i} e^{a_j t} / ((v̄)_i i! (j − i)!)] (1 + sū)^{−v̄−j}
        = [s^i x̄^i e^{αit} / i!] (1 + sū)^{−v̄−i} Σ_{j=i}^{∞} [(v̄ + i)_{j−i} / (j − i)!] [sū e^{αt} / (1 + sū)]^{j−i}
        = [s^i x̄^i e^{αit} / i!] (1 + sū)^{−v̄−i} [1 − sū e^{αt}/(1 + sū)]^{−v̄−i}
        = [s^i x̄^i e^{αit} / i!] [1 − sū(e^{αt} − 1)]^{−v̄−i}.
Finally, summing over i,

    E e^{sX_t} = [1 − sū(e^{αt} − 1)]^{−v̄} exp[ sx̄e^{αt} / (1 − sū(e^{αt} − 1)) ] = φ(s)^{v̄} exp[λ_t(φ(s) − 1)].    (2.15)

Here

    φ(s) = (1 − sμ_t)^{−1},    μ_t = γ²(e^{αt} − 1)/(2α),    λ_t = 2αx̄ / (γ²(1 − e^{−αt})).

The function φ(s) is the MGF of an exponential distribution, with mean equal to μ_t.
The distribution of X_t is thus the distribution of X′_t + X′′_t, where the two variables are
independent, X′_t ∼ Gamma(v̄, μ_t), and X′′_t is compound Poisson:

    X′′_t = Σ_{i=1}^{N} U_i,

with N ∼ Poisson(λ_t), U_i ∼ Exp(μ_t).
We now show that the sum in (2.13) converges. What we have just done is to sum the
quantities
    c_{ijk} = s^k x̄^i (−1)^{k−j} ū^{k−i} e^{a_j t} (v̄)_k / [i! (j − i)! (k − j)! (v̄)_i],    0 ≤ i ≤ j ≤ k < ∞.

The same steps show that

    Σ_{0≤i≤j≤k<∞} |c_{ijk}| = [1 − |s|ū(e^{αt} + 1)]^{−v̄} exp[ |s|x̄e^{αt} / (1 − |s|ū(e^{αt} + 1)) ],
if 0 ≤ |s| < 1/[ū (eαt + 1)]. Therefore: (1) the sum in (2.13) is convergent, (2) the order of
summation of the cijk is irrelevant, and (3) the result is an analytic function of s, at least
for 0 ≤ |s| < 1/[ū (eαt + 1)]. Expression (2.15) is therefore correct for s < 1/[ū (eαt − 1)],
by analytic continuation.
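Expression (2.15) is simple to evaluate numerically. The sketch below is illustrative (the function name and the parameter values in the check are hypothetical, not from the paper); a central difference at s = 0 recovers E X_t.

```python
import math

def mgf_X(s, t, alpha, beta, gamma, xbar):
    """E exp(s*X_t) via (2.15); needs alpha != 0 and s below the abscissa 1/mu_t."""
    mu = gamma**2 * math.expm1(alpha * t) / (2.0 * alpha)                 # mu_t
    lam = 2.0 * alpha * xbar / (gamma**2 * (1.0 - math.exp(-alpha * t)))  # lambda_t
    vbar = 2.0 * beta / gamma**2                                          # v-bar
    phi = 1.0 / (1.0 - s * mu)
    return phi**vbar * math.exp(lam * (phi - 1.0))
```

Differentiating the MGF numerically at the origin and comparing with the first moment of Theorem 2.3 gives a quick consistency check.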
Obviously, the limit distribution of Xt as t → ∞ exists if, and only if, α < 0, in which
case it is Gamma(2β/γ 2 , −γ 2 /(2α)) (the compound Poisson part, which depends on the
initial condition x̄, disappears in the limit). Finally, the density of Xt may be obtained
by conditioning on N . To simplify the algebra, consider U = Xt /µt : with probability
e−λt λnt /n!, the distribution of U is Gamma(v̄ + n, 1), for n = 0, 1 . . .. Hence, the density
of U is

    f_U(y) = Σ_{n=0}^{∞} e^{−λ_t} (λ_t^n / n!) [y^{v̄+n−1} e^{−y} / Γ(v̄ + n)] 1_{y>0}
           = (y/λ_t)^{(v̄−1)/2} e^{−λ_t − y} Σ_{n=0}^{∞} (√(λ_t y))^{v̄−1+2n} / [n! Γ(v̄ + n)] 1_{y>0}
           = (y/λ_t)^{(v̄−1)/2} e^{−λ_t − y} I_{v̄−1}(2√(λ_t y)) 1_{y>0},

where I_ν is the modified Bessel function of the first kind of order ν (Lebedev, 1972, p.108):

    I_ν(z) = Σ_{k=0}^{∞} (z/2)^{ν+2k} / [Γ(k + 1) Γ(k + ν + 1)],    | arg z| < π,  ν ∈ C.    (2.16)
The density of X_t is thus

    f_{X_t}(x) = (1/μ_t) (x e^{−αt} / x̄)^{(v̄−1)/2} e^{−λ_t − x/μ_t} I_{v̄−1}( [4α/(γ²(e^{αt} − 1))] √(x x̄ e^{αt}) ) 1_{x>0}.    (2.17)

(When x̄ = 0 this reduces to the Gamma(v̄, μ_t) density.)
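The Gamma plus compound Poisson/exponential mixture also gives an exact one-step simulation of X_t, with no discretization error: conditionally on N ∼ Poisson(λ_t), X_t/μ_t ∼ Gamma(v̄ + N, 1). The sketch below assumes α ≠ 0; the helper names are hypothetical.

```python
import math
import random

def _poisson(lam, rng):
    """Knuth's product method; adequate for the moderate lambda_t used here."""
    limit, k, prod = math.exp(-lam), 0, 1.0
    while True:
        prod *= rng.random()
        if prod <= limit:
            return k
        k += 1

def sample_X(t, alpha, beta, gamma, xbar, rng=random):
    """Exact draw of X_t from (1.1), using the mixture representation behind (2.15)."""
    mu = gamma**2 * math.expm1(alpha * t) / (2.0 * alpha)                 # mu_t
    lam = 2.0 * alpha * xbar / (gamma**2 * (1.0 - math.exp(-alpha * t)))  # lambda_t
    vbar = 2.0 * beta / gamma**2                                          # v-bar
    n = _poisson(lam, rng)
    return mu * rng.gammavariate(vbar + n, 1.0)
```

Averaging many such draws should reproduce E X_t = −β/α + (β/α + x̄)e^{αt}, up to Monte Carlo error.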
The MGF and the density above may also be obtained from the relationship between
the solution of the SDE (1.1) and Bessel processes (see Revuz & Yor, 1999, Chapter 11).
For a constant p > 0, the process X̃_t = pX_t satisfies

    X̃_t = px̄ + ∫_0^t (αX̃_s + βp) ds + γ√p ∫_0^t √(X̃_s) dW_s.

Letting p = 4/γ², we find that

    X̃_t = x̃ + ∫_0^t (αX̃_s + β̃) ds + 2 ∫_0^t √(X̃_s) dW_s,
where x̃ = 4x̄/γ 2 , β̃ = 4β/γ 2 . Now, consider the square of a δ-dimensional Bessel process,
that is, the unique strong solution of
    Z_t = Z_0 + δt + 2 ∫_0^t √(Z_s) dB_s
(B standard Brownian motion), and let g(t) = (1 − e−αt )/α (see Revuz & Yor, 1999, Exercise (1.13), p.448). Integrating by parts, and using the time change formula for stochastic
integrals (Revuz & Yor, 1999, p.180), we find
    e^{αt} Z_{g(t)} = e^{αt} [ Z_0 + δg(t) + 2 ∫_0^t √(Z_{g(s)}) dB_{g(s)} ]
        = Z_0 + α ∫_0^t e^{αs} Z_{g(s)} ds + δ ∫_0^t e^{αs} dg(s) + 2 ∫_0^t √(e^{αs} Z_{g(s)}) e^{αs/2} dB_{g(s)}.
The process

    B̃_t = ∫_0^t e^{αs/2} dB_{g(s)}

is a continuous martingale with quadratic variation equal to t, and is thus standard Brownian motion (with respect to its own filtration). Hence, the process Z̃_t = e^{αt} Z_{g(t)} satisfies

    Z̃_t = Z_0 + ∫_0^t (αZ̃_s + δ) ds + 2 ∫_0^t √(Z̃_s) dB̃_s.

Consequently, the processes {Z̃_t; t ≥ 0} and {X̃_t; t ≥ 0} have the same distribution, if

    Z_0 = x̃ = 4x̄/γ²,    δ = β̃ = 4β/γ².
Finally, the process {X_t; t ≥ 0} has the same distribution as {(γ²/4) e^{αt} Z_{g(t)}; t ≥ 0}. The MGF
of Z_τ is (Revuz & Yor, 1999, p.441)

    E e^{ρZ_τ} = (1 − 2ρτ)^{−δ/2} exp[ρZ_0 / (1 − 2ρτ)].

It can be checked that appropriate substitutions yield (2.15). In the same fashion, the
density of Z_τ is

    f_{Z_τ}(z) = (1/(2τ)) (z/Z_0)^{(δ/2−1)/2} e^{−(Z_0 + z)/(2τ)} I_{δ/2−1}(√(Z_0 z)/τ) 1_{z>0}.
This implies that the density of X_t is (for x > 0)

    [2αe^{−αt} / (γ²(1 − e^{−αt}))] (x e^{−αt}/x̄)^{(v̄−1)/2} exp[ −2α(x̄ + x e^{−αt}) / (γ²(1 − e^{−αt})) ] I_{v̄−1}( 4α√(x x̄ e^{−αt}) / (γ²(1 − e^{−αt})) )
        = (1/μ_t) (x e^{−αt}/x̄)^{(v̄−1)/2} e^{−λ_t − x/μ_t} I_{v̄−1}( 4α√(x x̄ e^{−αt}) / (γ²(1 − e^{−αt})) ),

which agrees with (2.17).
It is also possible to find an expression for E X_t^p for non-integral p. One could integrate
the density times x^p term by term; an alternative is the following: since

    1/y^q = (1/Γ(q)) ∫_0^∞ s^{q−1} e^{−sy} ds,    q, y > 0,    (2.18)

we have

    E X_t^{−q} = (1/Γ(q)) ∫_0^∞ s^{q−1} E e^{−sX_t} ds
               = (1/Γ(q)) ∫_0^∞ s^{q−1} φ(−s)^{v̄} e^{λ_t(φ(−s)−1)} ds    (2.19)
               = μ_t^{−q} e^{−λ_t} (1/Γ(q)) ∫_0^1 r^{v̄−q−1} (1 − r)^{q−1} e^{λ_t r} dr
               = μ_t^{−q} e^{−λ_t} [Γ(v̄ − q)/Γ(v̄)] 1F1(v̄ − q, v̄; λ_t),    (2.20)
where 1F1 is the confluent hypergeometric function

    1F1(r, s; z) = Σ_{n=0}^{∞} [(r)_n / (s)_n] z^n / n!,    r, s, z ∈ C,  s ≠ 0, −1, . . .
Here we have used the integral representation (9.11.1), p.266 of Lebedev (1972) for the
confluent hypergeometric function. The above calculation is seen to be correct for 0 ≤ q <
v̄, while E X_t^{−q} is infinite for q ≥ v̄ (this is because the integral in (2.19) tends to infinity
as q ↑ v̄). Since E X_t^{−q} is an analytic function of q, and (2.20) is analytic for Re(−q) > −v̄,
we get the following result.
Theorem 2.4. E X_t^p = μ_t^p e^{−λ_t} [Γ(v̄ + p)/Γ(v̄)] 1F1(v̄ + p, v̄; λ_t),    p > −2β/γ².
This formula may be reconciled with the one in Theorem 2.3 by appealing to the
identity (Lebedev, 1972, p.267)

    1F1(r, s; z) = e^z 1F1(s − r, s; −z),    s ≠ 0, −1, . . .
If p = k, a non-negative integer, then E X_t^k may be expressed as a series which terminates:

    E X_t^k = μ_t^k [Γ(v̄ + k)/Γ(v̄)] 1F1(−k, v̄; −λ_t)
            = Σ_{i=0}^{k} [(−1)^i (−k)_i (v̄)_k / ((v̄)_i i!)] μ_t^{k−i} (μ_t λ_t)^i
            = Σ_{i=0}^{k} [k! (v̄)_k x̄^i e^{αit} / ((k − i)! (v̄)_i i!)] ū^{k−i} Σ_{ℓ=0}^{k−i} [(k − i)! / (ℓ! (k − i − ℓ)!)] e^{αℓt} (−1)^{k−i−ℓ}
            = Σ_{i=0}^{k} [k! (v̄)_k x̄^i e^{αit} ū^{k−i} / ((k − i)! (v̄)_i i!)] Σ_{j=i}^{k} [(k − i)! / ((j − i)! (k − j)!)] e^{α(j−i)t} (−1)^{k−j}
            = Σ_{j=0}^{k} e^{αjt} Σ_{i=0}^{j} x̄^i [k! (−1)^{k−j} ū^{k−i} (v̄)_k] / [i! (j − i)! (k − j)! (v̄)_i].
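Theorem 2.4 is easy to check numerically with a truncated Kummer series. The sketch below is not from the paper; the helper names are hypothetical, and the truncation level is ad hoc but ample for moderate λ_t.

```python
import math

def hyp1f1(a, b, z, terms=200):
    """Truncated Kummer series for 1F1(a, b; z); fine for moderate |z|."""
    s = term = 1.0
    for n in range(terms):
        term *= (a + n) * z / ((b + n) * (n + 1))
        s += term
    return s

def moment_X_p(p, t, alpha, beta, gamma, xbar):
    """E X_t^p via Theorem 2.4, for p > -2*beta/gamma**2 (alpha != 0)."""
    mu = gamma**2 * math.expm1(alpha * t) / (2.0 * alpha)
    lam = 2.0 * alpha * xbar / (gamma**2 * (1.0 - math.exp(-alpha * t)))
    vbar = 2.0 * beta / gamma**2
    return mu**p * math.exp(-lam) * math.gamma(vbar + p) / math.gamma(vbar) \
           * hyp1f1(vbar + p, vbar, lam)
```

For integer p the result should match Theorem 2.3, and for fractional p it should respect elementary moment inequalities such as (E X_t^{1/2})² ≤ E X_t.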
3. Moments of the integrated square-root process
In this section, two techniques are given for the calculation of the moments of the
integral of the square-root process {Yt }.
Suppose X is the solution of SDE (1.1), and define

    Y_t = ∫_0^t X_s ds,    M_jk(t) = E Y_t^j X_t^k,

for non-negative integers j, k; M_jk is finite for all t ≥ 0, since Y_t has finite moments of
all orders (the Laplace transform of Y_t exists in a neighbourhood of the origin, see
Section 4). Applying Ito’s formula, we get

    d(Y_t^j X_t^k) = X_t^k d(Y_t^j) + Y_t^j d(X_t^k) = jY_t^{j−1} X_t^{k+1} dt + Y_t^j (a_k X_t^k + b_k X_t^{k−1}) dt + γk Y_t^j X_t^{k−1/2} dW_t.
Since E X_t^p < ∞ for all positive p, it follows that

    E ∫_0^t ( γk Y_s^j X_s^{k−1/2} )² ds < ∞

for all j, k ∈ N, and so

    E ∫_0^t γk Y_s^j X_s^{k−1/2} dW_s = 0.

This implies

    (d/dt) M_jk(t) = a_k M_jk(t) + b_k M_{j,k−1}(t) + j M_{j−1,k+1}(t).    (3.1)
These are linear ordinary differential equations with constant coefficients, and their solutions are straightforward (though tedious to obtain by hand). In order to find Mj0 (t), it is
necessary to calculate Mj−1,1 (t), Mj−2,2 (t), and so on. If we represent the {Mjk } on a grid,
with j horizontal and k vertical, Mj0 is obtained after solving the differential equations on
the diagonals M0,i , . . . , Mi,0 for i ≤ j. The following result is proved by suitably adapting
the arguments used in Section 2.
Theorem 3.1. Suppose α ≠ 0, and that X satisfies (1.1). Then, for j, k ∈ N,

    M_jk(t) = Σ_{m=0}^{j+k} M_jkm(t) e^{αmt},    (3.2)

where M_jkm(t) is a polynomial in t with degree

    deg M_jkm ≤ { j,  if 0 ≤ m ≤ k;  j + k − m,  if k + 1 ≤ m ≤ j + k }.    (3.3)
Proof. We proceed by induction on j and then on k. The theorem is correct for j = 0
and all k ≥ 0 by Theorem 2.3. Suppose (3.2)-(3.3) are correct for j = 0, . . . , J − 1 and all
k ≥ 0, for some J ≥ 1. Based on this assumption, the solution of (3.1) is

    M_Jk(t) = e^{a_k t} M_Jk(0) + e^{a_k t} ∫_0^t e^{−a_k s} R_Jk(s) ds,    (3.4)

where

    R_Jk(t) = b_k M_{J,k−1}(t) + J M_{J−1,k+1}(t)
            = Σ_{m=0}^{J+k−1} [b_k M_{J,k−1,m}(t) + J M_{J−1,k+1,m}(t)] e^{a_m t} + J M_{J−1,k+1,J+k}(t) e^{a_{J+k} t}.
We show that this implies that (3.2)-(3.3) hold for j = J, k = 0. Since a_0 = b_0 = M_J0(0) =
0, (3.4) reduces to

    M_J0(t) = J Σ_{m=0}^{J} ∫_0^t M_{J−1,1,m}(s) e^{a_m s} ds.    (3.5)

Consider the terms corresponding to m = 1, . . . , J in the sum above. Recall that a_m =
αm ≠ 0 and that, for any constant A ≠ 0,

    ∫_0^t e^{As} s^n ds

is the sum of a constant and of a polynomial of degree n times e^{At}. Hence

    ∫_0^t M_{J−1,1,m}(s) e^{a_m s} ds

is the sum of a constant and of a polynomial of the same degree as M_{J−1,1,m} times e^{a_m t}.
Since a_0 = 0, the first term on the right-hand side of (3.5) is seen to be a polynomial
of degree equal to one plus the degree of M_{J−1,1,0}.
The above considerations imply that

    M_J0(t) = Σ_{m=0}^{J} M_J0m(t) e^{αmt},

where the functions M_J0m(t) are polynomials in t. Moreover, the induction assumption
says in particular that

    deg M_{J−1,1,m}(t) ≤ { J − 1,  if 0 ≤ m ≤ 1;  J − m,  if 2 ≤ m ≤ J }

and, consequently, what we have just seen also means that

    deg M_J0m ≤ { J,  if m = 0;  J − m,  if 1 ≤ m ≤ J }.
Keeping J fixed, make another induction assumption: (3.2)-(3.3) hold for j = J and
k = 0, . . . , K − 1, for some K ≥ 1. The proof will be finished once we show that (3.2)-(3.3)
hold for j = J and k = K as well. By the induction assumptions and (3.4),

    M_JK(t) = e^{a_K t} { Σ_{m=0}^{J+K−1} ∫_0^t [b_K M_{J,K−1,m}(s) + J M_{J−1,K+1,m}(s)] e^{(a_m − a_K)s} ds + ∫_0^t J M_{J−1,K+1,J+K}(s) e^{(a_{J+K} − a_K)s} ds }.    (3.6)
By the same reasoning as before, we see that M_JK is of the form

    M_JK(t) = Σ_{m=0}^{J+K} M_JKm(t) e^{a_m t},

where M_JKm(t) is a polynomial. In (3.6), the terms of the sum inside the curly brackets
with m ≠ K reduce to

    constant + polynomial × e^{(a_m − a_K)t}.

The degree of the polynomial is the same as that of

    b_K M_{J,K−1,m}(s) + J M_{J−1,K+1,m}(s),

that is, no larger than

    max[min(J, J + K − 1 − m), min(J − 1, J + K − m)] = min(J, J + K − m).    (3.7)

The same applies to the term following the sum, which corresponds to m = J + K. For
the term m = K inside the curly brackets, the integral reduces to a polynomial of degree
no larger than

    1 + max[deg(M_{J,K−1,K}), deg(M_{J−1,K+1,K})] = 1 + max(J − 1, J − 1) = J.    (3.8)

All these integrals are then multiplied by e^{a_K t}, so that (3.7) becomes the degree bound of
M_JKm for m ≠ K, and (3.8) becomes the degree bound of M_JKK. This ends the proof.
A general simplified formula for the polynomials Mjkm has not been found, but they
can be calculated recursively, as the next theorem shows.
Theorem 3.2. If we let

    M_jkm(t) = Σ_{n=0}^{j∧(j+k−m)} M_jkmn t^n,    R_jkmn = b_k M_{j,k−1,m,n} + j M_{j−1,k+1,m,n},

then

    M_jkmn = − Σ_{i=n}^{j∧(j+k−m)} [(n + 1)_{i−n} / [α(k − m)]^{i−n+1}] R_jkmi,    k ≠ m    (3.9)

    M_jkkn = (1/n) R_{j,k,k,n−1},    n = 1, . . . , j    (3.10)

    M_jkk0 = − Σ_{m=0, m≠k}^{j+k} M_jkm0,    j ≥ 1    (3.11)

    M_0km0 = θ_km    (see Theorem 2.3),  ∀ (k, m)    (3.12)

    M_0kmn = 0,    n ≥ 1.    (3.13)
Proof. By Theorem 3.1, the derivative on the left-hand side of Eq. (3.1) equals

    (d/dt) Σ_{m=0}^{j+k} e^{a_m t} Σ_{n=0}^{j} M_jkmn t^n = Σ_{m=0}^{j+k} e^{a_m t} Σ_{n=0}^{j} [a_m M_jkmn + (n + 1) M_{j,k,m,n+1}] t^n.

Here, in order to simplify the notation, we have added some M_jkmn which are necessarily
0, since n > min(j, j + k − m). The right-hand side of (3.1) may be written as

    Σ_{m=0}^{j+k} e^{a_m t} Σ_{n=0}^{j} [a_k M_jkmn + b_k M_{j,k−1,m,n} + j M_{j−1,k+1,m,n}] t^n.

Hence,

    M_{j,k,m,n+1} = [1/(n + 1)] [(a_k − a_m) M_jkmn + R_jkmn].    (3.14)

If k = m, then this is (3.10). If k ≠ m, it can be verified that (3.9) is the solution of (3.14)
that satisfies the condition

    M_{j,k,m,1+j∧(j+k−m)} = 0

(which must hold by Theorem 3.1). Eq. (3.11) results from M_jk(0) = 0 for any j ≥ 1, while
(3.12)-(3.13) follow from Theorem 2.3.
Eqs.(3.9)-(3.12) are easy to program, and give the explicit formulas for any moment
of Yt much faster than solving the differential equations (3.1). The recursion must proceed through the values (j, k) = (0, 0), (0, 1), (1, 0), (0, 2), (1, 1), (2, 0), and so on, up to the
highest required moment E Y_t^J = M_J0(t). Observe that those formulas only need to be
generated symbolically once. The first three moments of Yt are:
    E Y_t = −x̄/α − β/α² − tβ/α + e^{tα} (x̄/α + β/α²)

    E Y_t² = x̄²/α² + 2x̄β/α³ + β²/α⁴ − x̄γ²/α³ − 5βγ²/(2α⁴)
        + t ( 2x̄β/α² + 2β²/α³ − βγ²/α³ ) + t² β²/α²
        + e^{tα} [ −2x̄²/α² − 4x̄β/α³ − 2β²/α⁴ + 2βγ²/α⁴ + t ( −2x̄β/α² − 2x̄γ²/α² − 2β²/α³ − 2βγ²/α³ ) ]
        + e^{2tα} [ x̄²/α² + 2x̄β/α³ + β²/α⁴ + x̄γ²/α³ + βγ²/(2α⁴) ]

    E Y_t³ = −x̄³/α³ − 3x̄²β/α⁴ − 3x̄β²/α⁵ − β³/α⁶ + 3x̄²γ²/α⁴ + 21x̄βγ²/(2α⁵) + 15β²γ²/(2α⁶) − 3x̄γ⁴/α⁵ − 11βγ⁴/α⁶
        + t ( −3x̄²β/α³ − 6x̄β²/α⁴ − 3β³/α⁵ + 6x̄βγ²/α⁴ + 21β²γ²/(2α⁵) − 3βγ⁴/α⁵ )
        + t² ( −3x̄β²/α³ − 3β³/α⁴ + 3β²γ²/α⁴ ) − t³ β³/α³
        + e^{tα} [ 3x̄³/α³ + 9x̄²β/α⁴ + 9x̄β²/α⁵ + 3β³/α⁶ − 3x̄²γ²/α⁴ − 33x̄βγ²/(2α⁵) − 27β²γ²/(2α⁶) − 3x̄γ⁴/(2α⁵) + 15βγ⁴/(2α⁶)
            + t ( 6x̄²β/α³ + 12x̄β²/α⁴ + 6β³/α⁵ + 6x̄²γ²/α³ + 9x̄βγ²/α⁴ − 3β²γ²/α⁵ − 3x̄γ⁴/α⁴ − 9βγ⁴/α⁵ )
            + t² ( 3x̄β²/α³ + 3β³/α⁴ + 6x̄βγ²/α³ + 6β²γ²/α⁴ + 3x̄γ⁴/α³ + 3βγ⁴/α⁴ ) ]
        + e^{2tα} [ −3x̄³/α³ − 9x̄²β/α⁴ − 9x̄β²/α⁵ − 3β³/α⁶ − 3x̄²γ²/α⁴ + 3x̄βγ²/(2α⁵) + 9β²γ²/(2α⁶) + 3x̄γ⁴/α⁵ + 3βγ⁴/α⁶
            + t ( −3x̄²β/α³ − 6x̄β²/α⁴ − 3β³/α⁵ − 6x̄²γ²/α³ − 15x̄βγ²/α⁴ − 15β²γ²/(2α⁵) − 6x̄γ⁴/α⁴ − 3βγ⁴/α⁵ ) ]
        + e^{3tα} [ x̄³/α³ + 3x̄²β/α⁴ + 3x̄β²/α⁵ + β³/α⁶ + 3x̄²γ²/α⁴ + 9x̄βγ²/(2α⁵) + 3β²γ²/(2α⁶) + 3x̄γ⁴/(2α⁵) + βγ⁴/(2α⁶) ].
4. Laplace transform, inverse moments and density of Yt
In this section and the next one, the symbol O (“big oh”) has the following meaning: we write

    g(s) = O(h(s))    as s → ∞

if |g(s)/h(s)| remains bounded as s → ∞. In words, g(s) is big oh of h(s) as s tends to
infinity if |g(s)| is no larger than some constant times |h(s)| when s is large. We write

    g(s) = o(h(s))    as s → ∞

(“g(s) is little oh of h(s) as s → ∞”) if

    lim_{s→∞} g(s)/h(s) = 0.

Another symbol used below is “∼”, meaning “asymptotically equal to”: we write g(s) ∼ h(s) if

    lim_{s→∞} g(s)/h(s) = 1.

Finally, “i” is now the imaginary unit of the complex numbers (that is, i² = −1).
Since the Laplace transform of the integral of the squared Bessel process may be
derived explicitly (see Revuz & Yor, 1999, p.445), it is not surprising that the Laplace
transform of the square-root process also has an explicit expression. A quick way to find
E exp(−sYt ) is to suitably modify the CIR formula for the price of a zero-coupon bond
(Cox et al., 1985). Suppose the short rate follows a process of type (1.1); the bond price
is known to be the expectation of the exponential of minus the integral of the short rate.
Multiplying the spot rate by a positive number yields another process which also satisfies
(1.1), but with different parameters. Rewriting the bond price formula for this new process
immediately yields the Laplace transform of Yt . The details follow.
Suppose X is the solution of (1.1), let s > 0, and define
    X̂_t = sX_t,    X̂_0 = x̂ = sx̄.
Then

    X̂_t = x̂ + ∫_0^t (αX̂_u + β̂) du + γ̂ ∫_0^t √(X̂_u) dW_u,

where β̂ = βs and γ̂ = γ√s. The formula for the price of a zero-coupon bond maturing
in t years is (Cox et al., 1985)
    E e^{−sY_t} = E e^{−∫_0^t X̂_u du}
        = [ 2√(α² + 2γ̂²) e^{(√(α² + 2γ̂²) − α)t/2} / ( (√(α² + 2γ̂²) − α)(e^{√(α² + 2γ̂²) t} − 1) + 2√(α² + 2γ̂²) ) ]^{2β̂/γ̂²}
          × exp[ −x̂ · 2(e^{√(α² + 2γ̂²) t} − 1) / ( (√(α² + 2γ̂²) − α)(e^{√(α² + 2γ̂²) t} − 1) + 2√(α² + 2γ̂²) ) ]
        = [ e^{−αt/2} / ( cosh(Pt/2) − (α/P) sinh(Pt/2) ) ]^{2β/γ²} exp[ −sx̄ (2/P) sinh(Pt/2) / ( cosh(Pt/2) − (α/P) sinh(Pt/2) ) ],    (4.1)

where P = P(s) = √(α² + 2γ²s).
The Laplace transform of Y_t is finite in a neighborhood of the origin. This is seen by
noting that

    cosh(Pt/2),    (α/P) sinh(Pt/2)

are both analytic functions of P, and that their difference does not vanish at P(0) = |α|,
for any values of the parameters α, β, γ considered, and for any t > 0. Consequently, (1)
all the moments of Y_t are finite, and (2) they are given by the familiar formula

    E Y_t^k = (−1)^k (d^k/ds^k) E e^{−sY_t} |_{s=0},    k ∈ N.
The Laplace transform gives a third method of calculating the moments of Yt . The author
has compared the computation times for the moments of Yt up to order 6 using Mathematica; the recursive method of Section 3 does a lot better than the others, especially for
higher moments. As an example, on what is now a rather slow machine (Macintosh 3400
180 MHz), Mathematica took 377 seconds to find the general symbolic formula for the
6th moment by differentiating the Laplace transform, and a bit more to find the general
formulas for moments of order 1 to 6 by solving differential equations (3.1). The recursive
method produced all 6 moments in a little more than 8 seconds. A program recursively
calculating just the numerical values of the moments (rather than the general symbolic
formulas) would of course execute much faster, especially in C.
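For numerical work, (4.1) itself is simple to evaluate; the sketch below (hypothetical code, arbitrary parameter values) also recovers E Y_t by a central finite difference at s = 0.

```python
import math

def laplace_Y(s, t, alpha, beta, gamma, xbar):
    """E exp(-s*Y_t) via (4.1); needs alpha**2 + 2*gamma**2*s > 0 so that P is real."""
    P = math.sqrt(alpha**2 + 2.0 * gamma**2 * s)
    ch, sh = math.cosh(P * t / 2.0), math.sinh(P * t / 2.0)
    denom = ch - (alpha / P) * sh
    return (math.exp(-alpha * t / 2.0) / denom)**(2.0 * beta / gamma**2) \
           * math.exp(-s * xbar * (2.0 / P) * sh / denom)
```

The transform equals 1 at s = 0, decreases in s, and its derivative at the origin reproduces the closed-form E Y_t of Section 3.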
The next theorem concerns the moments and MGF of 1/Yt .
Theorem 4.1. (a) E Y_t^r is finite for all r ∈ R, and

    E Y_t^{−q} = (1/Γ(q)) ∫_0^∞ s^{q−1} E exp(−sY_t) ds,    q > 0.

(b) For all p ≥ 0,

    E exp(p/Y_t) = 1 + √p ∫_0^∞ s^{−1/2} I_1(2√(ps)) E e^{−sY_t} ds.

This is finite if, and only if, 0 ≤ p < (βt + x̄)²/(2γ²).

(c) E exp( p ∫_0^t ds/X_s ) = ∞ for all p ≥ (1/(2γ²)) (β + x̄/t)².
Proof. By (2.14),

    P = γ√(2s) (1 + α²/(2γ²s))^{1/2} = γ√(2s) + O(s^{−1/2}).

From this, we can find the asymptotic behaviour of the first factor in (4.1):

    [ e^{−αt/2} / ( cosh(Pt/2) − (α/P) sinh(Pt/2) ) ]^{2β/γ²}
        = 2^{2β/γ²} e^{−αβt/γ²} [ (1 − α/P) e^{Pt/2} + (1 + α/P) e^{−Pt/2} ]^{−2β/γ²}
        = 2^{2β/γ²} e^{−αβt/γ²} e^{−βPt/γ²} [ (1 − α/P) + (1 + α/P) e^{−Pt} ]^{−2β/γ²}
        ∼ 2^{2β/γ²} e^{−αβt/γ²} e^{−βt√(2s)/γ}

as s tends to infinity. Now turn to the second factor:

    −sx̄ (2/P) sinh(Pt/2) / ( cosh(Pt/2) − (α/P) sinh(Pt/2) ) = −2sx̄ / ( P (1 + e^{−Pt})/(1 − e^{−Pt}) − α )
        = − [√(2s) x̄/γ] [1 + O(s^{−1})] / [ 1 + O(e^{−Pt}) − α/(γ√(2s)) + O(s^{−3/2}) ]
        = − [√(2s) x̄/γ] [1 + α/(γ√(2s)) + O(s^{−1})]
        = − √(2s) x̄/γ − αx̄/γ² + O(s^{−1/2}).

From these expressions we get

    E e^{−sY_t} ∼ 2^{2β/γ²} exp[ −(βt + x̄) ( √(2s)/γ + α/γ² ) ]    as s → ∞.    (4.2)
By this asymptotic identity and (2.18), E Y_t^{-q} is finite if, and only if, the integral

    \frac{1}{Γ(q)} \int_0^∞ s^{q-1}\, 2^{2β/γ^2} e^{-\frac{α}{γ^2}(βt + x̄)} \exp\Big(-\frac{\sqrt{2s}}{γ}(βt + x̄)\Big)\, ds

is finite, which is true for all q > 0. This proves part (a) of the theorem.
Now turn to part (b). For p ≥ 0,

    E \exp\Big(\frac{p}{Y_t}\Big)
      = \sum_{n=0}^∞ \frac{p^n}{n!}\, E Y_t^{-n}
      = 1 + \sum_{n=1}^∞ \frac{p^n}{Γ(n)Γ(n+1)} \int_0^∞ s^{n-1}\, E \exp(-sY_t)\, ds
      = 1 + \int_0^∞ \sqrt{\frac{p}{s}} \sum_{k=0}^∞ \frac{(\sqrt{ps})^{1+2k}}{Γ(k+1)Γ(k+2)}\, E \exp(-sY_t)\, ds
      = 1 + \int_0^∞ \sqrt{\frac{p}{s}}\, I_1(2\sqrt{ps})\, E \exp(-sY_t)\, ds

(see the definition of the modified Bessel function I_ν in (2.16)).
It remains to be seen for what p > 0 the above integral is finite. The integrand is continuous in s, and bounded near the origin. It is known that

    I_ν(x) ∼ \frac{e^x}{\sqrt{2πx}},    x → ∞    (4.3)

(Lebedev, 1972, p.123). Combining this with (4.2), we find that

    \sqrt{\frac{p}{s}}\, I_1(2\sqrt{ps})\, E \exp(-sY_t) ∼ C s^{-3/4} \exp\Big[\sqrt{2s}\Big(\sqrt{2p} - \frac{βt + x̄}{γ}\Big)\Big]

as s → ∞ (here C > 0 depends on p but not on s). The function on the right is integrable over [1, ∞) if, and only if,

    p < \frac{1}{2γ^2}(βt + x̄)^2.
To prove (c), note that by the Cauchy-Schwarz inequality

    t^2 = \Big(\int_0^t dτ\Big)^2 ≤ \int_0^t \frac{dτ}{X_τ} \int_0^t X_τ\, dτ,

and thus

    p \int_0^t \frac{dτ}{X_τ} ≥ \frac{pt^2}{\int_0^t X_τ\, dτ}.

The result follows from (b).
By contrast, it is easy to see that the density of X_t behaves like x^{v̄-1} as x ↓ 0, with v̄ = 2β/γ^2 (see (2.17) and Theorem 2.4), and thus E X_t^p < ∞ only for p > -v̄, whatever the initial condition x̄ ≥ 0; consequently, E exp(q/X_t) = ∞ for any q > 0. Thus the density of Y_t tends to 0 much faster than that of X_t as the argument tends to 0. This is a little unexpected, especially since Y_0 = 0 with probability one, irrespective of X_0 = x̄. An intuitive explanation is that Y_t is close to 0 only if X_s is close to 0 for "many" s between 0 and t, which is less likely than X_s being close to 0 for one particular s; integration is therefore a "smoothing" operation in this case. This relates to Theorem 4.2 below, which says that the density of Y_t is "infinitely flat" at the origin (that is, all its derivatives vanish there).
Part (c) of the theorem is related to absence of arbitrage and changes of measure in the financial model

    dS = µS\, dt + \sqrt{X} S\, dV,

where X satisfies (1.1) and (V, W) is two-dimensional Brownian motion (possibly correlated). If one attempts the usual Girsanov change of measure to make {e^{-rt} S_t} a martingale, then one has the problem of checking that the stochastic exponential of minus the integral of the market price of risk

    \frac{µ - r}{\sqrt{X}}

has expectation equal to one (which would make it a martingale; see Karatzas & Shreve (1998), p.12). A sufficient condition for this to hold is Novikov's condition

    E \exp\Big(\frac{1}{2} \int_0^t \frac{(µ - r)^2}{X_s}\, ds\Big) < ∞.
Theorem 4.1 shows that Novikov's condition fails if

    (µ - r)^2 ≥ \frac{1}{γ^2}\Big(β + \frac{x̄}{t}\Big)^2

(but does not say that the condition holds otherwise).
Observe that, by Theorem 2.4, E(1/X_s) = ∞ for all s > 0 if 2β/γ^2 ≤ 1, which implies

    E \int_0^t \frac{ds}{X_s} = ∞,

and thus E \exp\big(p \int_0^t ds/X_s\big) = ∞ for all p > 0. More generally, since E X_t^{-q} = ∞ for all q ≥ 2β/γ^2, it is plausible that not all moments of \int_0^t ds/X_s are finite (though the author has not been able to prove this), which would then imply E \exp\big(p \int_0^t ds/X_s\big) = ∞ for all p > 0, for any β, γ and t > 0.
Theorem 4.2. Suppose t > 0 and let g_t(·) be the density of Y_t with respect to Lebesgue measure on R. Then g_t(x) exists and is a continuous, uniformly bounded function of x ∈ R. The same applies to (d^n/dx^n) g_t(x), for any n ≥ 1.

Proof. Apply the following well-known property of characteristic functions (see Theorem 3 and Lemma 2 in Feller (1971), pp.509 and 512): suppose a r.v. U satisfies

    \int_{-∞}^∞ |E e^{iζU}\, ζ^n|\, dζ < ∞

for some n ≥ 1; then the distribution of U has a density which is everywhere continuous and uniformly bounded, and the nth derivative of this density is also everywhere continuous and uniformly bounded.
The characteristic function of Y_t is

    ψ(ζ) = E e^{iζY_t} = E e^{-sY_t}\big|_{s=-iζ}
         = \frac{e^{-αt/2}}{\big[\cosh(Zt/2) - \frac{α}{Z}\sinh(Zt/2)\big]^{2β/γ^2}}\,
           \exp\Big(\frac{2iζx̄\sinh(Zt/2)}{Z\cosh(Zt/2) - α\sinh(Zt/2)}\Big),

where Z = Z(ζ) = \sqrt{α^2 - 2γ^2 ζ i}. First, consider ψ(ζ) as ζ → ∞. We have (see (2.14))

    Z(ζ) - γ(1-i)\sqrt{ζ} = γ(1-i)\sqrt{ζ}\,\Big[\Big(1 - \frac{α^2}{2γ^2 ζ i}\Big)^{1/2} - 1\Big]
                          = γ(1-i)\sqrt{ζ} \sum_{k=1}^∞ \binom{1/2}{k}\Big(-\frac{α^2}{2γ^2 ζ i}\Big)^k → 0

as ζ → ∞. This implies

    e^{Zt/2} ∼ e^{γ(1-i)\sqrt{ζ}\,t/2},    \cosh(Zt/2) - \frac{α}{Z}\sinh(Zt/2) ∼ \frac{1}{2}\, e^{γ(1-i)\sqrt{ζ}\,t/2}.
Moreover,

    \frac{2iζx̄\sinh(Zt/2)}{Z\cosh(Zt/2) - α\sinh(Zt/2)}
      = \frac{2iζx̄}{Z\coth(Zt/2) - α}
      = \frac{(-1+i)x̄\sqrt{ζ}}{γ}\Big(1 - \frac{α^2}{2γ^2 ζ i}\Big)^{-1/2}\Big[\frac{1 + e^{-Zt}}{1 - e^{-Zt}} - \frac{α}{γ(1-i)\sqrt{ζ}}\Big(1 - \frac{α^2}{2γ^2 ζ i}\Big)^{-1/2}\Big]^{-1}
      = \frac{(-1+i)x̄\sqrt{ζ}}{γ}\big(1 + O(ζ^{-1})\big)\Big[1 + O(e^{-Zt}) - \frac{α}{γ(1-i)\sqrt{ζ}}\big(1 + O(ζ^{-1})\big)\Big]^{-1}
      = \frac{(-1+i)x̄\sqrt{ζ}}{γ}\big(1 + O(ζ^{-1})\big)\Big(1 + \frac{α}{γ(1-i)\sqrt{ζ}} + O(ζ^{-1})\Big)
      = \frac{(-1+i)x̄\sqrt{ζ}}{γ} - \frac{αx̄}{γ^2} + O(ζ^{-1/2}).
Putting all this together, we find

    |ψ(ζ)| ∼ C_1 e^{-C_2\sqrt{ζ}}

as ζ → ∞, where C_1, C_2 > 0 do not depend on ζ. Hence

    \int_0^∞ |ψ(ζ)ζ^n|\, dζ < ∞

for any n ≥ 0. The integral from -∞ to 0 is handled similarly: as ζ → -∞,

    Z(ζ) - γ(1+i)\sqrt{-ζ} → 0,    e^{Zt/2} ∼ e^{γ(1+i)\sqrt{-ζ}\,t/2},
    \cosh(Zt/2) - \frac{α}{Z}\sinh(Zt/2) ∼ \frac{1}{2}\, e^{γ(1+i)\sqrt{-ζ}\,t/2},
    \frac{2iζx̄\sinh(Zt/2)}{Z\cosh(Zt/2) - α\sinh(Zt/2)} = -\frac{(1+i)x̄\sqrt{-ζ}}{γ} - \frac{αx̄}{γ^2} + O((-ζ)^{-1/2}),
    |ψ(ζ)| ∼ C_3 e^{-C_4\sqrt{-ζ}}.
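A small numerical experiment (not from the paper; all parameter values arbitrary) illustrates the e^{-C_2√ζ} decay of |ψ|. Reading off the asymptotics above suggests C_2 = (βt + x̄)/γ, and the drop in log|ψ(ζ)| between two large arguments matches C_2·Δ√ζ closely:

```python
import cmath, math

def psi(zeta, alpha, beta, gamma, x0, t):
    # characteristic function of Y_t: set s = -i zeta in E exp(-s Y_t)
    Z = cmath.sqrt(alpha**2 - 2.0 * gamma**2 * zeta * 1j)
    ch, sh = cmath.cosh(Z * t / 2.0), cmath.sinh(Z * t / 2.0)
    f1 = (cmath.exp(-alpha * t / 2.0) / (ch - (alpha / Z) * sh)) ** (2.0 * beta / gamma**2)
    f2 = cmath.exp(2.0j * zeta * x0 * sh / (Z * ch - alpha * sh))
    return f1 * f2

alpha, beta, gamma, x0, t = -2.0, 0.08, 0.3, 0.04, 1.0
C2 = (beta * t + x0) / gamma        # decay rate suggested by the asymptotics
z1, z2 = 1.0e4, 4.0e4
drop = (math.log(abs(psi(z1, alpha, beta, gamma, x0, t)))
        - math.log(abs(psi(z2, alpha, beta, gamma, x0, t))))
print(drop, C2 * (math.sqrt(z2) - math.sqrt(z1)))
```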
5. Relationships between the Laplace transforms of U and 1/U
The formula in Theorem 4.1(b) applies for any random variable U > 0. The same proof leads to the identity

    E e^{p/U} = 1 + \int_0^∞ \sqrt{\frac{p}{s}}\, I_1(2\sqrt{ps})\, E \exp(-sU)\, ds,    p ≥ 0    (5.1)

(both sides are simultaneously finite or infinite, by Fubini's Theorem). Formula (5.1) strangely resembles a known relationship for Laplace transforms (Oberhettinger & Badii, 1973, p.4):

    \int_0^∞ e^{-pt} t^{ν-1} f(t^{-1})\, dt = \int_0^∞ \Big(\frac{s}{p}\Big)^{ν/2} J_ν(2\sqrt{ps}) \Big[\int_0^∞ e^{-st} f(t)\, dt\Big]\, ds,    ν > -1.    (5.2)
Here, J_ν is the ordinary Bessel function of order ν (Lebedev, 1972, p.102),

    J_ν(z) = \sum_{k=0}^∞ \frac{(-1)^k (z/2)^{ν+2k}}{Γ(k+1)Γ(k+ν+1)},    |\arg z| < π,  ν ∈ C.

Let us rephrase (5.2) in probabilistic terms, by considering that f(·) is the density of a random variable U > 0:

    E U^{-ν-1} e^{-p/U} = \int_0^∞ \Big(\frac{s}{p}\Big)^{ν/2} J_ν(2\sqrt{ps})\, E e^{-sU}\, ds,    ν > -1.    (5.3)

(Oberhettinger & Badii do not specify for which p this holds, but this is clarified in the theorem below.) Observe that ν = -1 is not allowed here, so that (5.3) does not apply to E e^{-p/U}. We now derive some more general formulas, which have (5.1) and (5.3) as particular cases.
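Identity (5.1) can be verified numerically for a toy U bounded away from the origin (so that both sides are finite); here U = 1 + V with V ~ Gamma(2, 1), so E e^{-sU} = e^{-s}(1 + s)^{-2}. This choice is illustrative, not from the paper:

```python
from math import exp
from scipy.integrate import quad
from scipy.special import iv

p = 0.5
Ee_sU = lambda s: exp(-s) * (1.0 + s)**(-2.0)   # E e^{-sU}, U = 1 + Gamma(2,1)
# left side: E e^{p/U}, integrating against the Gamma(2,1) density v e^{-v}
lhs, _ = quad(lambda v: exp(p / (1.0 + v)) * v * exp(-v), 0.0, float('inf'))
# right side of (5.1); the tail beyond s = 200 is numerically negligible
rhs, _ = quad(lambda s: (p / s)**0.5 * iv(1, 2.0 * (p * s)**0.5) * Ee_sU(s),
              0.0, 200.0)
rhs += 1.0
print(lhs, rhs)
```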
Theorem 5.1. Suppose p > 0, r ∈ R and U > 0 a.s.

(a) If ν > -1, then

    E U^{r-ν-1} e^{p/U} = \int_0^∞ \Big(\frac{s}{p}\Big)^{ν/2} I_ν(2\sqrt{ps})\, E U^r e^{-sU}\, ds.

Both sides may be infinite.

(b) If ν ≤ -1, then

    E U^{r-ν-1} e^{p/U} = \sum_{0≤n≤-ν-1} \frac{p^n}{n!}\, E U^{r-ν-n-1}
      + \int_0^∞ \Big[\Big(\frac{s}{p}\Big)^{ν/2} I_ν(2\sqrt{ps}) - \sum_{0≤n≤-ν-1} \frac{p^n s^{n+ν}}{n!\,Γ(n+ν+1)}\Big]\, E U^r e^{-sU}\, ds.

Both sides may be infinite. If, moreover, -ν-1 ∈ N, then

    E U^{r-ν-1} e^{p/U} = \sum_{0≤n≤-ν-1} \frac{p^n}{n!}\, E U^{r-ν-n-1} + \int_0^∞ \Big(\frac{s}{p}\Big)^{ν/2} I_{-ν}(2\sqrt{ps})\, E U^r e^{-sU}\, ds.

(c) If ν > -1, E U^r < ∞ and E U^{r-ν-1} < ∞, then

    E U^{r-ν-1} e^{-p/U} = \int_0^∞ \Big(\frac{s}{p}\Big)^{ν/2} J_ν(2\sqrt{ps})\, E U^r e^{-sU}\, ds.

Both sides are finite, and the integral on the right may be improper (that is, it may not converge absolutely) if ν < -1/2.

(d) If ν ≤ -1, E U^r < ∞ and E U^{r-ν-1} < ∞, then

    E U^{r-ν-1} e^{-p/U} = \sum_{0≤n≤-ν-1} \frac{(-p)^n}{n!}\, E U^{r-ν-n-1}
      + \int_0^∞ \Big[\Big(\frac{s}{p}\Big)^{ν/2} J_ν(2\sqrt{ps}) - \sum_{0≤n≤-ν-1} \frac{(-p)^n s^{n+ν}}{n!\,Γ(n+ν+1)}\Big]\, E U^r e^{-sU}\, ds.

Both sides are finite, and the integral on the right may be improper. If, moreover, -ν-1 ∈ N, then

    E U^{r-ν-1} e^{-p/U} = \sum_{0≤n≤-ν-1} \frac{(-p)^n}{n!}\, E U^{r-ν-n-1} + (-1)^ν \int_0^∞ \Big(\frac{s}{p}\Big)^{ν/2} J_{-ν}(2\sqrt{ps})\, E U^r e^{-sU}\, ds.
Proof. (a) Because all the terms in the following series are positive, the equalities hold whether the resulting expressions are finite or infinite:

    E U^{r-ν-1} e^{p/U}
      = \sum_{n=0}^∞ \frac{p^n}{n!}\, E U^{r-ν-n-1}
      = \sum_{n=0}^∞ \frac{p^n}{n!} \int_0^∞ \frac{s^{n+ν}}{Γ(ν+n+1)}\, E U^r e^{-sU}\, ds
      = \int_0^∞ \sum_{n=0}^∞ \frac{p^n s^{n+ν}}{n!\,Γ(ν+n+1)}\, E U^r e^{-sU}\, ds
      = \int_0^∞ \Big(\frac{s}{p}\Big)^{ν/2} I_ν(2\sqrt{ps})\, E U^r e^{-sU}\, ds.
(b) The first formula results from

    E U^{r-ν-1} e^{p/U} = \sum_{0≤n≤-ν-1} \frac{p^n}{n!}\, E U^{r-ν-n-1} + \int_0^∞ \sum_{n>-ν-1} \frac{p^n s^{n+ν}}{n!\,Γ(ν+n+1)}\, E U^r e^{-sU}\, ds.

The second one then follows from (Lebedev, 1972, p.110)

    \frac{1}{Γ(-m)} = 0,    I_{-m}(z) = I_m(z),    m ∈ N.
(c) Let U_0 = max(U, B) for B ≥ 0 (so that U_0 = U when B = 0). For B > 0, 1/U_0 is bounded below by 0, and above by 1/B, whence

    E U^r U_0^{-ν-1} e^{p/U_0} < ∞.

Therefore the series below converge absolutely, and so it is permitted to reverse the order of summation, integration and expectation at will:

    E U^r U_0^{-ν-1} e^{-p/U_0}
      = \sum_{n=0}^∞ \frac{(-p)^n}{n!} \int_0^∞ \frac{s^{n+ν}}{Γ(ν+n+1)}\, E U^r e^{-sU_0}\, ds
      = \int_0^∞ \Big(\frac{s}{p}\Big)^{ν/2} J_ν(2\sqrt{ps})\, E U^r e^{-sU_0}\, ds.    (5.4)

Now -ν-1 < 0, so

    0 ≤ U^r U_0^{-ν-1} e^{-p/U_0} ≤ C U^r,

where C = \sup_{x>0} x^{ν+1} e^{-px} < ∞, and U^r is integrable by assumption. Thus

    E U^r U_0^{-ν-1} e^{-p/U_0} → E U^{r-ν-1} e^{-p/U}

as B → 0+. Next, turn to the right hand side of (5.4).
Case 1: ν ≥ -1/2. The asymptotic expression (Lebedev, 1972, p.122)

    J_ν(x) = \Big(\frac{2}{πx}\Big)^{1/2} \cos\big(x - (2ν+1)\frac{π}{4}\big) + O(x^{-3/2})    as x → ∞

implies that there is C > 0 such that

    |J_ν(x)| ≤ C x^{-1/2},    x > 0.

Therefore

    \Big(\frac{s}{p}\Big)^{ν/2} \big|J_ν(2\sqrt{ps})\big|\, E U^r e^{-sU_0} ≤ C_1 s^{ν/2 - 1/4}\, E U^r e^{-sU}.

Now ν ≥ -1/2 implies ν/2 + 3/4 > 0, and so

    \int_0^∞ s^{ν/2 - 1/4}\, E U^r e^{-sU}\, ds = Γ\Big(\frac{ν}{2} + \frac{3}{4}\Big)\, E U^{r - ν/2 - 3/4}.

This is finite, because r - ν - 1 ≤ r - ν/2 - 3/4 ≤ r, and it is assumed that both E U^r and E U^{r-ν-1} are finite. Finally, the right hand side of (5.4) converges to

    \int_0^∞ \Big(\frac{s}{p}\Big)^{ν/2} J_ν(2\sqrt{ps})\, E U^r e^{-sU}\, ds

as B → 0+, by dominated convergence.
Case 2: ν ∈ (-1, -1/2). The above arguments break down if ν < -1/2, because then

    r - \frac{ν}{2} - \frac{3}{4} < r - ν - 1.

The problem can be solved either by assuming that E U^{r - ν/2 - 3/4} < ∞, or else as follows. Write the integral on the right hand side of (5.4) as \int_0^1 + \int_1^∞. It is immediately seen that

    \int_0^1 \Big(\frac{s}{p}\Big)^{ν/2} J_ν(2\sqrt{ps})\, E U^r e^{-sU_0}\, ds → \int_0^1 \Big(\frac{s}{p}\Big)^{ν/2} J_ν(2\sqrt{ps})\, E U^r e^{-sU}\, ds

as B → 0+, by dominated convergence. For the second integral, consider separately each term on the right of

    J_ν(2\sqrt{ps}) = C_2 s^{-1/4} \cos\big(2\sqrt{ps} - (2ν+1)\frac{π}{4}\big) + R(s),

where C_2 > 0 and R(s) = O(s^{-3/4}) as s → ∞. Since \frac{ν}{2} - \frac{3}{4} < -1, the integral

    \int_1^∞ \Big(\frac{s}{p}\Big)^{ν/2} R(s)\, E U^r e^{-sU_0}\, ds

converges absolutely for B ≥ 0, and so convergence as B → 0+ to

    \int_1^∞ \Big(\frac{s}{p}\Big)^{ν/2} R(s)\, E U^r e^{-sU}\, ds

follows at once. The only remaining problem is to show that

    \int_1^∞ s^{ν/2 - 1/4} \cos\big(2\sqrt{ps} - (2ν+1)\frac{π}{4}\big)\, E U^r e^{-sU_0}\, ds

converges to the same expression with B = 0, as B → 0+. Perform the change of variable u = 2\sqrt{ps}, s = u^2/(4p), to obtain

    \int_1^∞ s^{ν/2 - 1/4} \cos\big(2\sqrt{ps} - (2ν+1)\frac{π}{4}\big)\, E U^r e^{-sU_0}\, ds
      = C_3 \int_{2\sqrt{p}}^∞ u^{ν+1/2} \cos\big(u - (2ν+1)\frac{π}{4}\big)\, E U^r e^{-u^2 U_0/(4p)}\, du
      = C_3 \Big\{ \Big[u^{ν+1/2} \sin\big(u - (2ν+1)\frac{π}{4}\big)\, E U^r e^{-u^2 U_0/(4p)}\Big]_{u=2\sqrt{p}}^{u→∞}
        - \int_{2\sqrt{p}}^∞ \sin\big(u - (2ν+1)\frac{π}{4}\big)\, d\big[u^{ν+1/2}\, E U^r e^{-u^2 U_0/(4p)}\big] \Big\}.    (5.5)
Now u^{ν+1/2} tends to 0 as u tends to ∞, so the boundary term inside the curly brackets reduces to

    -u^{ν+1/2} \sin\big(u - (2ν+1)\frac{π}{4}\big)\, E U^r e^{-u^2 U_0/(4p)}\Big|_{u=2\sqrt{p}},

which poses no problem. The integral in (5.5) converges absolutely, since

    \lim_{u→∞} u^{ν+1/2}\, E U^r e^{-u^2 U_0/(4p)} = 0.

To prove that it converges to

    \int_{2\sqrt{p}}^∞ \sin\big(u - (2ν+1)\frac{π}{4}\big)\, d\big[u^{ν+1/2}\, E U^r e^{-u^2 U/(4p)}\big],

apply the Helly-Bray Theorem (Loève, 1977, p.184; Kolmogorov & Fomine, 1975, p.370). The functions

    F_B(u) = u^{ν+1/2}\, E U^r e^{-u^2 U_0/(4p)},    B ≥ 0,

are of bounded variation on R_+, and {F_B; B > 0} converge completely (Loève, 1977, p.180) to F_0 as B → 0+. Since g(u) = \sin(u - (2ν+1)\frac{π}{4}) is bounded, we conclude that

    \int_{2\sqrt{p}}^∞ g\, dF_B → \int_{2\sqrt{p}}^∞ g\, dF_0    as B → 0+.
(d) Proceed as in (b) and (c) to get

    E U^r U_0^{-ν-1} e^{-p/U_0}
      = \sum_{0≤n≤-ν-1} \frac{(-p)^n}{n!}\, E U^r U_0^{-ν-n-1}
        + \int_0^∞ \Big[\Big(\frac{s}{p}\Big)^{ν/2} J_ν(2\sqrt{ps}) - \sum_{0≤n≤-ν-1} \frac{(-p)^n s^{n+ν}}{n!\,Γ(ν+n+1)}\Big]\, E U^r e^{-sU_0}\, ds

for any B > 0. The first sum poses no problem. Split the integral into \int_0^1 + \int_1^∞. The former tends to the right limit by dominated convergence. To prove convergence of \int_1^∞, consider each term of the integrand separately. The first one converges to

    \int_1^∞ \Big(\frac{s}{p}\Big)^{ν/2} J_ν(2\sqrt{ps})\, E U^r e^{-sU}\, ds

by the arguments given in Case 2 of (c). Each of the other terms gives rise to an absolutely convergent integral, since n + ν < -1, and dominated convergence does the rest.

Finally, in the special case -ν-1 ∈ N, the sum inside the integral vanishes (as in (b)), and one applies (Lebedev, 1972, p.103)

    J_{-n}(z) = (-1)^n J_n(z),    n = 1, 2, . . .
The integrated square-root process
Theorem 4.1(b) results from letting ν = −1 in Theorem 5.1(b). This theorem has
the following consequences, which relate the existence of E U −ν−1 ep/U , which is itself
determined by the behaviour of the distribution of U near the origin, to the behaviour of
E e−sU near infinity.
Corollary 5.2. Suppose p > 0, ν ∈ R and U > 0 a.s.
(a) If E U −ν−1 ep/U < ∞, then
lim inf s 2 − 4 e2
ν
1
√
ps
s→∞
E e−sU = 0.
√
(b) If ν > −1 and if, for some B > 0, E e−sU = O(s− 4 − 2 −0 e−2 ps ) as s → ∞,
then E U −ν−1 ep/U < ∞. The same holds for ν ≤ −1, with the additional assumption
E U −ν−1 < ∞.
3
ν
Proof. (a) First suppose ν > -1, and let r = 0 in Theorem 5.1(a). If the integral converges, then necessarily the integrand must have a limit inferior equal to 0. By (4.3),

    \liminf_{s→∞} s^{ν/2}\, I_ν(2\sqrt{ps})\, E e^{-sU} = C \liminf_{s→∞} s^{ν/2 - 1/4}\, e^{2\sqrt{ps}}\, E e^{-sU},

where C > 0 is a constant. For ν ≤ -1, let r = 0 in Theorem 5.1(b). If the left hand side of the first formula is finite, then the integral on the other side must be finite. The situation is seen to be the same as when ν > -1, once it is observed that

    \int_1^∞ s^{n+ν}\, E e^{-sU}\, ds < ∞

for any n < -ν - 1.
(b) Suppose ν > -1. The assumptions imply that there is a constant C such that

    E U^{-ν-1} e^{p/U}
      = \Big(\int_0^1 + \int_1^∞\Big) \Big(\frac{s}{p}\Big)^{ν/2} I_ν(2\sqrt{ps})\, E e^{-sU}\, ds
      ≤ \int_0^1 \Big(\frac{s}{p}\Big)^{ν/2} I_ν(2\sqrt{ps})\, E e^{-sU}\, ds + C \int_1^∞ s^{ν/2}\, I_ν(2\sqrt{ps})\, s^{-3/4 - ν/2 - δ}\, e^{-2\sqrt{ps}}\, ds
      = \int_0^1 \Big(\frac{s}{p}\Big)^{ν/2} I_ν(2\sqrt{ps})\, E e^{-sU}\, ds + C \int_1^∞ s^{1/4}\, e^{-2\sqrt{ps}}\, I_ν(2\sqrt{ps})\, s^{-1-δ}\, ds.

The result follows from (4.3). If ν ≤ -1 and, moreover, E U^{-ν-1} < ∞, then

    E U^{-ν-n-1} < ∞    ∀ 0 ≤ n ≤ -ν-1,
    \int_1^∞ s^{n+ν}\, E e^{-sU}\, ds < ∞    ∀ 0 ≤ n < -ν-1,

and

    \int_1^∞ s^{ν/2}\, I_ν(2\sqrt{ps})\, E e^{-sU}\, ds < ∞.

This implies E U^{-ν-1} e^{p/U} < ∞, by Theorem 5.1(b).
Example. Macdonald's function is defined in terms of the modified Bessel function as follows (Lebedev, 1972):

    K_ν(z) = \frac{π}{2}\, \frac{I_{-ν}(z) - I_ν(z)}{\sin(νπ)},    |\arg z| < π,  ν ∉ Z,
    K_n(z) = \lim_{ν→n} K_ν(z),    n ∈ Z.

It also has the integral expression

    K_ν(z) = \frac{1}{2}\Big(\frac{z}{2}\Big)^ν \int_0^∞ e^{-t - z^2/(4t)}\, t^{-ν-1}\, dt,    |\arg(z)| < \frac{π}{4}.

Clearly K_{-ν}(z) = K_ν(z). The generalized inverse Gaussian distribution with parameters α > 0, β > 0, γ ∈ R has density

    f(x) = \frac{(α/β)^{γ/2}}{2K_γ(\sqrt{αβ})}\, x^{γ-1} \exp\Big[-\frac{1}{2}\Big(αx + \frac{β}{x}\Big)\Big]\, 1_{(0,∞)}(x).

The notation for this distribution is GIG(α, β, γ). It is immediate that U ∼ GIG(α, β, γ) implies (1/U) ∼ GIG(β, α, -γ). Moreover, a simple calculation shows that

    E U^r e^{-sU} = \frac{α^{γ/2} β^{r/2}}{(α + 2s)^{(γ+r)/2}}\, \frac{K_{γ+r}(\sqrt{(α+2s)β})}{K_γ(\sqrt{αβ})},    Re(s) > -\frac{α}{2}.
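The closed form for E U^r e^{-sU} is easy to verify by quadrature against the GIG density; the sketch below is illustrative only, with arbitrary parameter values:

```python
from math import exp, sqrt
from scipy.integrate import quad
from scipy.special import kv

a, b, g = 2.0, 3.0, 0.7              # GIG(alpha, beta, gamma) parameters
r, s = 0.5, 1.0
norm = (a / b)**(g / 2.0) / (2.0 * kv(g, sqrt(a * b)))   # normalizing constant of f
# E U^r e^{-sU} by direct integration of x^r e^{-sx} against the density
lhs, _ = quad(lambda x: norm * x**(g + r - 1.0) * exp(-0.5 * ((a + 2*s) * x + b / x)),
              0.0, float('inf'))
# the closed form in terms of Macdonald's function
rhs = (a**(g / 2.0) * b**(r / 2.0) / (a + 2*s)**((g + r) / 2.0)
       * kv(g + r, sqrt((a + 2*s) * b)) / kv(g, sqrt(a * b)))
print(lhs, rhs)
```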
Theorem 5.1(c) then says that

    \int_0^∞ \Big(\frac{s}{p}\Big)^{ν/2} J_ν(2\sqrt{ps})\, \frac{α^{γ/2} β^{r/2}}{(α+2s)^{(γ+r)/2}}\, \frac{K_{γ+r}(\sqrt{(α+2s)β})}{K_γ(\sqrt{αβ})}\, ds
      = β^{-γ/2}\, α^{(1+ν-r)/2}\, (β+2p)^{-(1+ν-r-γ)/2}\, \frac{K_{1+ν-r-γ}(\sqrt{(β+2p)α})}{K_γ(\sqrt{αβ})}

or, equivalently,

    \int_0^∞ \Big(\frac{s}{p}\Big)^{ν/2} J_ν(2\sqrt{ps})\, \frac{K_{γ+r}(\sqrt{(α+2s)β})}{(α+2s)^{(γ+r)/2}}\, ds
      = β^{-(γ+r)/2}\, α^{(1+ν-r-γ)/2}\, (β+2p)^{-(1+ν-r-γ)/2}\, K_{1+ν-r-γ}(\sqrt{(β+2p)α}).    (5.6)
By performing the change of variable x = 2\sqrt{ps}/b, and setting

    α = \frac{b^2 y^2}{2p},    β = \frac{2a^2 p}{b^2},    µ = γ + r,

it can be seen that (5.6) is equivalent to the formula (Lebedev, 1972, p.134)

    \int_0^∞ J_ν(bx)\, \frac{K_µ(a\sqrt{x^2+y^2})}{(x^2+y^2)^{µ/2}}\, x^{ν+1}\, dx
      = \frac{b^ν}{a^µ}\Big(\frac{\sqrt{a^2+b^2}}{y}\Big)^{µ-ν-1} K_{µ-ν-1}\big(y\sqrt{a^2+b^2}\big).    (5.7)
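Formula (5.7) can be checked numerically with standard Bessel routines (arbitrary parameter values with ν > -1; illustrative only):

```python
from math import sqrt
from scipy.integrate import quad
from scipy.special import jv, kv

a, b, y, mu, nu = 1.0, 0.5, 1.0, 1.5, 0.3
# left side of (5.7): the K_mu factor decays exponentially, so quad converges
lhs, _ = quad(lambda x: jv(nu, b * x) * kv(mu, a * sqrt(x*x + y*y))
              / (x*x + y*y)**(mu / 2.0) * x**(nu + 1.0),
              0.0, float('inf'))
c = sqrt(a*a + b*b)
rhs = b**nu / a**mu * (c / y)**(mu - nu - 1.0) * kv(mu - nu - 1.0, y * c)
print(lhs, rhs)
```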
An extension of this result is obtained by applying part (d) of Theorem 5.1 to the same distribution GIG(α, β, γ). With the same change of variable and substitutions, we get

    \int_0^∞ \Big[J_ν(bx) - \sum_{0≤n<-ν-1} \frac{(-1)^n (bx/2)^{ν+2n}}{n!\,Γ(n+ν+1)}\Big]\, \frac{K_µ(a\sqrt{x^2+y^2})}{(x^2+y^2)^{µ/2}}\, x^{ν+1}\, dx
      = \frac{b^ν}{a^µ}\Big(\frac{\sqrt{a^2+b^2}}{y}\Big)^{µ-ν-1} K_{µ-ν-1}\big(y\sqrt{a^2+b^2}\big)
        - \frac{b^ν}{a^{ν+1} y^{µ-ν-1}} \sum_{0≤n≤-ν-1} \frac{1}{n!}\Big(-\frac{b^2 y}{2a}\Big)^n K_{µ-ν-n-1}(ay).    (5.8)

The Appendix shows another way of deriving the same formula.
6. Conclusion

This paper has shown how the moments of the integral of the square-root process may be computed. Of the three methods shown, the recursive method (Theorem 3.2) is the fastest to execute, and its programming poses no problem. The paper also showed some regularity properties of the density of the integral.

Further work will consider expressing the density of Yt as an infinite Laguerre series, as was done for the density of the integral of geometric Brownian motion in Dufresne (2000). The coefficients of the Laguerre polynomials are combinations of moments of the variable, and this is where the recursive equations of Section 3 will be essential, as they generate the moments much faster than differentiating the Laplace transform. A finite number of terms of the series would then serve as an approximation for the density. The same approach may be used for options on integrated squared volatility, or for options on average interest rates. Alternative methods are inversion of the Laplace transform, and Monte Carlo simulation.
Acknowledgments
Financial support from the Actuarial Education and Research Fund (Society of Actuaries) is gratefully acknowledged.
References

Carr, P. (2001). Replicating variance swaps. Presented at the Risk Course on Correlation, New York, May 22, 2001.

Courtadon, G. (1982). The pricing of options on default-free bonds. Journal of Financial and Quantitative Analysis 17: 75-100.

Cox, J.C., Ingersoll, J.E., and Ross, S.A. (1985). A theory of the term structure of interest rates. Econometrica 53: 385-407.

Dufresne, D. (1989). Weak convergence of random growth processes with applications to insurance. Insurance: Mathematics and Economics 8: 187-201.

Dufresne, D. (1990). The distribution of a perpetuity, with applications to risk theory and pension funding. Scand. Actuarial J. 1990: 39-79.

Dufresne, D. (2000). Laguerre series for Asian and other options. Mathematical Finance 10: 407-428.

Feller, W. (1971). An Introduction to Probability Theory and Its Applications, Volume 2 (2nd Edition). Wiley, Chichester.

Heston, S.L. (1993). A closed-form solution for options with stochastic volatility with application to bond and currency options. Rev. Fin. Studies 6: 327-343.

Karatzas, I., and Shreve, S.E. (1998). Methods of Mathematical Finance. Springer-Verlag, New York.

Kolmogorov, A.N., and Fomine, S.V. (1975). Introductory Real Analysis. Dover, New York.

Lebedev, N.N. (1972). Special Functions and their Applications. Dover, New York.

Loève, M. (1977). Probability I (4th Edition). Springer-Verlag, New York.

Oberhettinger, F., and Badii, L. (1973). Tables of Laplace Transforms. Springer-Verlag, Berlin.

Revuz, D., and Yor, M. (1999). Continuous Martingales and Brownian Motion (Third Edition). Springer-Verlag, New York.
APPENDIX
We give another proof of formula (5.8). First, we derive a slight extension of Weber's integral (Lebedev, 1972, p.132)

    \int_0^∞ e^{-a^2 x^2} J_ν(bx)\, x^{ν+1}\, dx = \frac{b^ν}{(2a^2)^{ν+1}}\, e^{-b^2/4a^2},    a, b > 0,  Re(ν) > -1.
Leave a, b > 0, but now suppose ν ≤ -1 (the case of complex ν with Re(ν) ≤ -1 remains to be explored). Consider

    \int_0^∞ e^{-a^2 x^2} \Big[J_ν(bx) - \sum_{0≤n≤-ν-1} \frac{(-1)^n (bx/2)^{ν+2n}}{n!\,Γ(ν+n+1)}\Big]\, x^{ν+1}\, dx
      = \sum_{n>-ν-1} \frac{(-1)^n (b/2)^{ν+2n}}{n!\,Γ(ν+n+1)} \int_0^∞ e^{-a^2 x^2} x^{2ν+2n+1}\, dx
      = \sum_{n>-ν-1} \frac{(-1)^n (b/2)^{ν+2n}}{n!\,Γ(ν+n+1)}\, \frac{1}{2}\, a^{-2ν-2n-2} \int_0^∞ e^{-u} u^{ν+n}\, du
      = \sum_{n>-ν-1} \frac{(-1)^n (b/2)^{ν+2n}}{n!}\, \frac{1}{2}\, a^{-2ν-2n-2}
      = \frac{b^ν}{(2a^2)^{ν+1}} \Big[e^{-b^2/4a^2} - \sum_{0≤n≤-ν-1} \frac{1}{n!}\Big(-\frac{b^2}{4a^2}\Big)^n\Big].
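The extended Weber integral can be checked numerically; with ν = -1.2, only the n = 0 term of the series is subtracted (parameter values arbitrary, sketch only):

```python
from math import exp, gamma
from scipy.integrate import quad
from scipy.special import jv

a, b, nu = 1.0, 1.0, -1.2            # nu <= -1: only the n = 0 term is subtracted

def integrand(x):
    # J_nu(bx) minus the leading term of its series, times exp(-a^2 x^2) x^{nu+1};
    # the subtraction cancels the non-integrable singularity at the origin
    corr = (b * x / 2.0)**nu / gamma(nu + 1.0)
    return exp(-a*a*x*x) * (jv(nu, b * x) - corr) * x**(nu + 1.0)

lhs, _ = quad(integrand, 0.0, float('inf'))
rhs = b**nu / (2.0 * a * a)**(nu + 1.0) * (exp(-b*b / (4.0*a*a)) - 1.0)
print(lhs, rhs)
```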
From this formula, we obtain

    \int_0^∞ \Big[J_ν(bx) - \sum_{0≤n<-ν-1} \frac{(-1)^n (bx/2)^{ν+2n}}{n!\,Γ(n+ν+1)}\Big]\, \frac{K_µ(a\sqrt{x^2+y^2})}{(x^2+y^2)^{µ/2}}\, x^{ν+1}\, dx
      = \frac{a^µ}{2^{µ+1}} \int_0^∞ dx \Big[J_ν(bx) - \sum_{0≤n≤-ν-1} \frac{(-1)^n (bx/2)^{ν+2n}}{n!\,Γ(n+ν+1)}\Big]\, x^{ν+1} \int_0^∞ \frac{dt}{t^{µ+1}}\, e^{-t - a^2(x^2+y^2)/4t}
      = \frac{a^µ}{2^{µ+1}} \int_0^∞ \frac{dt}{t^{µ+1}}\, e^{-t - a^2 y^2/4t} \int_0^∞ dx\, e^{-a^2 x^2/4t} \Big[J_ν(bx) - \sum_{0≤n≤-ν-1} \frac{(-1)^n (bx/2)^{ν+2n}}{n!\,Γ(n+ν+1)}\Big]\, x^{ν+1}
      = \frac{a^µ}{2^{µ+1}} \int_0^∞ \frac{dt}{t^{µ+1}}\, e^{-t - a^2 y^2/4t}\, \frac{b^ν}{(a^2/2t)^{ν+1}} \Big[e^{-b^2 t/a^2} - \sum_{0≤n≤-ν-1} \frac{1}{n!}\Big(-\frac{b^2 t}{a^2}\Big)^n\Big]
      = 2^{ν-µ} a^{µ-2ν-2} b^ν \Big[\int_0^∞ \frac{dt}{t^{µ-ν}}\, e^{-t - a^2 y^2/4t - b^2 t/a^2} - \sum_{0≤n≤-ν-1} \frac{1}{n!}\Big(-\frac{b^2}{a^2}\Big)^n \int_0^∞ \frac{dt}{t^{µ-ν-n}}\, e^{-t - a^2 y^2/4t}\Big]
      = \frac{b^ν}{a^µ}\Big(\frac{\sqrt{a^2+b^2}}{y}\Big)^{µ-ν-1} K_{µ-ν-1}\big(y\sqrt{a^2+b^2}\big)
        - \frac{b^ν}{a^{ν+1} y^{µ-ν-1}} \sum_{0≤n≤-ν-1} \frac{1}{n!}\Big(-\frac{b^2 y}{2a}\Big)^n K_{µ-ν-n-1}(ay).