Fourier inversion formulas in option pricing
and insurance
Daniel Dufresne (1), Jose Garrido (2) and Manuel Morales (3)

(1) Centre for Actuarial Studies, University of Melbourne, Australia
(2) Department of Mathematics and Statistics, Concordia University, Montreal, Canada
(3) Department of Mathematics and Statistics, Université de Montréal, Canada
December 21, 2005
Abstract
Several authors have used Fourier inversion to compute option
prices. In insurance, the expected value of max(S − K, 0) also arises
in excess-of-loss or stop-loss insurance, and similar techniques may
be used. Lewis (2001) used Parseval’s theorem to find formulas for
option prices in terms of the characteristic function of the log-price.
This paper aims at taking the same idea further: (1) formulas requiring weaker assumptions; (2) relationship with classical inversion
theorems; (3) formulas for payoffs which occur in insurance.
1 Introduction
Lewis (2001) gives formulas which price options without having first to find
the distribution of the underlying, by applying Parseval’s theorem. All that is
needed is the characteristic function (= Fourier transform) of the logarithm of
underlying and the Fourier transform of the payoff function. Fourier methods
are applied to option pricing in several other papers, for instance Bakshi &
Madan (2000), Carr & Madan (1999), Heston (1993), Lee (2004), Raible
(2000).
In insurance, the payoff
(S − K)+ = max(S − K, 0)
also occurs in excess-of-loss or stop-loss contracts, so Parseval’s theorem
might also be used to find the pure premiums. This paper explores the
computation of both option prices and insurance premiums via Parseval’s
theorem in a unified setting.
The mathematical problem is the same in insurance as in option pricing,
that is, the computation of E g(S) for some function g(·). The difference is
that in many cases option pricing models focus on the logarithm of S (the
“log-price”), while insurance applications are usually phrased in terms of the
distribution of S itself. For instance, the Black-Scholes formula for a call
option is the expectation of the payoff (S − K)+ , where log S has a normal
distribution; more recent models also specify the distribution of the log-price,
rather than S itself. The consequence is that the Fourier transform which is
likely to be known is that of X = log S. This explains the particular form of
the formulas in Lewis (2001).
By contrast, in insurance applications S often has a distribution for which
the Fourier transform E exp(iuS) is known. This is why we will identify two
different classes of inversion formulas: (1) those where the Fourier transform
of log(S) is known (and thus appears in the inversion formula), and (2) those
where the Fourier transform of S itself appears. The
first kind of inversion formula will be referred to as “Mellin-type”, since it is
the Mellin transform E exp(iu log(S)) = E S iu which is used, and the other
kind will be called “Fourier-type”. The formulas in Lewis (2001) are thus all
of Mellin type, while the insurance examples of Section 4 are all of Fourier
type. We do not suggest that this classification is essential, or that it neatly
differentiates option pricing from insurance (it does not), but we found it
useful in presenting a unified view of the applications of Parseval’s theorem
to option pricing and insurance.
Section 2 states the particular form of the Parseval theorem we will use,
and recalls two standard theorems of probability theory which are directly
related to the pricing formulas which follow. Section 3 gives the main results
of the paper. Lewis (2001) gives formulas which require the finiteness of
E S^α for some α different from 0; this is good enough in many cases, but not
always feasible. We give general formulas which do not require this type of
assumption (this is where our formulas are reminiscent of the classical inversion formulas for distribution functions). Section 4 gives some applications.
(N.B. This working paper does not contain all the numerical computations.)
The appendices contain some background on Fourier transforms and also the
longer proofs.
Notation. We denote by $F_X(x) = P\{X \le x\}$ the distribution function of $X$;
$\mu_X$ is the measure on $\mathbb{R}$ induced by a random variable $X$, that is, $\mu_X(B) =
P\{X \in B\}$, $B$ a Borel subset of $\mathbb{R}$. The Fourier transform of a function
$f : \mathbb{R} \to \mathbb{C}$ (when it exists) is denoted
$$\hat f(u) = \int_{\mathbb{R}} e^{iux} f(x)\,dx.$$
The Fourier transform (often called the characteristic function) of the distribution of $X$ is denoted
$$\hat\mu_X(u) = \int_{\mathbb{R}} e^{iux}\,\mu_X(dx) = E\,e^{iuX}.$$
2 Preliminaries

2.1 Parseval's theorem
Theorem A.6 (Parseval's theorem, see Appendix A) says that
$$E\,g(X) = \frac{1}{2\pi}\,PV \int_{-\infty}^{\infty} \hat g(-u)\,\hat\mu_X(u)\,du$$
if, among other things, the function $g$ is integrable. This is not the case for
the functions
$$g_1(x) = (e^x - K)_+ \quad \text{or} \quad g_2(x) = (K - e^x)_+, \qquad (2.1)$$
and Theorem A.6 cannot be applied directly. Lewis (2001) proposed to work
around this problem by using an exponential damping factor. For any function $\varphi$, let
$$\varphi^{\alpha}(x) = e^{\alpha x}\,\varphi(x), \qquad x \in \mathbb{R}.$$
The Fourier transform of $\varphi^{\alpha}$ will be denoted $\widehat{\varphi^{\alpha}}$.
If $X$ has a probability density function $f_X$, and if it happens that both
$g^{-\alpha}$ and $f_X^{\alpha}$ are in $L^1$, then Theorem A.6 implies
$$E\,g(X) = \int_{-\infty}^{\infty} g^{-\alpha}(x)\,f_X^{\alpha}(x)\,dx = \frac{1}{2\pi}\,PV \int_{-\infty}^{\infty} \widehat{g^{-\alpha}}(-u)\,\widehat{f_X^{\alpha}}(u)\,du.$$
It turns out that the payoff functions $g_1$ and $g_2$ above are integrable when
multiplied by suitable exponential functions, and thus Parseval's identity can
be used to compute option prices, provided that $f_X^{\alpha}$ is integrable.
Lewis (2001) assumed that X has a probability density function; we reformulate his idea by removing this assumption, as it does not hold in all
applications. For a signed measure $\mu$ on $\mathbb{R}$ and $\alpha \in \mathbb{R}$, define a new signed
measure $\mu^{\alpha}$ by
$$\mu^{\alpha}(dx) = e^{\alpha x}\,\mu(dx).$$
Then the Fourier transform of $\mu_X^{\alpha}$ is
$$\widehat{\mu_X^{\alpha}}(u) = E\,e^{(iu + \alpha)X} = \hat\mu_X(u - i\alpha).$$
Theorem A.6 immediately yields the following result, which will be used
in the rest of this paper.
Theorem 2.1 Let $X$ be a random variable, and suppose that for a particular
$\alpha \in \mathbb{R}$,
(a) $E(e^{\alpha X}) < \infty$,
(b) $g^{-\alpha} \in L^1$,
(c) the function $y \mapsto E\,g(y + X)$ is continuous at the origin and satisfies
condition (b) of Theorem A.5.
Then
$$E\,g(X) = \frac{1}{2\pi}\,PV \int_{-\infty}^{\infty} \widehat{g^{-\alpha}}(-u)\,\widehat{\mu_X^{\alpha}}(u)\,du = \frac{1}{2\pi}\,PV \int_{-\infty}^{\infty} \hat g(-u + i\alpha)\,\hat\mu_X(u - i\alpha)\,du.$$
(N.B. The function $y \mapsto E\,g(y + X)$ is not always continuous; for instance,
consider $X \equiv 1$ and $g(x) = I_{\{x > 1\}}$.)
The following lemmas give sufficient conditions for condition (b) of Theorem A.5 to hold.
Lemma 2.1 Condition (b2) of Theorem A.5 is satisfied if $g$ has bounded
variation.
Proof. Define $G(y) = E\,g(y + X)$ and let $Vg$ be the total variation of $g(\cdot)$ over $\mathbb{R}$.
If $\{y_j\}$ is an increasing sequence, then
$$\sum_j |\Delta G(y_j)| \le \sum_j \int |g(y_j + x) - g(y_{j-1} + x)|\,d\mu_X(x) \le \int Vg\,d\mu_X(x) = Vg < \infty.$$
Lemma 2.2 (a) Suppose that there are $a_1 < a_2 < \cdots < a_n$ such that
(i) $P\{X = a_j\} = 0$ for $j = 1, \dots, n$,
(ii) $g(\cdot)$ is uniformly bounded and piecewise continuous over $(-\infty, a_1)$, $(a_1, a_2)$,
$\dots$, $(a_{n-1}, a_n)$, $(a_n, \infty)$, and has finite limits $g(a_j-)$, $g(a_j+)$.
Then $G(y) = E\,g(y + X)$ is continuous at $y = 0$.
(b) If $E\,X_+ < \infty$, then $G(y) = E(y + X - K)_+$ is continuous at $y = 0$.
Proof. (a) Write
$$E\,g(y + X) = \sum_{k=1}^{n+1} E[g(y + X)\,I_k(X)],$$
where $\{I_k(\cdot)\}$ are the indicator functions of the intervals $(-\infty, a_1)$, $(a_1, a_2)$,
$\dots$, $(a_{n-1}, a_n)$, $(a_n, \infty)$. Then, as $y \to 0$,
$$g(y + X(\omega))\,I_k(X(\omega)) \to g(X(\omega))\,I_k(X(\omega))$$
for almost all $\omega$. Since $g$ is uniformly bounded,
$$E[g(y + X)\,I_k(X)] \to E[g(X)\,I_k(X)]$$
by dominated convergence, which yields the result.
To prove part (b), observe that $(y + X - K)_+ \le X_+ + (y - K)_+$ and the
result follows by dominated convergence.
2.2 Two classical theorems
We present two standard theorems which are intimately related to the option
or stop-loss formulas which follow. Each expresses the distribution function
of a random variable as a Fourier inversion integral. The best known proofs
of these results (see Lukacs, 1970, p. 31, and Kendall & Stuart, 1977, p. 97)
rely on Dirichlet integrals, but Appendix B gives proofs based on Parseval’s
theorem.
Theorem 2.2 If $a$ and $a + h$ are continuity points of $F_X(\cdot)$, then
$$F_X(a + h) - F_X(a) = \frac{1}{2\pi}\,PV \int_{-\infty}^{\infty} \frac{1 - e^{-iuh}}{iu}\,e^{-iua}\,\hat\mu_X(u)\,du.$$
Theorem 2.3 If $F_X(\cdot)$ is continuous at $x = b$, then
$$F_X(b) = \frac{1}{2} + \frac{1}{2\pi}\,PV \int_0^{\infty} \frac{1}{iu}\left[e^{iub}\,\hat\mu_X(-u) - e^{-iub}\,\hat\mu_X(u)\right] du.$$
In option pricing, Theorem 2.3 leads to the well-known formula
$$E(e^X - K)_+ = E(e^X)\,\Pi_1 - K\,\Pi_2,$$
where
$$\Pi_1 = \frac{E[e^X 1_{\{e^X > K\}}]}{E(e^X)} = \frac{1}{2} + \frac{1}{\pi}\int_0^{\infty} \mathrm{Re}\left[\frac{K^{-iu}\,\hat\mu_X(u - i)}{iu\,\hat\mu_X(-i)}\right] du, \qquad \Pi_2 = P\{e^X > K\}.$$
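This decomposition can be checked numerically in the Black-Scholes setting: for $X = \log S \sim N(m, s^2)$ one has $\hat\mu_X(u) = \exp(ium - s^2u^2/2)$, and $\Pi_1$ equals $\Phi((m + s^2 - \log K)/s)$ in closed form. The sketch below is ours (the function names and the plain Simpson rule are not from the paper) and uses only the Python standard library:

```python
import cmath
import math

def norm_cdf(x):
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def pi1_by_inversion(m, s, K, n=20000):
    """Pi_1 = 1/2 + (1/pi) int_0^inf Re[K^{-iu} mu(u-i)/(iu mu(-i))] du
    for X = log S ~ N(m, s^2); plain Simpson rule (a sketch)."""
    mu = lambda z: cmath.exp(1j * z * m - 0.5 * s * s * z * z)  # char. fn of X
    e_s = mu(-1j).real                                          # E e^X = exp(m + s^2/2)
    u_max = 40.0 / s              # integrand decays like exp(-s^2 u^2 / 2)
    step = (u_max - 1e-8) / n
    acc = 0.0
    for k in range(n + 1):
        u = 1e-8 + k * step       # start just off u = 0 (the 0/0 point)
        w = 1.0 if k in (0, n) else (4.0 if k % 2 else 2.0)
        f = K ** (-1j * u) * mu(u - 1j) / (1j * u * e_s)
        acc += w * f.real
    return 0.5 + (step / 3.0) * acc / math.pi
```

The Gaussian factor makes the integrand decay rapidly, so truncating the PV integral at $u = 40/s$ is harmless here; heavier-tailed characteristic functions require a more careful choice of truncation point.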
2.3 Mellin-type and Fourier-type formulas
Lewis (2001) considers payoffs which are explicit functions of eX , such as the
usual call and put payoffs g1 and g2 in (2.1). This is because most financial
models are expressed in terms of the log-price. For instance, a formula for
$E(S - K)_+$ is obtained in terms of
$$E\,e^{iu \log S} = E\,S^{iu}. \qquad (2.2)$$
The insurance applications considered in Section 4, however, also lead to expressions of the type $E(S - K)_+$, but the inversion formulas are in terms of the
Fourier transform $E(e^{iuS})$.
The expression in Eq. (2.2) is known as the Mellin transform of the distribution of $S$. In order to distinguish these two situations, we will call
"Mellin-type" the formulas where $E\,S^{iu}$ appears, and "Fourier-type" those
where $E(e^{iuS})$ appears.
3 Inversion formulas
In this section, formulas are derived for the expectations of the payoffs $g_1$
and $g_2$ in (2.1). In each case, Parseval's theorem yields an inversion integral
along the line $u - i\alpha$ in the complex plane, if $\alpha$ can be found such that (i) $g^{-\alpha}$
is in $L^1$ and (ii) $E \exp(\alpha X)$ is finite. It is not always possible to find such an $\alpha$,
depending on the function $g$ considered and also on the distribution of $X$. For
this reason, we derive general formulas which do not assume that such an $\alpha \ne 0$
exists. In each case the approach is the same: truncate the distribution of
$X$ in such a way that Parseval's theorem applies for some $\alpha \ne 0$; next, let
$\alpha$ tend to 0, and, finally, remove the truncation of the distribution of $X$.
The similarity of these general "no-$\alpha$" formulas with the classical theorems
of Section 2 will be noted.
An important point to keep in mind in what follows is that if there is
$\alpha > 0$ such that $E \exp(\alpha X) < \infty$, then necessarily $E \exp(\alpha' X) < \infty$ for
$0 < \alpha' < \alpha$ (and similarly for $\alpha < 0$). The set of $\alpha$ such that $g^{-\alpha} \in L^1$,
when it exists, is either an interval or a single point. Hence, the set of $\alpha$
such that both $E \exp(\alpha X) < \infty$ and $g^{-\alpha} \in L^1$ is either empty or an interval
(possibly reduced to a single point). This has numerical implications, since
the observed accuracy of the integral formula often varies with α within the
allowed interval.
3.1 Mellin-type formulas
The proof of the next theorem can be found in Appendix B.
Theorem 3.1 Let $S \ge 0$, $K > 0$ and
$$h(u) = \frac{K^{-iu + 1}}{iu(iu - 1)}\,E(S^{iu}).$$
(a) If there exists $\alpha < 0$ such that $E(S^{\alpha}) < \infty$, then
$$E(K - S)_+ = K\,P\{S = 0\} + \frac{1}{2\pi}\,PV \int_{-\infty}^{\infty} h(u - i\alpha)\,du.$$
If, moreover, $E(S) < \infty$, then
$$E(S - K)_+ = E\,S - K\,[1 - P\{S = 0\}] + \frac{1}{2\pi}\,PV \int_{-\infty}^{\infty} h(u - i\alpha)\,du.$$
(b) In all cases,
$$E(K - S)_+ = \frac{K}{2}\,[1 + P\{S = 0\}] + \frac{1}{\pi}\,PV \int_0^{\infty} \mathrm{Re}[h(u)]\,du.$$
If $E(S) < \infty$,
$$E(S - K)_+ = E\,S - \frac{K}{2}\,[1 - P\{S = 0\}] + \frac{1}{\pi}\,PV \int_0^{\infty} \mathrm{Re}[h(u)]\,du.$$
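Part (b) can also be checked numerically in the lognormal case, $\log S \sim N(m, s^2)$, where $E(S^{iu}) = \exp(ium - s^2u^2/2)$ and the put price has a Black-Scholes-type closed form. The quadrature scheme and helper names below are ours (a sketch under these assumptions, not code from the paper):

```python
import cmath
import math

def norm_cdf(x):
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def put_mellin(m, s, K, n=20000):
    """E(K - S)_+ via Theorem 3.1(b) for lognormal S, log S ~ N(m, s^2),
    where E(S^{iu}) = exp(ium - s^2 u^2 / 2).  Plain Simpson rule (a sketch)."""
    u_max = 40.0 / s
    step = (u_max - 1e-8) / n
    acc = 0.0
    for k in range(n + 1):
        u = 1e-8 + k * step
        w = 1.0 if k in (0, n) else (4.0 if k % 2 else 2.0)
        mellin = cmath.exp(1j * u * m - 0.5 * s * s * u * u)
        hu = K ** (1.0 - 1j * u) * mellin / (1j * u * (1j * u - 1.0))
        acc += w * hu.real
    return K / 2.0 + (step / 3.0) * acc / math.pi

def put_closed(m, s, K):
    """Black-Scholes-type closed form for the same put (P{S = 0} = 0 here)."""
    d2 = (m - math.log(K)) / s
    return K * norm_cdf(-d2) - math.exp(m + 0.5 * s * s) * norm_cdf(-(d2 + s))
```

The real part of $h(u)$ stays bounded as $u \to 0$, so starting the quadrature just off the origin is enough; no special PV treatment is needed at that end.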
3.1.1 Confirmation in the case where S has a discrete distribution
Suppose that $X \equiv x_0$, and write $K = e^c$. This means that $\hat\mu_X(u) = e^{iux_0}$, and
$$h(u) = -\frac{e^c\,e^{iu(x_0 - c)}}{iu(1 - iu)}.$$
Then
$$\mathrm{Re}[h(u)] = -\frac{e^c \cos(u\beta)}{1 + u^2} - \frac{e^c \sin(u\beta)}{u(1 + u^2)}, \qquad \beta = x_0 - c.$$
Now
$$\frac{1}{\pi}\int_0^{\infty} \frac{\cos(u\beta)}{1 + u^2}\,du = \frac{1}{2}\int_{-\infty}^{\infty} \frac{e^{iu\beta}}{\pi(1 + u^2)}\,du = \frac{1}{2}\,e^{-|\beta|}$$
(this is $\frac{1}{2}$ times the characteristic function of the Cauchy distribution). Also,
letting $\mathrm{sign}\,\beta = I_{\{\beta > 0\}} - I_{\{\beta < 0\}}$,
$$\frac{1}{\pi}\int_0^{\infty} \frac{\sin(u\beta)}{u(1 + u^2)}\,du = \frac{1}{\pi}\int_0^{\infty}\!\int_0^{\beta} \frac{\cos(uy)}{1 + u^2}\,dy\,du = \frac{1}{\pi}\int_0^{\beta} \frac{\pi}{2}\,e^{-|y|}\,dy = (\mathrm{sign}\,\beta)\,\frac{1}{2}\,(1 - e^{-|\beta|}).$$
Hence,
$$\frac{e^c}{2} + \frac{1}{\pi}\int_0^{\infty} \mathrm{Re}[h(u)]\,du = \frac{e^c}{2} - \frac{e^c}{2}\left[e^{-|\beta|} + (\mathrm{sign}\,\beta)(1 - e^{-|\beta|})\right] = \frac{e^c}{2}\,[1 - \mathrm{sign}\,\beta]\,(1 - e^{-|\beta|}) = (e^c - e^{x_0})_+.$$
Since a discrete distribution is a combination of degenerate distributions, this
shows that the formula is correct for discrete random variables $S > 0$. It is
true for any $S \ge 0$, because if $P\{S = 0\} = 1$, then
$$E(K - S)_+ = K,$$
while the formula in part (b) gives
$$K + \frac{1}{2\pi}\int_0^{\infty} \frac{e^c}{iu}\left[e^{iuc}\,\frac{E(0^{-iu})}{1 + iu} - e^{-iuc}\,\frac{E(0^{iu})}{1 - iu}\right] du = K,$$
since $E(0^{\pm iu}) = 0$ under the convention $0^{iu} = 0$.

3.2 Fourier-type formulas
We now look at formulas for the payoffs
$$g_3(x) = (x - K)_+, \qquad g_4(x) = (K - x)_+\,I_{\{0 \le x \le K\}}$$
in terms of $\hat\mu_X(u) = E(e^{iuX})$. Exponential damping factors $e^{\alpha x}$ may be used
just as in the previous section, but we show that one may also use polynomial
damping factors.
3.2.1 Exponential damping factors
Theorem 3.2 Let $X \ge 0$.
(a) If there exists $\alpha > 0$ such that $E(e^{\alpha X}) < \infty$, then $E(X - K)_+ < \infty$ for
$K \in \mathbb{R}$ and
$$E(X - K)_+ = \frac{1}{2\pi}\,PV \int_{-\infty}^{\infty} \hat g_3(-u + i\alpha)\,\hat\mu_X(u - i\alpha)\,du, \qquad \text{where} \quad \hat g_3(z) = -\frac{e^{izK}}{z^2}.$$
(b) For any $\alpha \in \mathbb{R}$ such that $E(e^{\alpha X}) < \infty$ (including $\alpha = 0$) and $K \ge 0$,
$$E(K - X)_+ = \frac{1}{2\pi}\,PV \int_{-\infty}^{\infty} \hat g_4(-u + i\alpha)\,\hat\mu_X(u - i\alpha)\,du, \qquad \text{where} \quad \hat g_4(z) = \frac{1}{z^2}\,(1 + izK - e^{izK}).$$
Proof. Part (a) is a direct application of Parseval's theorem and Lemma
2.2, given that if $\mathrm{Im}(z) > 0$, then, for any $K$,
$$\hat g_3(z) = \int_K^{\infty} (x - K)\,e^{izx}\,dx = e^{izK}\int_0^{\infty} y\,e^{izy}\,dy = -\frac{e^{izK}}{z^2}.$$
For part (b), it is clear that $g_4^{-\alpha} = g_4^{-\alpha} I_{[0,K]} \in L^1$ for any $\alpha \in \mathbb{R}$; also, $g_4$ is continuous
everywhere except at $x = 0$. Provided $\hat\mu_X(-i\alpha) < \infty$, we can thus apply
Parseval's theorem and Lemma 2.2, with
$$\hat g_4(z) = \int_0^K (K - x)\,e^{izx}\,dx = \frac{1}{z^2}\,(1 + izK - e^{izK}).$$
In Section 3.2.3, a variation on part (a) of this theorem is given which
does not require that $E(e^{\alpha X}) < \infty$ for any $\alpha > 0$.
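Part (b) with $\alpha = 0$ can be verified directly for $X \sim \exp(1)$, since then $E(K - X)_+ = K - 1 + e^{-K}$ in closed form. The following sketch (our code, not the paper's) evaluates the inversion integral, with a series branch of $\hat g_4$ near the origin to avoid numerical cancellation:

```python
import cmath
import math

def g4_hat(z, K):
    """Fourier transform of g4: (1 + izK - e^{izK})/z^2, with a series branch
    near z = 0 to avoid catastrophic cancellation."""
    if abs(z) * K < 1e-4:
        return K * K / 2.0 + 1j * z * K ** 3 / 6.0
    return (1.0 + 1j * z * K - cmath.exp(1j * z * K)) / (z * z)

def put_fourier(K, u_max=2000.0, n=200000):
    """E(K - X)_+ for X ~ exp(1) by Theorem 3.2(b) with alpha = 0 (a sketch).
    The integrand is conjugate-symmetric, so the PV integral over R equals
    2 int_0^inf of the real part."""
    h = u_max / n
    acc = 0.0
    for k in range(n + 1):
        u = k * h
        w = 1.0 if k in (0, n) else (4.0 if k % 2 else 2.0)
        mu = 1.0 / (1.0 - 1j * u)          # characteristic function of exp(1)
        acc += w * (g4_hat(-u, K) * mu).real
    return (h / 3.0) * acc / math.pi
```

The integrand decays only like $K/u^2$ here, so the truncation point matters; the tolerance in the check below accounts for the truncated tail.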
3.2.2 Polynomial damping factors
An alternative to the formulas in Theorem 3.2 is to use a polynomial damping
factor. For $\beta \in \{1, 2, \dots\}$ and $c > 0$, let
$$g^{[-\beta]}(x) = (1 + cx)^{-\beta}\,g(x), \qquad \mu_X^{[\beta]}(dx) = (1 + cx)^{\beta}\,\mu_X(dx).$$
For the payoff $g_3(x) = (x - K)_+$, given $\beta \ge 2$,
$$\widehat{g_3^{[-\beta]}}(u) = \int_{\mathbb{R}} \frac{e^{iux}(x - K)_+}{(1 + cx)^{\beta}}\,dx = \frac{e^{iuK}}{c^2 (1 + cK)^{\beta - 2}}\,\Psi(2, 3 - \beta, -iu(1 + cK)/c), \qquad (3.1)$$
where
$$\Psi(\alpha, \gamma, z) = \frac{1}{\Gamma(\alpha)}\int_0^{\infty} \frac{e^{-zt}\,t^{\alpha - 1}}{(1 + t)^{\alpha - \gamma + 1}}\,dt, \qquad \alpha > 0,$$
is the confluent hypergeometric function of the second kind. (The integral
formula above holds (i) for $\mathrm{Re}(z) > 0$ and also (ii) for $\mathrm{Re}(z) = 0$, $\mathrm{Im}(z) \ne 0$
if $\gamma \le 1$; for more details, see Lebedev, 1972, Chapter 9.)
The function $\Psi$ above may be expressed in terms of the incomplete gamma
function; since
$$\Psi(2, 3 - \beta, z) = \Psi(1, 3 - \beta, z) - \Psi(1, 2 - \beta, z),$$
the formula in (3.1) may be written in terms of
$$\Psi(1, \gamma, z) = z^{1 - \gamma}\,e^z\,\Gamma(\gamma - 1, z), \qquad (3.2)$$
where
$$\Gamma(a, z_0) = \int_{z_0}^{\infty} x^{a - 1}\,e^{-x}\,dx, \qquad |\arg(z_0)| < \pi.$$
Because $\beta$ is a positive integer, an alternative is to use integration by
parts to show that, if $n = 0, 1, 2, 3, \dots$,
$$\Psi(1, -n, z) = \sum_{j=0}^{n} \frac{(-z)^j}{(n + 1 - j)_{j+1}} + \frac{(-z)^{n+1}}{(n + 1)!}\,\Psi(1, 1, z), \qquad \mathrm{Re}(z) \ge 0.$$
The remaining hypergeometric function $\Psi(1, 1, z)$ may in turn be written as
an incomplete gamma function (see (3.2)), or else as
$$\Psi(1, 1, z) = -e^z\,\mathrm{Ei}(-z) = e^z\,E_1(z), \qquad E_1(z) = \int_z^{\infty} \frac{e^{-t}}{t}\,dt, \quad |\arg(z)| < \pi.$$
Here $\mathrm{Ei}(\cdot)$ and $E_1(\cdot)$ are the exponential integral functions. For more details on the special
functions above, see Abramowitz and Stegun (1970) or Lebedev (1972).
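For real $z > 0$ the finite-sum reduction of $\Psi(1, -n, z)$ can be checked numerically, since every $\Psi(1, \gamma, z)$ is then an elementary integral. The quadrature below is our own sketch:

```python
import math

def simpson(f, a, b, n=20000):
    """Composite Simpson rule on [a, b] (n even)."""
    h = (b - a) / n
    acc = f(a) + f(b)
    for k in range(1, n):
        acc += (4.0 if k % 2 else 2.0) * f(a + k * h)
    return acc * h / 3.0

def psi_1(gamma, z, t_max=60.0):
    """Psi(1, gamma, z) = int_0^inf e^{-zt} (1+t)^{gamma-2} dt, real z > 0;
    the exponential factor makes truncation at t_max harmless."""
    return simpson(lambda t: math.exp(-z * t) * (1.0 + t) ** (gamma - 2.0), 0.0, t_max)

def psi_1_minus_n(n, z):
    """Finite-sum formula: sum_j (-z)^j/(n+1-j)_{j+1} + (-z)^{n+1}/(n+1)! Psi(1,1,z)."""
    def poch(a, m):  # Pochhammer symbol (a)_m = a (a+1) ... (a+m-1)
        return math.prod(range(a, a + m))
    s = sum((-z) ** j / poch(n + 1 - j, j + 1) for j in range(n + 1))
    return s + (-z) ** (n + 1) / math.factorial(n + 1) * psi_1(1, z)
```

For $n = 0$ the formula reduces to $\Psi(1, 0, z) = 1 - z\,\Psi(1, 1, z)$, which is exactly one integration by parts of the defining integral; the general case iterates this step.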
For $\beta \ge 2$, the Fourier transform of
$$g_4^{[-\beta]}(x) = (1 + cx)^{-\beta}\,(K - x)_+$$
may be found in the obvious way: since, for $x \ge 0$,
$$g_4^{[-\beta]}(x) = g_3^{[-\beta]}(x) + (1 + cx)^{-\beta}\,(K - x),$$
we get
$$\widehat{g_4^{[-\beta]}}(u) = \widehat{g_3^{[-\beta]}}(u) + \int_0^{\infty} \frac{e^{iux}(K - x)}{(1 + cx)^{\beta}}\,dx = \widehat{g_3^{[-\beta]}}(u) + \left(\frac{K}{c} + \frac{1}{c^2}\right)\Psi(1, 2 - \beta, -iu/c) - \frac{1}{c^2}\,\Psi(1, 3 - \beta, -iu/c).$$
The case $\beta = 1$ is different:
$$\widehat{g_4^{[-1]}}(u) = \frac{cK + 1}{c^2}\left[\Psi(1, 1, -iu/c) - e^{iuK}\,\Psi(1, 1, -iu(1 + cK)/c)\right] - \frac{e^{iuK} - 1}{iuc}.$$
As to the Fourier transform of $\mu_X^{[\beta]}$,
$$\widehat{\mu_X^{[\beta]}}(u) = \sum_{k=0}^{\beta} \binom{\beta}{k}\,c^k \int_{\mathbb{R}} x^k\,e^{iux}\,d\mu_X(x) = \sum_{k=0}^{\beta} \binom{\beta}{k}\,(-ci)^k\,\frac{\partial^k}{\partial u^k}\,\hat\mu_X(u).$$
Recall that if $E\,X^k < \infty$, then $\frac{\partial^k}{\partial u^k}\hat\mu_X(u)$ exists for all $u \in \mathbb{R}$.
Theorem 3.3 Let $X \ge 0$, $c > 0$.
(a) If $E\,X^{\beta} < \infty$ and $\beta \in \{2, 3, \dots\}$,
$$E(X - K)_+ = \frac{1}{2\pi}\,PV \int_{-\infty}^{\infty} \widehat{g_3^{[-\beta]}}(-u) \sum_{k=0}^{\beta} \binom{\beta}{k}\,(-ci)^k\,\frac{\partial^k}{\partial u^k}\,\hat\mu_X(u)\,du.$$
(b) If $P\{X = 0\} = 0$, $E\,X^{\beta} < \infty$, $c > 0$ and $\beta \in \{0, 1, 2, 3, \dots\}$,
$$E(K - X)_+ = \frac{1}{2\pi}\,PV \int_{-\infty}^{\infty} \widehat{g_4^{[-\beta]}}(-u) \sum_{k=0}^{\beta} \binom{\beta}{k}\,(-ci)^k\,\frac{\partial^k}{\partial u^k}\,\hat\mu_X(u)\,du.$$
3.2.3 Ladder height distributions
In this section, we show how an inversion formula can be found for $E(X - K)_+$
as a direct application of Theorem 2.3. We know that, for any $X \ge 0$,
$$E(X - K)_+ = \int_K^{\infty} P(X > y)\,dy.$$
Define a new variable $\overline{X}$ whose distribution (sometimes called the "ladder height" distribution associated with $X$) has density
$$f_{\overline{X}}(x) = \frac{P(X > x)}{E\,X}\,1_{\{x > 0\}}.$$
Then
$$E(X - K)_+ = (E\,X)\,P(\overline{X} > K).$$
Theorem 3.4 If $X \ge 0$, $E(X) < \infty$ and $K \ge 0$,
$$E(X - K)_+ = \frac{E\,X}{2} + \frac{1}{\pi}\,PV \int_0^{\infty} \mathrm{Re}\left[\frac{e^{-iuK}\,(\hat\mu_X(u) - 1)}{(iu)^2}\right] du \qquad (3.3)$$
$$= \frac{1}{2\pi}\,PV \int_{-\infty}^{\infty} \frac{e^{-iuK}\,[\hat\mu_X(u) - 1 - iu\,E(X)]}{(iu)^2}\,du. \qquad (3.4)$$
Proof. With $\overline{X}$ the ladder height variable defined above, an easy calculation yields
$$\hat\mu_{\overline{X}}(u) = \frac{\hat\mu_X(u) - 1}{iu\,E\,X}.$$
Theorem 2.3 says that
$$P\{\overline{X} > K\} = \frac{1}{2} + \frac{1}{\pi}\int_0^{\infty} \mathrm{Re}\left[\frac{e^{-iuK}\,\hat\mu_{\overline{X}}(u)}{iu}\right] du,$$
which implies (3.3).
Next, apply the same idea to $\overline{X}$, to get
$$f_{\overline{\overline{X}}}(x) = \frac{P(\overline{X} > x)}{E\,\overline{X}}\,1_{\{x > 0\}} = \frac{E(X - x)_+}{E(X^2)/2},$$
$$\hat\mu_{\overline{\overline{X}}}(u) = \frac{\hat\mu_{\overline{X}}(u) - 1}{iu\,E\,\overline{X}} = \frac{\hat\mu_X(u) - 1 - iu\,E\,X}{(iu)^2\,E(X^2)/2}$$
(this requires $E\,X^2 < \infty$). The function $f_{\overline{\overline{X}}}(\cdot)$ is integrable and differentiable;
therefore, apply Theorem A.5 to get (3.4). To remove the assumption that
$E(X^2) < \infty$, truncate the distribution of $X$ by defining $X^a = X \wedge a$; formula
(3.4) holds for $X^a$. Next, let $a$ tend to infinity. The integral is unchanged in the limit,
because $E(e^{iuX^a})$ tends to $E(e^{iuX})$ uniformly in $u$.
The above formulas are similar to the one in part (a) of Theorem 3.2. The
main difference is that Theorem 3.2(a) requires the additional assumption
that E(eαX ) < ∞ for some α > 0. Note that even though all three formulas
give the same result for K ≥ 0, for K < 0 they are different: Theorem 3.2(a)
gives E(X) − K, formula (3.3) yields E(X), and formula (3.4) yields 0.
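Formula (3.3) is easy to test for $X \sim \exp(1)$, where $\hat\mu_X(u) = 1/(1 - iu)$ and $E(X - K)_+ = e^{-K}$ exactly. The quadrature below is our own sketch; the small-$u$ branch uses the limit of the integrand, which works out to $1 - K$ in this case:

```python
import cmath
import math

def stop_loss_exp(K, u_max=500.0, n=100000):
    """E(X - K)_+ for X ~ exp(1) via formula (3.3) (a sketch); the closed
    form e^{-K} should be reproduced by the quadrature."""
    h = u_max / n
    acc = 0.0
    for k in range(n + 1):
        u = k * h
        w = 1.0 if k in (0, n) else (4.0 if k % 2 else 2.0)
        if u < 1e-6:
            val = 1.0 - K      # limit of the integrand as u -> 0 (exp(1) case)
        else:
            mu = 1.0 / (1.0 - 1j * u)
            val = (cmath.exp(-1j * u * K) * (mu - 1.0) / (1j * u) ** 2).real
        acc += w * val
    return 0.5 + (h / 3.0) * acc / math.pi      # E X / 2 = 1/2 for exp(1)
```

Unlike (3.4), whose integrand decays only like $1/u$ and converges merely in the PV sense, the real part in (3.3) decays like $1/u^2$, which is what makes this naive truncation acceptable.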
It is interesting to reconcile the preceding results with Theorem 3.2. To
get (3.3), assume Theorem 3.2 holds, and write
$$\int_{-M}^{M} \hat g_3(-u + i\alpha)\,\hat\mu_X(u - i\alpha)\,du = -\int_{-M - i\alpha}^{M - i\alpha} \frac{e^{-izK}}{z^2}\,\hat\mu_X(z)\,dz = -\int_{-M - i\alpha}^{M - i\alpha} \frac{e^{-izK}}{z^2}\,(\hat\mu_X(z) - 1)\,dz - \int_{-M - i\alpha}^{M - i\alpha} \frac{e^{-izK}}{z^2}\,dz.$$
The change of variable $z = Mw$ shows that the last integral tends to 0 as
$M \to \infty$. The path of integration in the remaining integral can be pushed
up to the real axis, yielding (3.3) when $M$ tends to infinity (the pole at the
origin leaves $\pi E(X)$). To get (3.4), proceed similarly:
$$\frac{e^{-izK}}{z^2}\,\hat\mu_X(z) = \frac{e^{-izK}}{z^2}\,(\hat\mu_X(z) - 1 - iz\,E(X)) + \frac{e^{-izK}}{z^2}\,(1 + iz\,E(X)) = (A) + (B).$$
The integral of (A) tends to (3.4) as the path of integration is moved up to
the real axis (with no residue), while the integral of (B) over the segment
$(-M - i\alpha, M - i\alpha)$ tends to 0 as $M$ tends to infinity.
It is possible to iterate the procedure which led to (3.4). Each iteration
yields a new inversion formula of the form
$$E(X - K)_+ = \frac{1}{2\pi}\int_{-\infty}^{\infty} \frac{e^{-iuK}\,[\hat\mu_X(u) - 1 - iu\,E(X) - \cdots - (iu)^n\,E(X^n)/n!]}{(iu)^2}\,du$$
for $K > 0$ and $n = 2, 3, \dots$, but it is unclear whether there is any benefit in
using these formulas. (Observe that these are formulas for the higher order
derivatives of the ladder height densities, and that they vanish for $K < 0$.
Numerical problems may be experienced for $K$ close to 0, since for $n \ge 2$ the
theoretical value of the inversion integral involves the Dirac delta function and its
derivatives at $K = 0$.)
4 Examples
In the first two examples (compound Poisson/exponential, generalized Pareto)
there are closed form expressions for the expected payoffs as well as for the Fourier
and Mellin transforms. It is therefore possible to test the inversion formulas derived above against the exact expected payoffs. In the other examples
(compound Poisson/Pareto, compound Poisson/Pareto plus α-stable), there
are no closed form expressions for the expected payoffs we consider, and simulation is used to assess the performance of the Fourier inversion formulas.
(N.B. The numerical illustrations are incomplete.)
4.1 Compound Poisson/Exponential distribution
In this example, the explicit distribution is known, as well as both the Fourier
and Mellin transforms. We will show that this distribution is intimately
related to the hypergeometric functions ${}_0F_1$ and ${}_1F_1$. Recall that
$${}_0F_1(c; z) = \sum_{m=0}^{\infty} \frac{z^m}{m!\,(c)_m}, \qquad z \in \mathbb{C}, \quad -c \notin \mathbb{N},$$
and
$${}_1F_1(a, c; z) = \sum_{k=0}^{\infty} \frac{(a)_k\,z^k}{(c)_k\,k!}, \qquad z \in \mathbb{C}, \quad -c \notin \mathbb{N}.$$
The latter is known as the confluent hypergeometric function of the first
kind. It is known that (Lebedev, 1972, p. 267)
$$e^{-z}\,{}_1F_1(a, c; z) = {}_1F_1(c - a, c; -z).$$
Suppose that
$$S = \sum_{k=1}^{N} X_k, \qquad X_k \sim \exp(1), \quad N \sim \mathrm{Poisson}(\lambda).$$
First, the characteristic function of $S$ is
$$E(e^{iuS}) = \exp\left[\lambda\left(\frac{1}{1 - iu} - 1\right)\right] = {}_1F_1(1, 1; \lambda iu/(1 - iu)).$$
Next, the compound Poisson/exponential distribution is of mixed type, but
we may calculate the derivative of the distribution function explicitly for $x > 0$:
$$\frac{\partial}{\partial x}\,P\{S \le x\} = \sum_{n=1}^{\infty} \frac{e^{-\lambda}\lambda^n}{n!}\,\frac{x^{n-1}e^{-x}}{(n - 1)!} = \lambda e^{-\lambda - x} \sum_{m=0}^{\infty} \frac{(\lambda x)^m}{m!\,(m + 1)!} = \lambda e^{-\lambda - x}\,{}_0F_1(2; \lambda x).$$
Other authors have expressed this in terms of Bessel functions (probably because the name "Bessel" sounds better than "${}_0F_1$"), but one might argue that
hypergeometric functions are more natural here. Next, turn to expectations
of the payoffs $g_3$ and $g_4$: if $K > 0$,
$$E(S - K)_+ = \lambda e^{-\lambda} \int_K^{\infty} (x - K)\,e^{-x}\,{}_0F_1(2; \lambda x)\,dx,$$
$$E(K - S)_+ = K e^{-\lambda} + \lambda e^{-\lambda} \int_0^K (K - x)\,e^{-x}\,{}_0F_1(2; \lambda x)\,dx.$$
Finally, the Mellin transform of $S$ is, for $r > 0$,
$$E(S^r) = \sum_{n=1}^{\infty} \frac{\lambda^n e^{-\lambda}}{n!}\,E(X_1 + \cdots + X_n)^r = \sum_{n=1}^{\infty} \frac{\lambda^n e^{-\lambda}}{n!} \int_0^{\infty} \frac{x^{r + n - 1}e^{-x}}{\Gamma(n)}\,dx$$
$$= \lambda e^{-\lambda} \sum_{m=0}^{\infty} \frac{\lambda^m}{(m + 1)!}\,\frac{\Gamma(m + 1 + r)}{m!} = \lambda e^{-\lambda}\,\Gamma(1 + r) \sum_{m=0}^{\infty} \frac{\Gamma(m + 1 + r)}{\Gamma(1 + r)}\,\frac{\Gamma(2)}{\Gamma(m + 2)}\,\frac{\lambda^m}{m!}$$
$$= \lambda e^{-\lambda}\,\Gamma(1 + r)\,{}_1F_1(1 + r, 2; \lambda). \qquad (4.1)$$
By Kummer's transformation, we can thus write
$$E(S^r) = \lambda\,\Gamma(1 + r)\,{}_1F_1(1 - r, 2; -\lambda). \qquad (4.2)$$
Observe that the integral moments $E(S^k)$, $k = 1, 2, \dots$, form an infinite series
in (4.1), but a finite one in (4.2): if $k = 0, 1, \dots$,
$$E(S^k) = \lambda\,\Gamma(1 + k)\,{}_1F_1(1 - k, 2; -\lambda) = \lambda\,k! \sum_{j=0}^{k-1} \frac{(1 - k)_j}{(j + 1)!}\,\frac{(-\lambda)^j}{j!}.$$
We computed $E(1 - S)_+$, $\lambda = 1$, by conditioning on $N$ and also with the
Mellin inversion formula. The latter was quicker, the results identical.
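The conditioning computation mentioned here can be reproduced in two elementary ways: by summing over $N$ (each term reduces to Erlang distribution functions) and from the mixed-type density $\lambda e^{-\lambda - x}\,{}_0F_1(2; \lambda x)$ plus the atom $Ke^{-\lambda}$. The helper names and quadrature below are ours (a sketch):

```python
import math

def erlang_cdf(n, K):
    """P(Gamma(n,1) <= K) for integer n >= 1: 1 - e^{-K} sum_{j<n} K^j/j!."""
    return 1.0 - math.exp(-K) * sum(K ** j / math.factorial(j) for j in range(n))

def put_by_conditioning(K, lam, nmax=60):
    """E(K - S)_+ by conditioning on N, using
    E(K - Gamma(n,1))_+ = K F_n(K) - n F_{n+1}(K)."""
    total = math.exp(-lam) * K                # n = 0 term: S = 0 with prob. e^{-lam}
    for n in range(1, nmax):
        pn = math.exp(-lam) * lam ** n / math.factorial(n)
        total += pn * (K * erlang_cdf(n, K) - n * erlang_cdf(n + 1, K))
    return total

def hyp0f1(c, z, terms=60):
    """Power series of 0F1(c; z)."""
    s, t = 1.0, 1.0
    for m in range(1, terms):
        t *= z / (m * (c + m - 1.0))
        s += t
    return s

def put_by_density(K, lam, n=2000):
    """E(K - S)_+ = K e^{-lam} + lam e^{-lam} int_0^K (K-x) e^{-x} 0F1(2; lam x) dx,
    evaluated with a plain Simpson rule."""
    h = K / n
    acc = 0.0
    for k in range(n + 1):
        x = k * h
        w = 1.0 if k in (0, n) else (4.0 if k % 2 else 2.0)
        acc += w * (K - x) * math.exp(-x) * hyp0f1(2.0, lam * x)
    return K * math.exp(-lam) + lam * math.exp(-lam) * (h / 3.0) * acc
```

The two routes must agree to quadrature accuracy, since the mixed-type density is exactly the Poisson mixture of Erlang densities.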
4.2 Generalized Pareto
In this case there are closed form expressions, in terms of special functions,
for the expected payoffs E(K − S)+ and E(S − K)+ , as well as for the Fourier
and Mellin transforms of S.
For $a > 0$, let
$$B(a, b; y) = \int_0^y x^{a-1}(1 - x)^{b-1}\,dx.$$
This is the incomplete beta function. For $a, b > 0$, $B(a, b) = B(a, b; 1)$ is the
beta function.
If $\alpha, \beta > 0$, we write $Y \sim$ Generalized Pareto$(\alpha, \beta)$ if the density function of $Y$ is
$$f_Y(x) = \frac{1}{B(\alpha, \beta)}\,\frac{x^{\beta - 1}}{(1 + x)^{\alpha + \beta}}\,I_{\{x > 0\}}.$$
(The usual Pareto$(\alpha)$ is thus Generalized Pareto$(\alpha, 1)$.)
Then
$$B(\alpha, \beta)\,E(K - Y)_+ = \int_0^K (K - x)\,\frac{x^{\beta - 1}}{(1 + x)^{\alpha + \beta}}\,dx$$
$$= K \int_{1/(K+1)}^{1} y^{\alpha - 1}(1 - y)^{\beta - 1}\,dy - \int_{1/(K+1)}^{1} y^{\alpha - 2}(1 - y)^{\beta}\,dy$$
$$= K \int_0^{K/(K+1)} u^{\beta - 1}(1 - u)^{\alpha - 1}\,du - \int_0^{K/(K+1)} u^{\beta}(1 - u)^{\alpha - 2}\,du$$
$$= K\,B\!\left(\beta, \alpha; \frac{K}{K+1}\right) - B\!\left(\beta + 1, \alpha - 1; \frac{K}{K+1}\right).$$
Similarly,
$$B(\alpha, \beta)\,E(Y - K)_+ = (K + 1)\,B\!\left(\alpha - 1, \beta + 1; \frac{1}{K+1}\right) - K\,B\!\left(\alpha - 1, \beta; \frac{1}{K+1}\right).$$
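The incomplete-beta expression for the put can be verified against direct integration of the density. Both quadratures below are ours, and the parameter values $\alpha = 5$, $\beta = 3$, $K = 2$ are only an example:

```python
import math

def simpson(f, a, b, n=4000):
    """Composite Simpson rule on [a, b] (n even)."""
    h = (b - a) / n
    acc = f(a) + f(b)
    for k in range(1, n):
        acc += (4.0 if k % 2 else 2.0) * f(a + k * h)
    return acc * h / 3.0

def inc_beta(a, b, y):
    """Incomplete beta B(a, b; y) = int_0^y x^{a-1}(1-x)^{b-1} dx (a, b >= 1 here)."""
    return simpson(lambda x: x ** (a - 1.0) * (1.0 - x) ** (b - 1.0), 0.0, y)

# Left side: B(alpha, beta) E(K - Y)_+ computed directly from the density;
# right side: the incomplete-beta expression derived above.
alpha, beta, K = 5.0, 3.0, 2.0
lhs = simpson(lambda x: (K - x) * x ** (beta - 1.0) / (1.0 + x) ** (alpha + beta), 0.0, K)
rhs = K * inc_beta(beta, alpha, K / (K + 1.0)) - inc_beta(beta + 1.0, alpha - 1.0, K / (K + 1.0))
```

The check exercises exactly the change of variables $y = 1/(1 + x)$, $u = 1 - y$ used in the derivation.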
The Mellin transform of the Generalized Pareto$(\alpha, \beta)$ distribution is
$$E(Y^r) = \frac{B(\alpha - r, \beta + r)}{B(\alpha, \beta)} = \frac{\Gamma(\alpha - r)\,\Gamma(\beta + r)}{\Gamma(\alpha)\,\Gamma(\beta)},$$
while its Fourier transform is
$$E(e^{iuY}) = \frac{1}{B(\alpha, \beta)} \int_0^{\infty} \frac{e^{iux}\,x^{\beta - 1}}{(1 + x)^{\alpha + \beta}}\,dx = \frac{\Gamma(\alpha + \beta)}{\Gamma(\alpha)}\,\Psi(\beta, 1 - \alpha; -iu).$$
This characteristic function may be expressed in terms of the exponential integral function or the incomplete gamma functions (see Section 3.2.2).
As an illustration, we compute the excess-of-loss (XL) premium $E(X - K)_+$ if $X \sim$ Pareto$(\alpha)$, using a polynomial damping factor. Let $\beta = 3$ and
$\alpha = 5$. Then the XL payoff function and the Pareto claim severity density have
the following Fourier transforms (with $c = 1$), for $u \in \mathbb{R}$:
$$\widehat{g_3^{[-3]}}(u) = \frac{e^{iuK}\,[1 - iu(K + 1)]}{2(1 + K)} + iu\left(1 - \frac{iu(K + 1)}{2}\right)e^{-iu}\,E_1(-iu(K + 1)),$$
$$\widehat{\mu_X^{[3]}}(u) = \frac{5}{2}\left[1 + iu - u^2\,e^{-iu}\,E_1(-iu)\right].$$
Now the XL premium
$$E(X - K)_+ = \frac{1}{2\pi}\int_{-\infty}^{\infty} \widehat{g_3^{[-3]}}(-u)\,\widehat{\mu_X^{[3]}}(u)\,du$$
is obtained by evaluating the integral numerically. This can easily be done in Excel (see the
next section for further details).
4.3 Compound Poisson/Pareto
In this example and the next one there is no explicit formula for stop-loss
premiums. Our Fourier inversion formulas will be compared to simulation
results.
Consider a stop–loss (SL) premium on aggregate claims. Here different
i.i.d. random variables Xj , with a common distribution, represent individual
claim severities. These are summed to form a compound Poisson distribution:
$$S = \sum_{j=1}^{N} X_j, \qquad (4.3)$$
where N ∼ Poisson(λ). N is assumed independent of the individual claim
severities {Xj }. The compound Poisson variable S represents the aggregate
claims over some time interval.
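A small-scale version of the simulation benchmark used below can be sketched as follows. The parameter choices and sample size are ours (far below the 100,000,000 replications used in the paper); the Poisson sampler and Pareto inversion are standard techniques:

```python
import math
import random

def sample_S(lam, alpha, rng):
    """One draw of S = sum_{j=1}^N X_j: N ~ Poisson(lam), X_j ~ Pareto(alpha)."""
    n, p, thresh = 0, 1.0, math.exp(-lam)     # Knuth's Poisson sampler
    while True:
        p *= rng.random()
        if p <= thresh:
            break
        n += 1
    # Pareto(alpha) by inversion: survival (1+x)^{-alpha}, X = (1-U)^{-1/alpha} - 1
    return sum((1.0 - rng.random()) ** (-1.0 / alpha) - 1.0 for _ in range(n))

def sl_premium_mc(lam, alpha, K, n_paths=100000, seed=12345):
    """Monte Carlo SL premium E(S - K)_+ (a small-scale sketch of the benchmark)."""
    rng = random.Random(seed)
    acc = 0.0
    for _ in range(n_paths):
        s = sample_S(lam, alpha, rng)
        if s > K:
            acc += s - K
    return acc / n_paths
```

At $K = 0$ the premium equals $E\,S = \lambda/(\alpha - 1)$, which gives a quick consistency check of the sampler before it is compared to the inversion integrals.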
Suppose we want to compute an SL premium for the aggregate claims
variable $S$:
$$E(S - K)_+ = \int_0^{\infty} g(x)\,d\mu_S(x),$$
where $\mu_S$ is the distribution of $S$ and $g(x) = (x - K)_+$. As before, this can
be written as
$$E(S - K)_+ = \int_0^{\infty} g^{[-\beta]}(x)\,d\mu_S^{[\beta]}(x),$$
where $g^{[-\beta]}(x) = (1 + x)^{-\beta}\,g(x)$ and $\mu_S^{[\beta]}(dx) = (1 + x)^{\beta}\,\mu_S(dx)$.
By Theorem 3.3, the SL premium is also equal to
$$E(S - K)_+ = \frac{1}{2\pi}\int_{-\infty}^{\infty} \widehat{g^{[-\beta]}}(-u)\,\widehat{\mu_S^{[\beta]}}(u)\,du. \qquad (4.4)$$
We need to calculate
$$\widehat{\mu_S^{[\beta]}}(u) = \sum_{j=0}^{\beta} \binom{\beta}{j}\,(-i)^j\,\frac{d^j}{du^j}\,e^{\lambda[\hat\mu_X(u) - 1]}, \qquad u \in \mathbb{R}. \qquad (4.5)$$
The derivatives $\hat\mu_X^{(j)}$ take the form
$$\hat\mu_X^{(j)}(u) = \int_0^{\infty} e^{iux}\,\frac{\alpha\,(ix)^j}{(1 + x)^{\alpha + 1}}\,dx, \qquad u \in \mathbb{R}, \quad j = 0, 1, \dots$$
Using integration by parts, they can be written in terms of the exponential
integral function as in Section 3.2.2. The same section gives $\widehat{g^{[-3]}}$, while, for
$u \in \mathbb{R}$,
$$\widehat{\mu_S^{[3]}}(u) = \hat\mu_S(u) - 3i\,\hat\mu_S'(u) - 3\,\hat\mu_S''(u) + i\,\hat\mu_S'''(u)$$
$$= e^{\lambda[\hat\mu_X(u) - 1]}\Big(1 - 3i\lambda\,\hat\mu_X'(u) - 3\big[\lambda\,\hat\mu_X''(u) + \lambda^2(\hat\mu_X'(u))^2\big] + i\big[\lambda\,\hat\mu_X'''(u) + 3\lambda^2\,\hat\mu_X'(u)\,\hat\mu_X''(u) + \lambda^3(\hat\mu_X'(u))^3\big]\Big).$$
Table 1 lists the results for different values of the Poisson parameter $\lambda$
and the retention limit $K$. These are compared with simulated SL premiums
based on 100,000,000 replicated samples.
Table 2 lists additional SL premiums, also computed from (4.4), obtained by
varying the Poisson parameter $\lambda$ and the Pareto parameter $\alpha$ so that the
expected value of $S$ is always 1. Again, these are compared with the simulated
SL premiums obtained from 100,000,000 replicated samples.
These numbers were obtained with Romberg's numerical integration method,
coded in Visual Basic and implemented in Excel. The program can be
downloaded at http://www.mathstat.concordia.ca .
Table 1: SL premiums - compound Poisson/Pareto

               λ = 1                λ = 2                λ = 3
  K       Simulated  Integral  Simulated  Integral  Simulated  Integral
  0.25    0.1364     0.1342    0.3224     0.3215    1.0096     1.0095
  0.5     0.0281     0.0283    0.2068     0.2063    0.7938     0.7937
  1       0.0776     0.0764    0.0863     0.0862    0.4617     0.4614

Table 2: SL premiums - compound Poisson/Pareto

               λ = 1                 λ = 2                 λ = 3
  K       Simulated  Integral   Simulated  Integral   Simulated  Integral
  1       0.305      0.3049     0.2703     0.2701     0.2448     0.2446
  2       0.079      0.07894    0.0526     0.05226    0.036      0.0358
  3       0.0213     0.02126    0.0098     0.00971    0.0046     0.00453

4.4 Compound Poisson/Pareto plus α-stable
Now let us consider the SL premium $E(Z - K)_+$ for a more general aggregate
claims distribution,
$$Z = S + J,$$
where $S$ is compound Poisson,
$$S = \sum_{j=1}^{N} Y_j,$$
the claims $Y_j$ having density
$$f_Y(y) = \frac{\alpha}{y^{1 + \alpha}}\,I_{\{y > 1\}}.$$
(With our previous notation, this means $Y_j - 1 \sim$ Pareto$(\alpha)$.) Let $J$ have
an $\alpha$-stable distribution with Lévy measure
$$\nu_J(dy) = \frac{\lambda}{y^{1 + \alpha}}\,I_{(0, \infty)}(y)\,dy.$$
Furrer (1998) considered a risk model under which aggregate claims had the
same distribution as $S + J$.
It is known that
$$\hat\mu_Z(u) = e^{-\Psi_Z(u)},$$
where $\Psi_Z$ is the Lévy-Khintchine exponent of $Z$, given by
$$\Psi_Z(u) = \int_0^{\infty} \left(1 - e^{iux} + iux\right)\nu_Z(dx) = \sigma^{\alpha}|u|^{\alpha}\left[1 - i\,\mathrm{sign}(u)\tan(\alpha\pi/2)\right],$$
$\sigma$ being a scale parameter of $Z$ ($\sigma$ is a function of $\lambda$, see Furrer (1998)).
References
[1] Abramowitz, M., and Stegun, I. (1970). Handbook of Mathematical Functions: With Formulas, Graphs and Mathematical Tables. Dover.
[2] Apostol, T.M. (1974). Mathematical Analysis, Second Edition. Addison-Wesley, Reading, Mass.
[3] Bakshi, G., and Madan, D.B. (2000). Spanning and Derivative-security
Valuation. J. of Financial Economics 55: 205-238.
[4] Carr, P., and Madan, D.B. (1999). Option valuation using the fast Fourier
transform. J. Computational Finance 2: 61–73.
[5] Furrer, H.J. (1998). Risk processes perturbed by α-stable Lévy motion.
Scand. Actuarial Journal 1998: 59–74.
[6] Heston, S.L. (1993). A closed-form solution for options with stochastic
volatility with application to bond and currency options. Rev. Fin. Studies
6: 327-343.
[7] Kendall, M., and Stuart, A. (1977). The Advanced Theory of Statistics,
Fourth Edition. Griffin, London.
[8] Lebedev, N.N. (1972). Special Functions and their Applications. Dover,
New York.
[9] Lee, R.W. (2004). Option pricing by transform methods: extensions, unification, and error control. Journal of Computational Finance 7:51-86.
[10] Lewis, A.L. (2001). A simple option formula for general jump–diffusion
and other exponential Lévy processes. Unpublished. OptionCity.net publications: http://optioncity.net/pubs/ExpLevy.pdf.
[11] Lukacs, E. (1970). Characteristic Functions, Fourth Edition. Griffin,
London.
[12] Malliavin, P. (1995). Integration and Probability. Springer Verlag, New
York.
[13] Raible, S. (2000). Lévy Processes in Finance: Theory, Numerics, and
Empirical Facts. PhD Dissertation, Faculty of Mathematics, University of
Freiburg, Germany.
[14] Rudin, W. (1987). Real and Complex Analysis, Third Edition. McGraw–
Hill, New York.
[15] Titchmarsh, E.C. (1975). Introduction to the Theory of Fourier Integrals.
Oxford University Press.
A The Fourier transform
In this appendix we state some essential definitions and results related to
Fourier transforms. For $1 \le p < \infty$, let $L^p$ denote the space of measurable
functions $h : \mathbb{R} \to \mathbb{C}$ such that $\int_{\mathbb{R}} |h(x)|^p\,dx < \infty$; the space $L^{\infty}$ consists of
the functions from $\mathbb{R}$ to $\mathbb{C}$ which are essentially bounded. Let $\mu$ be a signed
measure on $(\mathbb{R}, \mathcal{B}(\mathbb{R}))$ with $|\mu| < \infty$, and define its Fourier transform as
$$\hat\mu(u) = \int_{-\infty}^{\infty} e^{iux}\,\mu(dx).$$
Since $|e^{iux}| = 1$, this is an ordinary (that is, proper) Lebesgue integral. If
$h \in L^1$, then the Fourier transform of $h$ is defined as
$$\hat h(u) = \int_{-\infty}^{\infty} e^{iux}\,h(x)\,dx, \qquad u \in \mathbb{R}.$$
Once again, because $h$ is assumed integrable, the above is an ordinary integral. The convolution of a real function $h$ with a signed measure $\mu$ is defined
as
$$(\tau_{\mu} h)(x) = \int_{-\infty}^{\infty} h(x - y)\,\mu(dy)$$
(when the integral exists), and the convolution of two real functions $h$, $k$ as
$$(h * k)(x) = \int_{-\infty}^{\infty} h(x - y)\,k(y)\,dy$$
(when the integral exists). It is known (Malliavin, 1995, p. 114) that:
Theorem A.1 If $\mu$ is a signed measure with $|\mu| < \infty$, and $h \in L^p$ (for
some $1 \le p \le \infty$), then $\int h(x - y)\,\mu(dy)$ converges almost everywhere and
$\tau_{\mu} h \in L^p$.
A consequence is that if $h, k \in L^1$, then $\int h(x - y)\,k(y)\,dy$ converges almost
everywhere and $h * k \in L^1$. For the next result, see Malliavin (1995, p. 107).
Theorem A.2 If $\mu$ is a signed measure with $|\mu| < \infty$ and $h \in L^1$, then
$\widehat{\tau_{\mu} h} = \hat h\,\hat\mu$.
A consequence is that if $h, k \in L^1$, then $\widehat{h * k} = \hat h\,\hat k$.
Theorem A.3 (Malliavin, 1995, pp. 130, 134)
(a) If $h, \hat h \in L^1$, then
$$h(x) = \frac{1}{2\pi}\int_{-\infty}^{\infty} e^{-iux}\,\hat h(u)\,du \qquad \text{for almost all } x \in \mathbb{R},$$
and $h, \hat h \in L^p$ for all $1 \le p \le \infty$. The right-hand side is a continuous
function of $x$, and so the above identity holds for all $x$ if $h$ is continuous.
(b) If $h, \hat h \in L^1$ and $\mu$ is a signed measure with $|\mu| < \infty$, then Parseval's
identity holds:
$$\int_{-\infty}^{\infty} h(x)\,\mu(dx) = \frac{1}{2\pi}\int_{-\infty}^{\infty} \hat h(u)\,\hat\mu(-u)\,du.$$
This is a fairly restrictive result. For instance, the Fourier transform of the
normal density function with mean $m$ and variance $s^2$ is
$$\hat h_1(u) = e^{ium - s^2 u^2/2},$$
which is in $L^1$; as Theorem A.3 says, both $h_1$ and $\hat h_1$ are uniformly bounded
almost everywhere. The same holds for the double exponential density. However, the Fourier transform of the exponential density function with mean
$1/\lambda$ is
$$\hat h_2(u) = \frac{\lambda}{\lambda - iu},$$
which is not in $L^1$, and Theorem A.3 does not apply. A less restrictive
inversion theorem is thus required for the applications considered in this
paper.
One extension of Theorem A.3 is Plancherel's Theorem (see Malliavin,
1995, p. 132, and Rudin, 1987, p. 186), which applies to square-integrable
functions. (We do not need Plancherel's Theorem in this paper, but other
authors have applied it to derivative pricing.)
Theorem A.4 The Fourier transform has an extension to functions in $L^2$
which satisfies:
(a) For every $h \in L^2$, $\|h\|_2^2 = (2\pi)^{-1}\|\hat h\|_2^2$.
(b) If
$$\phi_M(u) = \int_{-M}^{M} h(x)\,e^{ixu}\,dx, \qquad \psi_M(x) = \frac{1}{2\pi}\int_{-M}^{M} \hat h(u)\,e^{-ixu}\,du,$$
then $\|\phi_M - \hat h\|_2 \to 0$ and $\|\psi_M - h\|_2 \to 0$ as $M \to \infty$.
For a function $h$ which is in $L^2$ but not in $L^1$, we see that the (extension
of the) Fourier transform is the limit of $\phi_M(u)$ as $M$ tends to infinity. This
type of integral, which may exist even though the integrand is not absolutely
integrable over $\mathbb{R}$, is called a "principal value" integral (or "Cauchy principal
value"). We will use the notation
$$PV \int_{-\infty}^{\infty} k(x)\,dx = \lim_{M \to \infty} \int_{-M}^{M} k(x)\,dx,$$
when the limit exists.
The inversion theorem needed for our results is the following. It only
requires that h ∈ L1 .
Theorem A.5 (Apostol, 1974, p. 324) Suppose h is a real function which satisfies the following conditions:
(a) h ∈ L¹ and
(b) either (b1) or (b2) holds:
(b1) h(x+) and h(x−) both exist and the integrals below are finite for some ε > 0:
\[
\int_{0}^{\varepsilon} \frac{h(x+t) - h(x+)}{t}\,dt,
\qquad
\int_{0}^{\varepsilon} \frac{h(x-t) - h(x-)}{t}\,dt;
\]
(b2) h(x) has bounded variation in some open neighborhood of x. (This implies that h(x+) and h(x−) both exist. A sufficient condition for a function to have bounded variation is having a derivative.)
Then
\[
\frac{1}{2}\,[h(x+) + h(x-)] = \frac{1}{2\pi}\,\mathrm{PV}\int_{-\infty}^{\infty} e^{-iux}\,\hat{h}(u)\,du.
\]
(In the case where ĥ ∈ L¹ we know that the last integral converges absolutely, and is therefore an ordinary Lebesgue integral.)
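As a numerical illustration of Theorem A.5 (an added sketch, not part of the paper), the Exp(1) density h(x) = e⁻ˣ, x > 0, can be recovered from its Fourier transform ĥ(u) = 1/(1 − iu), which is not in L¹, by truncating the principal-value integral at a finite M; at the jump x = 0 the inversion returns the average (h(0+) + h(0−))/2 = 1/2. The truncation level M and grid size below are arbitrary choices:

```python
import numpy as np

def pv_inversion(x, M=200.0, n=400001):
    """Truncated principal-value inversion (Theorem A.5) for the Exp(1)
    density: h^(u) = 1/(1 - iu) is not absolutely integrable, but
    (1/2pi) * int_{-M}^{M} e^{-iux} h^(u) du converges as M grows."""
    u = np.linspace(-M, M, n)
    f = np.exp(-1j * u * x) / (1.0 - 1j * u)
    du = u[1] - u[0]
    # composite trapezoidal rule over [-M, M]
    integral = (f.sum() - 0.5 * (f[0] + f[-1])) * du
    return integral.real / (2.0 * np.pi)

# pv_inversion(1.0) approximates e^{-1}; pv_inversion(0.0) approximates
# the midpoint value 1/2 at the density's jump, as the theorem predicts.
```

The slow (order 1/M) convergence of the truncated integral is exactly the price paid for ĥ not being in L¹.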
We are now able to extend Parseval's identity (Theorem A.3(b)). (Another reference for Parseval's theorem is Titchmarsh, 1975, Theorem 39.) Let µ be a signed measure with |µ| < ∞ and h ∈ L¹, and suppose that the convolution
\[
y \mapsto (\tau_\mu h)(y) = \int_{-\infty}^{\infty} h(y - x)\,\mu(dx)
\]
satisfies assumption (b) of Theorem A.5, and that moreover (τ_µ h)(y) is continuous at y = 0. It can be seen that τ_µ h ∈ L¹ (from Theorem A.1). Theorem A.5 then yields
\[
(\tau_\mu h)(0) = \int_{-\infty}^{\infty} h(-x)\,\mu(dx) = \frac{1}{2\pi}\,\mathrm{PV}\int_{-\infty}^{\infty} \hat{h}(u)\,\hat{\mu}(u)\,du.
\]
To get Parseval's identity, rewrite this by replacing h(x) with g(−x), after noting that
\[
\int_{-\infty}^{\infty} e^{iux} g(-x)\,dx = \hat{g}(-u),
\]
to get
\[
\int_{-\infty}^{\infty} g(x)\,\mu(dx) = \frac{1}{2\pi}\,\mathrm{PV}\int_{-\infty}^{\infty} \hat{g}(-u)\,\hat{\mu}(u)\,du. \tag{A.1}
\]
We state the result obtained as a theorem.
Theorem A.6 Let µ_X be the distribution of a variable X (that is, µ_X is the measure on R induced by the distribution function F_X(x) = P{X ≤ x}). Suppose that (i) g ∈ L¹, (ii) the function y ↦ E g(y + X) is continuous at y = 0, and (iii) g satisfies condition (b) of Theorem A.5. Then
\[
E\,g(X) = \int_{-\infty}^{\infty} g(x)\,\mu_X(dx) = \frac{1}{2\pi}\,\mathrm{PV}\int_{-\infty}^{\infty} \hat{g}(-u)\,\hat{\mu}_X(u)\,du.
\]
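To illustrate Theorem A.6 numerically (an addition, not part of the paper): take g(x) = e^{−x²/2}, for which ĝ(u) = √(2π)·e^{−u²/2}, and X ∼ N(m, s²), so that µ̂_X(u) = e^{ium − s²u²/2}. The right side of the theorem can then be evaluated by quadrature and compared with the closed form E g(X) = (1 + s²)^{−1/2} e^{−m²/(2(1+s²))}. The cutoff U below is an arbitrary choice (the integrand decays like a Gaussian):

```python
import numpy as np

def parseval_expectation(m, s, U=50.0, n=200001):
    """Right-hand side of Theorem A.6 for g(x) = exp(-x^2/2) and
    X ~ N(m, s^2): (1/2pi) * int g^(-u) mu^_X(u) du (trapezoid rule)."""
    u = np.linspace(-U, U, n)
    g_hat = np.sqrt(2.0 * np.pi) * np.exp(-u**2 / 2.0)   # g^(-u) = g^(u)
    mu_hat = np.exp(1j * u * m - (s * u)**2 / 2.0)       # char. function of X
    f = g_hat * mu_hat
    du = u[1] - u[0]
    return ((f.sum() - 0.5 * (f[0] + f[-1])) * du).real / (2.0 * np.pi)

def exact_expectation(m, s):
    """Closed form of E exp(-X^2/2) for X ~ N(m, s^2)."""
    return np.exp(-m**2 / (2.0 * (1.0 + s**2))) / np.sqrt(1.0 + s**2)
```

Here no principal value is needed, since both ĝ and µ̂_X are absolutely integrable; the PV in the theorem only matters for slowly decaying transforms such as those of the payoffs treated in the proofs below.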
B Proofs of theorems
Proof of Theorem 2.2. Let g(x) = I_{(a,a+h]}(x). Then
\[
E\,g(X) = F_X(a+h) - F_X(a)
\]
and the conditions of Lemmas 2.1 and 2.2(a) are satisfied. We may thus apply Theorem 2.1 with α = 0. The result follows from
\[
\hat{g}(u) = \frac{e^{iu(a+h)} - e^{iua}}{iu}.
\]
Proof of Theorem 2.3. This formula is equivalent to
\[
1 - F_X(b) = \frac{1}{2} + \frac{1}{2\pi}\int_{0}^{\infty} \frac{1}{iu}\left[e^{-iub}\,\hat{\nu}_X(u) - e^{iub}\,\hat{\nu}_X(-u)\right] du.
\]
We prove the latter for b = 0; a translation X → X − b then finishes the proof.
If we let g(x) = I_{(0,∞)}(x), then ĝ(u) = ∫₀^∞ e^{iux} dx does not exist for real u, but, if Im(z) > 0,
\[
\hat{g}(z) = \int_{0}^{\infty} e^{izx}\,dx = -\frac{1}{iz}.
\]
We thus apply Theorem 2.1: temporarily assume there exists α > 0 such that E(e^{αX}) < ∞; then
\[
1 - F_X(0) = E\,g(X) = \frac{1}{2\pi}\,\mathrm{PV}\int_{-\infty}^{\infty} \hat{g}(-u + i\alpha)\,\hat{\nu}_X(u - i\alpha)\,du.
\]
Let
\[
h(z) = \hat{g}(-z)\,\hat{\nu}_X(z) = \frac{E(e^{izX})}{iz}.
\]
The function h is analytic in {z : −α < Im(z) < 0} and has a pole at z = 0. Hence,
\[
\int_{C_{M,\varepsilon}} h(z)\,dz = 0,
\]
where C_{M,ε} is the closed path in Figure 1. We know that E g(X) equals the integral of h(z) on the line {z : Im(z) = −α}. We also know that, for 0 ≤ y ≤ α,
\[
\left| E\,e^{i(M - iy)X} \right| \le E(e^{yX}) \le P\{X \le 0\} + E\!\left(e^{\alpha X} I_{\{X > 0\}}\right) = C < \infty.
\]
Hence, on the segment {z : Re(z) = M, −α ≤ Im(z) ≤ 0},
\[
|h(z)| \le \frac{C}{|iz|} \le \frac{C}{M},
\]
and so
\[
\left| \int_{M - i\alpha}^{M} h(z)\,dz \right| \le \frac{\alpha C}{M} \to 0
\]
as M → ∞. In the same way,
\[
\int_{-M - i\alpha}^{-M} h(z)\,dz \to 0
\]
as M → ∞. We conclude that
\[
\mathrm{PV}\int_{-\infty}^{\infty} \hat{g}(-u + i\alpha)\,\hat{\nu}_X(u - i\alpha)\,du = \lim_{M\to\infty} \int_{L_{M,\varepsilon}} h(z)\,dz,
\]
where L_{M,ε} is the path going along the real axis from −M to −ε, then around the half-circle R_ε (Figure 1), then on the real axis from ε to M.
Next, let ε → 0+. The integral over the half-circle in the path L_{M,ε} is (since dz = εie^{iθ} dθ)
\[
\int_{-\pi}^{0} h(\varepsilon e^{i\theta})\,\varepsilon i e^{i\theta}\,d\theta = \int_{-\pi}^{0} \hat{\nu}_X(\varepsilon e^{i\theta})\,d\theta \to \pi \quad \text{as } \varepsilon \to 0^+.
\]
This implies
\begin{align*}
1 - F_X(0) &= \frac{1}{2} + \lim_{M\to\infty} \frac{1}{2\pi}\int_{0}^{M} [h(u) + h(-u)]\,du \\
&= \frac{1}{2} + \lim_{M\to\infty} \frac{1}{2\pi}\int_{0}^{M} \frac{1}{iu}\left[E(e^{iuX}) - E(e^{-iuX})\right] du \\
&= \frac{1}{2} + \lim_{M\to\infty} \frac{1}{\pi}\int_{0}^{M} \frac{1}{u}\,E[\sin(uX)]\,du. \tag{B.1}
\end{align*}
If we then replace X by X − b, we find
\[
E(e^{iu(X-b)}) = e^{-iub}\,E(e^{iuX}),
\]
and so
\[
1 - F_X(b) = 1 - F_{X-b}(0) = \frac{1}{2} + \lim_{M\to\infty} \frac{1}{2\pi}\int_{0}^{M} \frac{1}{iu}\left[e^{-iub}\,\hat{\nu}_X(u) - e^{iub}\,\hat{\nu}_X(-u)\right] du.
\]
If E(e^{αX}) < ∞ for some α > 0, then we are finished. If not, then consider X^a = X ∧ a, where a > 0. Formula (B.1) holds for X^a, and since P{X^a ≤ 0} = P{X ≤ 0},
\[
1 - F_X(0) = 1 - F_{X^a}(0) = \frac{1}{2} + \lim_{M\to\infty} \frac{1}{\pi}\int_{0}^{M} \frac{1}{u}\,E[\sin(uX^a)]\,du.
\]
The result is proved if it is shown that
\[
\lim_{M\to\infty}\left[\int_{0}^{M} \frac{1}{u}\,E[\sin(uX)]\,du - \int_{0}^{M} \frac{1}{u}\,E[\sin(uX^a)]\,du\right] = 0.
\]
The expression in square brackets is equal to
\[
E\!\left[\left(\int_{0}^{MX} \frac{\sin y}{y}\,dy - \int_{0}^{Ma} \frac{\sin y}{y}\,dy\right) I_{\{X > a\}}\right]
= E\!\left[\int_{Ma}^{MX} \frac{\sin y}{y}\,dy\; I_{\{X > a\}}\right],
\]
which tends to 0 as M → ∞, by dominated convergence.
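As a numerical sanity check of the formula just proved (an added sketch, not part of the paper), the survival function of an Exp(λ) variable can be recovered from its characteristic function ν̂_X(u) = λ/(λ − iu). Since e^{iub} ν̂_X(−u) is the complex conjugate of e^{−iub} ν̂_X(u), the bracketed term in Theorem 2.3 reduces to (2/u)·Im[e^{−iub} ν̂_X(u)], which gives the quadrature below; the cutoff U is an arbitrary truncation of the improper integral:

```python
import numpy as np

def survival(char_fn, b, U=400.0, n=400_001):
    """1 - F_X(b) via Theorem 2.3. The integrand is written as
    Im[e^{-iub} nu^_X(u)] / u; truncated trapezoidal quadrature."""
    u = np.linspace(1e-9, U, n)          # start just above the u = 0 limit
    f = (np.exp(-1j * u * b) * char_fn(u)).imag / u
    du = u[1] - u[0]
    integral = (f.sum() - 0.5 * (f[0] + f[-1])) * du
    return 0.5 + integral / np.pi

lam = 1.0
cf_exp = lambda u: lam / (lam - 1j * u)   # characteristic function of Exp(lam)
# survival(cf_exp, b) should be close to exp(-lam * b) for b >= 0.
```

The integrand has a finite limit at u = 0 and decays like 1/u², so the truncated integral converges quickly; for heavier-tailed transforms a larger U or an oscillatory-quadrature routine would be needed.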
Proof of Theorem 3.1. (a) Let K = e^c. First, assume that P{S = 0} = 0. If g(x) = (e^c − e^x)_+ and z ∈ C,
\begin{align*}
\hat{g}(z) &:= \int_{-\infty}^{c} e^{izx}(e^c - e^x)\,dx \\
&= \lim_{M\to\infty}\int_{-M}^{c} \left(e^{izx + c} - e^{(iz+1)x}\right) dx \\
&= \lim_{M\to\infty}\left[\,e^c\,\frac{e^{izx}}{iz}\Big|_{-M}^{c} - \frac{e^{(iz+1)x}}{iz+1}\Big|_{-M}^{c}\,\right] \\
&= e^{(iz+1)c}\left(\frac{1}{iz} - \frac{1}{iz+1}\right) + \lim_{M\to\infty}\left[\frac{e^{-(iz+1)M}}{iz+1} - \frac{e^{-izM + c}}{iz}\right].
\end{align*}
The limit exists, and equals 0, if and only if Im(z) < 0. Hence,
\[
\hat{g}(z) = \frac{e^{(iz+1)c}}{iz(iz+1)}, \qquad \mathrm{Im}(z) < 0.
\]
(This formula is in Lewis (2001).)
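As a quick check of this transform (an added illustration, not in the paper), ĝ can be evaluated by direct numerical integration at a point with Im(z) < 0 and compared with the closed form e^{(iz+1)c}/(iz(iz+1)). The values c = 0 (so K = 1) and z = 1 − 0.5i below are arbitrary test choices:

```python
import numpy as np

# Numerical check of g^(z) = e^{(iz+1)c} / (iz(iz+1)), Im(z) < 0,
# where g(x) = (e^c - e^x)_+ ; here c = 0 and z = 1 - 0.5i.
c = 0.0
z = 1.0 - 0.5j

x = np.linspace(-40.0, c, 400001)                 # payoff vanishes for x > c
f = np.exp(1j * z * x) * (np.exp(c) - np.exp(x))  # integrand e^{izx} g(x)
dx = x[1] - x[0]
numeric = (f.sum() - 0.5 * (f[0] + f[-1])) * dx   # trapezoidal rule
closed_form = np.exp((1j * z + 1.0) * c) / (1j * z * (1j * z + 1.0))
```

Since Im(z) = −0.5 < 0, the factor |e^{izx}| = e^{0.5x} decays as x → −∞, so truncating the integral at x = −40 loses only a negligible tail.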
Let h(z) = ĝ(−z)µ̂_X(z). We need to restrict z to Im(z) > 0 for ĝ(−z) to exist, and therefore we need to assume that E(e^{αX}) exists for some α < 0. This proves the first formula in (a), if P{S = 0} = 0.
If P{S = 0} > 0, then define a new variable S* with distribution
\[
P\{S^* \in A\} = \frac{P\{S \in A,\; S > 0\}}{P\{S > 0\}}.
\]
Then
\[
E(K - S)_+ = K\,P\{S = 0\} + P\{S > 0\}\,E(K - S^*)_+,
\]
which yields the result, since
\[
\hat{\mu}_{X^*}(u) = \frac{\hat{\mu}_X(u)}{P\{S > 0\}},
\]
where X* = log S*. The second formula in (a) follows from the usual relationship y_+ − (−y)_+ = y, with y = S − K.
Part (b) is obtained by first assuming that P{S = 0} = 0 and that there exists α < 0 such that E(S^α) < ∞. The first formula in part (a) then holds. The function h(z) is analytic in the upper complex plane, except for a simple pole at the origin. We have
\[
E(K - S)_+ = \frac{1}{2\pi}\int_{-\infty - i\alpha}^{\infty - i\alpha} h(z)\,dz,
\]
where the path of integration is the line {z : Im(z) = −α}. For 0 < ε < M, define a closed path of integration C_{M,ε} as in Figure 2. The integral of h(z) along C_{M,ε} is 0.
It is easy to see that
\[
\lim_{M\to\infty}\int_{-M}^{-M - i\alpha} h(z)\,dz = \lim_{M\to\infty}\int_{M}^{M - i\alpha} h(z)\,dz = 0,
\]
and so
\[
\mathrm{PV}\int_{-\infty}^{\infty} h(u - i\alpha)\,du = \int_{R_\varepsilon} h(z)\,dz + \left(\int_{-\infty}^{-\varepsilon} + \int_{\varepsilon}^{\infty}\right) h(z)\,dz,
\]
where R_ε is the half-circle around the origin defined above. We find
\[
\lim_{\varepsilon\to 0^+} \int_{R_\varepsilon} h(z)\,dz
= \lim_{\varepsilon\to 0^+} \int_{\pi}^{0} \frac{e^{(1 - i\varepsilon e^{i\theta})c}\,\hat{\mu}_X(\varepsilon e^{i\theta})}{(-i\varepsilon e^{i\theta})(1 - i\varepsilon e^{i\theta})}\; i\varepsilon e^{i\theta}\,d\theta
= K\pi.
\]
Hence,
\[
E(e^c - e^X)_+ = \frac{K}{2} + \frac{1}{2\pi}\int_{0}^{\infty} [h(u) + h(-u)]\,du.
\]
Since h(u) + h(−u) = 2 Re[h(u)], we thus have
\[
E(K - e^X)_+ = \frac{K}{2} + \frac{1}{\pi}\int_{0}^{\infty} \mathrm{Re}[\hat{g}(-u)\,\hat{\mu}_X(u)]\,du. \tag{B.2}
\]
This formula was obtained under the assumption that there exists α < 0 such that E(e^{αX}) < ∞. If this is not the case, then consider
\[
X^a = X \vee (-a),
\]
for a > 0. As a → ∞, E(e^{iuX^a}) → E(e^{iuX}) uniformly in u ∈ R. Since
\[
|\hat{g}(u)| \sim \frac{e^c}{u^2},
\]
we find that
\[
E(K - e^X)_+ = \lim_{a\to\infty} E(K - e^{X^a})_+ = \frac{K}{2} + \frac{1}{\pi}\int_{0}^{\infty} \mathrm{Re}[\hat{g}(-u)\,\hat{\mu}_X(u)]\,du
\]
by dominated convergence. Finally, (B.2) holds for all X.
The last formula may be proved another way. The function h(z) may be rewritten as
\[
h(z) = e^c\,\frac{e^{-izc}\,\hat{\nu}_X(z)}{(-iz)(1 - iz)}.
\]
Apart from the factor e^c, this is the Fourier transform of the convolution of an exponential density with the law of X. This suggests proceeding as follows:
\[
E(e^c - e^X)_+ = e^c \int_{-\infty}^{c} (1 - e^{x - c})\,d\nu_X(x) = e^c\,P\{X + G \le c\},
\]
if G ∼ exp(1) is independent of X. By Theorem 2.3,
\begin{align*}
e^c\,P\{X + G \le c\}
&= \frac{e^c}{2} + \frac{e^c}{2\pi}\int_{0}^{\infty} \frac{1}{iu}\left[e^{iuc}\,\frac{\hat{\nu}_X(-u)}{1 + iu} - e^{-iuc}\,\frac{\hat{\nu}_X(u)}{1 - iu}\right] du \\
&= \frac{K}{2} + \frac{1}{2\pi}\int_{0}^{\infty} [h(u) + h(-u)]\,du,
\end{align*}
which is the same as (B.2). Finally, the formulas in (b) are found by taking into account the cases where P{S = 0} > 0, as in the proof of (a).
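Formula (B.2) can also be checked numerically (an added sketch, not part of the paper) in the lognormal case X ∼ N(m, s²), where the undiscounted put value E(K − e^X)_+ has the Black-Scholes-type closed form K Φ(−d₂) − e^{m+s²/2} Φ(−d₁), with d₂ = (m − c)/s and d₁ = d₂ + s. The parameter values in the comments are arbitrary test choices:

```python
import math
import numpy as np

def put_via_B2(K, m, s, U=60.0, n=600001):
    """E(K - e^X)_+ for X ~ N(m, s^2) via formula (B.2):
    K/2 + (1/pi) * int_0^inf Re[g^(-u) mu^_X(u)] du (trapezoid rule)."""
    c = math.log(K)
    u = np.linspace(1e-6, U, n)           # integrand has a finite u -> 0 limit
    g_hat = np.exp((-1j * u + 1.0) * c) / ((-1j * u) * (-1j * u + 1.0))
    mu_hat = np.exp(1j * u * m - (s * u) ** 2 / 2.0)
    f = (g_hat * mu_hat).real
    du = u[1] - u[0]
    return K / 2.0 + ((f.sum() - 0.5 * (f[0] + f[-1])) * du) / math.pi

def put_closed_form(K, m, s):
    """Closed form of E(K - e^X)_+ for X ~ N(m, s^2)."""
    Phi = lambda x: 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))
    c = math.log(K)
    d2 = (m - c) / s
    d1 = d2 + s
    return K * Phi(-d2) - math.exp(m + s * s / 2.0) * Phi(-d1)

# With K = 1, m = -s^2/2 and s = 0.3 (an at-the-money put with zero drift),
# both functions agree with the Black-Scholes value Phi(0.15) - Phi(-0.15).
```

The Gaussian factor µ̂_X makes the integrand decay very fast, so a modest truncation level U suffices; heavier-tailed log-price models would need a larger range.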
[Figure 1: the closed path C_{M,ε} for the proof of Theorem 2.3, bounded by the line Im(z) = −α, the vertical segments at Re(z) = ±M, and the real axis indented around the pole at the origin by the half-circle R_ε.]
[Figure 2: the closed path C_{M,ε} for the proof of Theorem 3.1(b), in the upper half-plane: the line Im(z) = −α (α < 0), the vertical segments at Re(z) = ±M, and the real axis indented around the pole at the origin by the half-circle R_ε.]