ITÔ CALCULUS AND DERIVATIVE PRICING WITH RISK-NEUTRAL
MEASURE
MAX CYTRYNBAUM
Abstract. This paper will develop some of the fundamental results in the theory of Stochastic
Differential Equations (SDE). After a brief review of stochastic processes and the Itô Calculus, we
set our sights on some of the more advanced machinery needed to work with SDE. Chief among our
results are the Feynman-Kac Theorem, which establishes a link between stochastic methods and
the classic PDE approach, and the Girsanov theorem, which allows us to change the drift of an Itô
diffusion by switching to an equivalent martingale measure. These results are also valuable on a
practical level, and we will consider some of their applications to derivative pricing calculations in
mathematical finance.
Contents
1. Introduction and Motivation
2. Itô Calculus and Brownian Motion
3. Stochastic Differential Equations
4. Itô Diffusions
5. Feynman-Kac and PDE
6. The Girsanov Theorem
7. Risk-Neutral Measure and Black-Scholes
Acknowledgments
References
1. Introduction and Motivation
In many applications, the evolution of a system with some fixed initial state is subject to random
perturbations from the environment. For example, a stock price may rise in the long term yet be
subject to seemingly random short-term fluctuations. Similarly, a small object in a liquid suspension
may experience random impulses from collisions with surrounding water particles. This leads us to
the consideration of a differential equation of the following form:
\[ \frac{dX_t}{dt} = b(t, X_t) + \sigma(t, X_t)\cdot W_t \tag{1.1} \]
where Wt represents “white noise”. The way to formulate this precisely is through Itô calculus and
Brownian motion. We will briefly review some of the basic results from Itô calculus. We assume
the reader has a working knowledge of measure theoretic probability and is comfortable with the
properties of conditional expectation.
2. Itô Calculus and Brownian Motion
We will closely follow the exposition in [4]. Let (Ω, F, P ) be a probability space.
Definition 2.1 (Filtration). We say that {Ft }t≥0 is a filtration on the space (Ω, F, P ) if
(1) Ft is a sub-σ-algebra of F for each t
(2) Ft ⊂ Fs for all t ≤ s.
A measurable function f : [0, ∞) × Ω → Rn is said to be adapted to the filtration {Ft }t≥0 if
ω → f (t, ω) is an Ft measurable function for each t.
The sub-σ-algebra Ft represents the amount of information available to us at time t. Intuitively,
adaptedness says that the value ft (ω) of an adapted process can be determined using only the
information in the σ-algebras up to Ft . This inability to see into the future is important in
mathematical finance, where f (t, ω) may represent some investment strategy and Ft may be the
price information available up to time t. Thus, we cannot use future price information to determine
our strategy.
Definition 2.2 (Brownian Motion). We call a stochastic process B(t, ω) = Bt (ω) a version of
Brownian motion starting at x ∈ R when
(1) B0 = x for all ω
(2) For all times 0 ≤ t1 ≤ t2 ≤ · · · ≤ tn , the increments B(t2 ) − B(t1 ), B(t3 ) − B(t2 ), · · · , B(tn ) − B(tn−1 ) are
independent random variables
(3) B(t + h) − B(t) is normally distributed with mean 0 and variance h for all t, h ∈ [0, ∞)
(4) B(t) is P -a.s. t-continuous
Similarly, we may define Bt ∈ Rn to be a Brownian motion if B(t) = (B1 (t), . . . , Bn (t)) where
each Bi (t) is a version of 1-dimensional Brownian motion and all the Bi (t) are independent. For a
construction of Brownian motion, we refer the reader to [5].
Definition 2.3 (Martingale). A stochastic process {Mt } is said to be a martingale with respect
to the filtration {Ft }t≥0 if
(1) {Mt } is adapted to {Ft }t≥0
(2) E[|Mt |] < ∞ for all t
(3) For all 0 ≤ s < t and for a.s. ω
E[Mt |Fs ] = Ms
A martingale neither increases nor decreases on average and can thus be thought of as a model
of a fair game. From the above it is also clear that for any t we have E[Mt ] = E[M0 ]. In what
follows, we let {Ft }t≥0 be the filtration generated by {Bt } (i.e. Ft is the σ-algebra generated by
the maps ω → B(s, ω) for s ≤ t.) Then using independence of increments and properties of conditional
expectation, we calculate:
E[Bt |Fs ] = E[Bt − Bs |Fs ] + E[Bs |Fs ] = E[B(t) − B(s)] + B(s) = B(s)
so that Brownian motion is a martingale with respect to its own filtration. For more information
on martingales in discrete-time, see [7]. We now attempt to find a precise interpretation of the
SDE (1.1). To do this, we rewrite (1.1) in the more suggestive form:
\[ dX_t = b(t, X_t)\,dt + \sigma(t, X_t)\,dB_t \tag{2.1} \]
where we have interpreted white noise in terms of brownian motion as Wt dt = dBt , which leads to
the tentative integral equation:
\[ X_t = \int b(t, X_t)\,dt + \int \sigma(t, X_t)\,dB_t \tag{2.2} \]
Thinking in terms of Riemann-Stieltjes integration, it seems natural to interpret an integral of the
form ∫ f (t, ω) dBt (ω) as a limit of sums of the form:
\[ \sum_j f(t_j, \omega)\,\Delta B_{t_j}(\omega) \]
Intuitively, the increments f (tj , ω)∆Btj (ω) reflect the random change in Xt due to white noise. We
now make this notion precise.
Definition 2.4 (Itô Integrable). We say that a function f (t, ω) is Itô Integrable on [S, T ] if the
following conditions are satisfied:
(1) f (t, ω) is B ⊗ F measurable
(2) f (t, ·) is Ft -measurable for each t
(3) E[ ∫_S^T f (t, ω)² dt ] < ∞
In this case we write f ∈ V[S, T ], following the notation in [4]. We say that a function φ(t, ω) is
elementary if φ ∈ V[S, T ] and φ(t, ω) = Σ_j φ(tj , ω)X[tj ,tj+1 ] , where {tj } is a partition of the interval
[S, T ]. We define:
\[ \int_S^T \varphi\,dB_t := \sum_j \varphi(t_j)\,\Delta B_j \]
where ∆Bj := B(tj+1 ) − B(tj ). This yields the following
Lemma 2.5 (Itô Isometry). If φ is elementary and φ ∈ V[S, T ], then we have
\[ E\left[\left(\int_S^T \varphi(t,\omega)\,dB_t\right)^2\right] = E\left[\int_S^T \varphi(t,\omega)^2\,dt\right] \]
Proof.
\[ E\left[\left(\int_S^T \varphi(t,\omega)\,dB_t\right)^2\right] = E\left[\left(\sum_j \varphi(t_j,\omega)\,\Delta B_j\right)^2\right] = E\left[\sum_{i,j} \varphi(t_i,\omega)\,\varphi(t_j,\omega)\,\Delta B_i\,\Delta B_j\right] \]
Now, since the φ(tj ) are Ftj -measurable and the increments of Brownian motion are independent,
∆Bj is independent of φ(ti )φ(tj )∆Bi when i < j. Since E[∆Bj ] = 0, we get E[φ(ti )φ(tj )∆Bi ∆Bj ] =
0 for i < j. Then the above simplifies to:
\[ E\left[\sum_j \varphi(t_j,\omega)^2(\Delta B_j)^2\right] = \sum_j \Delta t_j\,E\left[\varphi(t_j,\omega)^2\right] = E\left[\sum_j \varphi(t_j,\omega)^2\,\Delta t_j\right] = E\left[\int_S^T \varphi(t,\omega)^2\,dt\right] \]
The first identity uses the fact that ∆Bj and φ(tj ) are independent, together with the result from
Definition 2.2 that E[(∆Bj )²] = ∆tj .
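The isometry is also easy to verify by simulation. A minimal sketch (the integrand φ(t, ω) = Bt , evaluated at the left endpoints of a grid, is an arbitrary illustrative choice; for it both sides equal T²/2):

```python
import numpy as np

rng = np.random.default_rng(0)
n_paths, n_steps, T = 100_000, 100, 1.0
dt = T / n_steps

# Brownian increments Delta B_j ~ N(0, dt)
dB = rng.normal(0.0, np.sqrt(dt), size=(n_paths, n_steps))
B = np.concatenate([np.zeros((n_paths, 1)), np.cumsum(dB, axis=1)], axis=1)

# Integrand evaluated at left endpoints: phi(t_j, w) = B_{t_j}(w)
phi = B[:, :-1]

# Ito sum  sum_j phi(t_j) Delta B_j  and the two sides of the isometry
ito_integral = np.sum(phi * dB, axis=1)
lhs = np.mean(ito_integral**2)              # E[(int phi dB)^2]
rhs = np.mean(np.sum(phi**2, axis=1) * dt)  # E[int phi^2 dt]
print(lhs, rhs)  # both ≈ T^2/2 = 0.5 for phi = B_t
```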
We can now use this result to extend the Itô integral to all of V[S, T ]. Specifically, given f ∈
V[S, T ] let {φn } be a sequence of elementary functions such that:
\[ E\left[\int_S^T \big(f(t,\omega) - \varphi_n(t,\omega)\big)^2\,dt\right] \to 0 \quad \text{as } n \to \infty \tag{2.3} \]
As the reader can check, Lemma 2.5 and equation (2.3) imply that the integrals ∫ φn dBt form a
Cauchy sequence in L² (P ). Since L² (P ) is complete, we can define:
\[ \int_S^T f(t,\omega)\,dB_t(\omega) = \lim_{n\to\infty}\int_S^T \varphi_n(t,\omega)\,dB_t(\omega) \tag{2.4} \]
where the limit is taken in L² (P ). Standard analysis arguments can be used to show that this limit
is independent of the chosen sequence {φn }. For a closer look at the existence of such a sequence
{φn }, see [4]. We now provide a brief review of some of the properties of ∫ f dBt :
Proposition 2.6 (Properties of the Itô Integral). Let [S, T ] ⊂ R+ and let f ∈ V[S, T ].
(1) f ↦ ∫_S^T f dBt is a linear operator on V[S, T ]
(2) E[ ∫_S^T f dBt ] = 0
(3) ∫_S^T f dBt is FT -measurable
(4) E[ ( ∫_S^T f dBt )² ] = E[ ∫_S^T f ² dt ]
These all follow by proving the statement for elementary functions and taking limits using equation (2.4). With a little more work we can show
Theorem 2.7 (Martingale Property). Let f ∈ V[0, T ], 0 ≤ t ≤ T .
(1) There exists a t-continuous version of ∫_0^t f dBs
(2) Put Mt (ω) = ∫_0^t f dBs (ω). Then Mt is an {Ft }t≥0 martingale.
Theorem 2.7(2) follows from the fact that Bt is a martingale with respect to its own filtration. We
refer the reader to page 32 of [4] for the proof. The above extends naturally to n dimensions. Let
V n,m [S, T ] be the space of n × m matrices v(t, ω) such that each vij (t, ω) ∈ V[S, T ]. Let Bt ∈ Rm .
Then we define
\[ \int_S^T v\,dB_t = \int_S^T \begin{pmatrix} v_{11} & \cdots & v_{1m} \\ \vdots & & \vdots \\ v_{n1} & \cdots & v_{nm} \end{pmatrix} dB_t \]
to be the n × 1 vector with ith component
\[ \sum_{k=1}^m \int_S^T v_{ik}(t,\omega)\,dB_k(t,\omega) \]
Definition 2.8 (Itô Process). Let v ∈ V[0, T ] and let u(t, ω) be a measurable stochastic process
adapted to the filtration {Ft }t≥0 and such that
\[ P\left[\int_0^t |u(s,\omega)|\,ds < \infty \ \text{for all } t \ge 0\right] = 1 \]
Then we say that X(t, ω) is an Itô process if Xt has the form
\[ X_t = X_0 + \int_0^t u(s,\omega)\,ds + \int_0^t v(s,\omega)\,dB_s \]
This extends naturally to the n-dimensional case by considering expressions of the form
\[ X(t) = X(0) + \int_0^t u\,ds + \int_0^t v\,dB_s \]
where u is an n × 1 matrix and v is an n × m matrix with all components satisfying the conditions of
Definition 2.8. This leads us to the following fundamental theorem, which shows that Itô processes
are invariant under sufficiently smooth maps. This can be considered the stochastic version of the
chain rule.
Theorem 2.9 (Itô Lemma). Let
\[ X(t) = X(0) + \int_0^t u(s,\omega)\,ds + \int_0^t v(s,\omega)\,dB_s \]
be an n-dimensional Itô process. Let f : [0, ∞) × R^n → R^m be a C² map. Define a new
stochastic process Y (t) = f (t, Xt ). Then Y (t) is an Itô process as above, and the ith coordinate is
given by
\[ dY_i(t) = \frac{\partial f_i}{\partial t}(t, X_t)\,dt + \sum_k \frac{\partial f_i}{\partial x_k}(t, X_t)\,dX_k + \frac{1}{2}\sum_{k,j}\frac{\partial^2 f_i}{\partial x_k \partial x_j}(t, X_t)\,(dX_k)(dX_j) \]
where (dXk )(dXj ) is calculated according to the rules (dt)² = 0, (dt)(dBk ) = (dBk )(dt) = 0 for all k,
and (dBk )(dBj ) = (dBj )(dBk ) = δjk dt.
We omit the proof (see page 46 of [4]) in order to start solving SDE as quickly as possible, merely
noting that the second-order term above, notably absent in the classic chain rule, comes from the
fact that Brownian motion has non-zero quadratic variation.
Proposition 2.10 (Integration by Parts). Let Xt and Yt be 1-dimensional stochastic processes.
Then
d(Xt Yt ) = Xt dYt + Yt dXt + dXt · dYt
(2.5)
so that
\[ \int_0^t X_s\,dY_s = X_t Y_t - X_0 Y_0 - \int_0^t Y_s\,dX_s - \int_0^t (dX_s)(dY_s) \]
Proof. Let g : R² → R, g(x1 , x2 ) = x1 x2 . Then g is smooth, so applying the Itô lemma,
\[ d(g(X_t, Y_t)) = \frac{\partial g}{\partial x_1}(X_t, Y_t)\,dX_t + \frac{\partial g}{\partial x_2}(X_t, Y_t)\,dY_t + \frac{1}{2}\cdot 2\cdot\frac{\partial^2 g}{\partial x_1 \partial x_2}(X_t, Y_t)\,(dX_t)(dY_t) = Y_t\,dX_t + X_t\,dY_t + (dX_t)(dY_t) \]
Then the integral equation above follows immediately from equation (2.5).
3. Stochastic Differential Equations
In this section, we illustrate the power of Itô’s Lemma by explicitly solving an elementary SDE.
We will then prove an important existence and uniqueness result for well-behaved SDE. Let us
consider a model for population growth in a stochastic environment.
Example 3.1 (Geometric Brownian Motion). Put θt = α + β · Wt where Wt is white noise and
define
dXt = θt Xt dt
This seems a reasonable model for a growing population subject to random environmental shocks.
Using the Brownian motion interpretation, we can rewrite this as
\[ dX_t = \alpha X_t\,dt + \beta X_t\,dB_t \tag{3.1} \]
Expecting some sort of exponential behavior, we apply the Itô Lemma to the C² function f (x) =
ln(x). Using Theorem 2.9, we get
\[ d(f(X_t)) = f'(X_t)\,dX_t + \frac{1}{2}f''(X_t)(dX_t)^2 = \frac{1}{X_t}\,dX_t - \frac{1}{2X_t^2}\left(\beta^2 X_t^2\,dt\right) = \frac{1}{X_t}\,dX_t - \frac{\beta^2}{2}\,dt \]
Combining this with equation (3.1), we get that
\[ d(\ln(X_t)) = \left(\alpha - \frac{\beta^2}{2}\right)dt + \beta\,dB_t \]
so that
\[ \ln\left(\frac{X_t}{X_0}\right) = \left(\alpha - \frac{\beta^2}{2}\right)t + \beta B_t \]
or
\[ X_t = X_0\exp\left(\left(\alpha - \frac{\beta^2}{2}\right)t + \beta B_t\right) \]
From the law of the iterated logarithm for Brownian motion (see [5]), we see that if α > β²/2, then
Xt (ω) → ∞ as t → ∞ for a.s. ω, while for lower values of α the stochastic term takes over. Given empirically
determined coefficients α and β, when do we expect the population to reach a certain size N ? We
will return to this question later.
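The closed-form solution also makes geometric Brownian motion easy to simulate. The sketch below (parameter values are arbitrary) samples paths of Xt and checks the known mean E[Xt ] = X0 e^{αt}:

```python
import numpy as np

rng = np.random.default_rng(1)
x0, alpha, beta = 1.0, 0.5, 0.3   # here alpha > beta^2/2, so X_t -> infinity a.s.
T, n_steps, n_paths = 5.0, 500, 20_000
t = np.linspace(0.0, T, n_steps + 1)

# Sample Brownian paths and evaluate the closed-form solution
# X_t = x0 exp((alpha - beta^2/2) t + beta B_t)
dB = rng.normal(0.0, np.sqrt(T / n_steps), size=(n_paths, n_steps))
B = np.concatenate([np.zeros((n_paths, 1)), np.cumsum(dB, axis=1)], axis=1)
X = x0 * np.exp((alpha - beta**2 / 2) * t + beta * B)

# Sanity check: E[X_t] = x0 exp(alpha t) for geometric Brownian motion
print(X[:, -1].mean(), x0 * np.exp(alpha * T))
```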
Using the Itô Lemma, a wide range of classic ODE methods such as matrix exponentiation and
integrating factors can be applied to suitable SDE. See the exercises in Chapter 5 of [4] for a wealth
of examples. Now we turn to the interesting question of existence and uniqueness of solutions to
SDE.
Lemma 3.2 (Gronwall Inequality). Let v(t) be a non-negative Borel function such that for some
constants C, A ≥ 0 we have
\[ v(t) \le C + A\int_0^t v(s)\,ds \]
Then we have
\[ v(t) \le C\exp(At) \]
Theorem 3.3 (Existence and Uniqueness of SDE). Let T ≥ 0 and b(·, ·) : [0, T ] × R^n → R^n ,
σ(·, ·) : [0, T ] × R^n → R^{n×m} be measurable functions such that we have
\[ |b(t,x)| + |\sigma(t,x)| \le C(1 + |x|) \tag{3.2} \]
for some constant C and for all t ∈ [0, T ], x ∈ R^n (where |σ| = Σ |σij |). Assume also that
\[ |b(t,x) - b(t,y)| + |\sigma(t,x) - \sigma(t,y)| \le D|x - y| \tag{3.3} \]
for some constant D and all t ∈ [0, T ], x, y ∈ R^n . Let Z be a random variable with finite second
moment such that Z is independent of F∞^(m) := σ(Ft^(m) : t ≥ 0), where Ft^(m) is generated by Bt . Then
the SDE
\[ dX_t = b(t, X_t)\,dt + \sigma(t, X_t)\,dB_t, \qquad X_0 = Z(\omega) \]
has a unique t-continuous solution that is adapted to Ft^Z := σ(Ft^(m) , σ(Z)) and
\[ E\left[\int_0^T |X_t|^2\,dt\right] < \infty \tag{3.4} \]
Proof. Uniqueness: Suppose there exists another solution X̃t = Z̃ + ∫_0^t b(s, X̃s ) ds + ∫_0^t σ(s, X̃s ) dBs .
Put f (t, ω) = b(t, Xt ) − b(t, X̃t ) and put g(t, ω) = σ(t, Xt ) − σ(t, X̃t ). Then we calculate
\[ E\left[|X_t - \tilde{X}_t|^2\right] = E\left[\left|Z - \tilde{Z} + \int_0^t f\,ds + \int_0^t g\,dB_s\right|^2\right] \le 3E\left[|Z - \tilde{Z}|^2\right] + 3E\left[\left|\int_0^t f\,ds\right|^2\right] + 3E\left[\left|\int_0^t g\,dB_s\right|^2\right] \le 3E\left[|Z - \tilde{Z}|^2\right] + 3tE\left[\int_0^t |f|^2\,ds\right] + 3E\left[\int_0^t |g|^2\,ds\right] \]
The above squares should be interpreted in the matrix sense. The first inequality uses (x + y + z)² ≤
3x² + 3y² + 3z², and the second inequality uses the Cauchy-Schwarz inequality together with the
Itô isometry. Then equation (3.3) implies that the above is
\[ \le 3E\left[|Z - \tilde{Z}|^2\right] + 3tD^2 E\left[\int_0^t |X_s - \tilde{X}_s|^2\,ds\right] + 3D^2 E\left[\int_0^t |X_s - \tilde{X}_s|^2\,ds\right] = 3E\left[|Z - \tilde{Z}|^2\right] + 3D^2(t+1)\int_0^t E\left[|X_s - \tilde{X}_s|^2\right]ds \]
Condition (3.4) allows us to use Fubini in the second line. Put B = 3D2 (T + 1), A = 3E[|Z − Z̃|2 ].
Then with v(t) = E[|Xt − X̃t |2 ], we have that
\[ v(t) \le A + B\int_0^t v(s)\,ds \]
Then Gronwall implies that
v(t) ≤ A exp(Bt)
Set Z = Z̃; then we have that v(t) = 0, so that for a.s. ω, |Xt − X̃t | = 0. We get that
\[ P\left[\,|X_t - \tilde{X}_t| = 0 \ \text{for all } t \in \mathbb{Q}\cap[0,T]\,\right] = 1 \]
By the assumed t-continuity of solutions, we find that t → |Xt − X̃t | is continuous, so that we
actually have Xt = X̃t for all t ∈ [0, T ] for a.s. ω, which completes the proof of uniqueness. The
proof of existence uses Picard iterations and is very similar to the proof for ODE. Because of its
length, it is omitted. See [4, p.70].
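The Picard iteration behind the existence proof can be imitated numerically on a fixed, discretized Brownian path. The sketch below (for the geometric Brownian motion of Example 3.1; all parameter values are illustrative) iterates the Euler-discretized map X ↦ x0 + Σ b(Xs )∆t + Σ σ(Xs )∆Bs and records the distance between successive iterates:

```python
import numpy as np

rng = np.random.default_rng(2)
alpha, beta, x0 = 0.5, 0.3, 1.0   # GBM coefficients; growth and Lipschitz conditions hold
T, n = 1.0, 1000
dt = T / n
dB = rng.normal(0.0, np.sqrt(dt), size=n)  # one fixed Brownian path

def picard_step(X):
    # X'_i = x0 + sum_{j<i} b(X_j) dt + sum_{j<i} sigma(X_j) dB_j,
    # with b(x) = alpha*x and sigma(x) = beta*x
    drift = np.cumsum(alpha * X[:-1] * dt)
    noise = np.cumsum(beta * X[:-1] * dB)
    return np.concatenate(([x0], x0 + drift + noise))

X = np.full(n + 1, x0)   # X^(0) is the constant initial guess
gaps = []
for _ in range(30):
    X_next = picard_step(X)
    gaps.append(np.max(np.abs(X_next - X)))
    X = X_next
print(gaps[0], gaps[-1])  # successive iterates contract to a fixed point
```

Because each discretized iterate depends only on earlier grid points, the iteration map is nilpotent on the grid and the gaps collapse rapidly, mirroring the C^k/k! convergence rate of the Picard scheme.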
The solution given by the previous theorem is called a strong solution because it is constructed
with respect to a specific version of Brownian motion Bt . By a weak solution, we mean any pair
of processes (X̂t , B̂t ) that satisfy the equation
\[ d\hat{X}_t = b(t, \hat{X}_t)\,dt + \sigma(t, \hat{X}_t)\,d\hat{B}_t \tag{3.5} \]
An SDE has a strongly unique solution if any two solutions are equal for a.s. (t, ω). An SDE is said
to have a weakly unique solution if any two solutions have the same law. This distinction is subtle
but will be necessary for our proof of the Girsanov theorem. What we require is the following:
Lemma 3.4. Let b and σ be as in the previous theorem. Then any solution to equation (3.5) is
weakly unique.
4. Itô Diffusions
An Itô diffusion is a model of a well-behaved stochastic process that shares some fundamental
characteristics with nice processes like n-dimensional Brownian motion. Specifically,
Definition 4.1 (Itô Diffusion). We say a stochastic process Xt ∈ R^n is an Itô diffusion if it
satisfies a stochastic differential equation
dXt = b(Xt )dt + σ(Xt )dBt
where b : Rn → Rn and σ : Rn → Rn×m are functions satisfying the conditions of Theorem 3.3.
Since we have no time argument, this just means that
|b(x) − b(y)| + |σ(x) − σ(y)| ≤ D|x − y|
for all
x, y ∈ Rn
Theorem 3.3 shows that the above is well-defined. The next concept is fundamental to the study
of both martingales and stochastic processes.
Definition 4.2 (stopping time). Let {Mt }t≥0 be a filtration on the probability space (Ω, M, P )
and let τ : Ω → R. We say that τ is a stopping time w.r.t the filtration {Mt }t≥0 if for each t we
have
{ω : τ (ω) ≤ t} ∈ Mt
Sometimes we will write {τ ≤ t} for {ω : τ (ω) ≤ t}. The intuition behind the above definition
is that one can know whether or not a stopping time has passed at time t without looking into
the future (only using the information in Mt ). Thus if Xt represents the value of a portfolio, and
{Mt }t≥0 is the filtration generated by Xt , then sup{t : Xt > 20} would not be a stopping time
while inf{t : Xt > 20} would be. The reader can show that any deterministic time t is trivially a
stopping time w.r.t. any filtration.
Proposition 4.3. Let τ1 and τ2 both be stopping times w.r.t. {Mt }. Then the following random
variables are also stopping times w.r.t. {Mt }
(1) τ1 ∧ τ2
(2) τ1 ∨ τ2
Here, (τ1 ∧τ2 )(ω) = τ1 (ω)∧τ2 (ω) := min(τ1 (ω), τ2 (ω)). The maximum in (2) is interpreted likewise.
Moreover, if τn is a sequence of stopping times such that lim_{n→∞} τn = τ and such that, for each ω,
τn (ω) ≤ τ (ω) for large n, then τ is a stopping time w.r.t. the same filtration.
Proof. For (1) and (2), note that {τ1 ∧ τ2 ≤ t} = {τ1 ≤ t} ∪ {τ2 ≤ t} ∈ Mt and that {τ1 ∨ τ2 ≤
t} = {τ1 ≤ t} ∩ {τ2 ≤ t} ∈ Mt . For the last statement, the reader can check that in this case we
have
\[ \{\tau \le t\} = \bigcup_{n=1}^{\infty}\bigcap_{m \ge n}\{\tau_m \le t\} \in \mathcal{M}_t \]
Definition 4.4. Let {Mt } be a filtration on Ω and let M∞ be the smallest σ-algebra containing
all the Mt . By Mτ we mean the σ-algebra of all sets M ∈ M∞ such that
M ∩ {τ ≤ t} ∈ Mt for all t ≥ 0
Intuitively, Mτ is the σ-algebra of all events that occurred before the stopping time τ . In what
follows, Xt^{s,x} will denote a diffusion started at the point x ∈ R^n at time s, and E^{s,x} will
denote expectation w.r.t. the law of Xt^{s,x} . One of the properties that Itô diffusions share
with Brownian motion is their “forgetfulness”. Formally, E^x [X(t + h)|Ft ] = E^{X(t)} [Xh ]. In fact,
more is true:
Theorem 4.5 (Strong Markov Property). Let Xt^x ∈ R^n be an Itô diffusion such that X0 = x and
let τ be a stopping time w.r.t. the filtration {Ft^(m) }t≥0 . Then given any bounded Borel function
f : R^n → R,
\[ E^x[f(X_{\tau+h})\,|\,\mathcal{F}_\tau] = E^{X_\tau}[f(X_h)] \]
The proof involves a lot of technical bookkeeping, so we omit it (see page 117 of [4]). Let
M∞ = σ(Mt : t ≥ 0), where Mt is the σ-algebra generated by Xt . If we define the shift operator θt
by θt (Xs ) = Xs+t , it can be shown that the strong Markov property extends to M∞ , so that for
f a bounded, M∞ -measurable function:
\[ E^x[\theta_\tau f\,|\,\mathcal{F}_\tau^{(m)}] = E^{X_\tau}[f] \tag{4.1} \]
The next lemma is fundamental:
Lemma 4.6. Let Yt be an Itô process in R^n such that
\[ Y_t = x + \int_0^t u(s,\omega)\,ds + \int_0^t v(s,\omega)\,dB_s \]
Let τ be a stopping time w.r.t. the filtration {Ft^(m) }t≥0 . Let f ∈ C0² (R^n ) and let u and v above be
bounded on the set of (s, ω) such that Y (s, ω) ∈ supp(f ). Also, suppose E^x [τ ] < ∞. Then we get
\[ E^x[f(Y_\tau)] = f(x) + E^x\left[\int_0^\tau \left(\sum_i \frac{\partial f}{\partial x_i}(Y_s)\,u_i(s,\omega) + \frac{1}{2}\sum_{i,j}\frac{\partial^2 f}{\partial x_i \partial x_j}(Y_s)\,(vv^T)_{ij}(s,\omega)\right)ds\right] \tag{4.2} \]
Proof. Note that dYi (t) = ui dt + Σ_k vik dBk . Applying the Itô lemma to the C² function f , we find
that
\[ d(f(Y_t)) = \sum_i \frac{\partial f}{\partial x_i}\,dY_i + \frac{1}{2}\sum_{i,j}\frac{\partial^2 f}{\partial x_i \partial x_j}\,(dY_i)(dY_j) = \sum_i \frac{\partial f}{\partial x_i}u_i\,dt + \sum_{i,k}\frac{\partial f}{\partial x_i}v_{ik}\,dB_k + \frac{1}{2}\sum_{i,j}\frac{\partial^2 f}{\partial x_i \partial x_j}\sum_k v_{ik}v_{jk}\,dt \]
where we have used that (dYi )(dYj ) = Σ_k vik vjk dt from the Itô lemma. Note that this sum can be
rewritten as (vv^T )ij . Writing this differential equation in integral form, replacing t with the stopping
time τ , and taking an expectation, we get that
\[ E^x[f(Y_\tau)] = f(x) + E^x\left[\int_0^\tau \left(\sum_i \frac{\partial f}{\partial x_i}(Y_s)\,u_i(s,\omega) + \frac{1}{2}\sum_{i,j}\frac{\partial^2 f}{\partial x_i \partial x_j}(Y_s)\,(vv^T)_{ij}(s,\omega)\right)ds\right] + E^x\left[\int_0^\tau \sum_{i,k}\frac{\partial f}{\partial x_i}(Y_s)\,v_{ik}(s,\omega)\,dB_k\right] \]
Therefore it will suffice to prove that the last expectation is 0. Let g be a bounded Borel function
on R^n s.t. |g| < K. Then g(Ys ) ∈ V[0, T ] for all T , so with k > 0 we calculate
\[ E^x\left[\int_0^{\tau\wedge k} g(Y_s)\,dB_s\right] = E^x\left[\int_0^{k} \mathcal{X}_{\{s \le \tau\}}\,g(Y_s)\,dB_s\right] = 0 \tag{4.3} \]
because if τ is a stopping time then the product of g(Ys ) and the indicator X{s≤τ } is in V[0, k], so we
can apply Proposition 2.6(2). Now note that
\[ E^x\left[\left(\int_0^{\tau} g(Y_s)\,dB_s - \int_0^{\tau\wedge k} g(Y_s)\,dB_s\right)^2\right] = E^x\left[\int_{\tau\wedge k}^{\tau} g(Y_s)^2\,ds\right] \le K^2\,E^x[\tau - \tau\wedge k] \]
and since E^x [τ ] < ∞, this tends to 0 as k → ∞. Note that the equality above uses the
Itô isometry. Then we have that
\[ \int_0^{\tau\wedge k} g(Y_s)\,dB_s \to \int_0^{\tau} g(Y_s)\,dB_s \]
in L² (P^x ) as k → ∞. In particular the expectations converge, so equation (4.3) implies that the
limit has expectation 0, which completes the proof of the lemma.
In many applications, we will wish to associate a second-order partial differential operator
to each Itô diffusion. This operator encodes a wealth of information about the process Xt and
will allow us to forge a direct link between stochastic theory and PDE through the Feynman-Kac
formula.
Definition 4.7 (Infinitesimal Generator). Let Xt be an Itô diffusion. We define the infinitesimal
generator :
\[ Af(x) = \lim_{t\downarrow 0}\frac{E^x[f(X_t)] - f(x)}{t} \]
We let DA denote the set of functions f for which the above limit exists at all x ∈ Rn .
Theorem 4.8. Let dXt = b(Xt )dt + σ(Xt )dBt be an Itô diffusion in R^n . If f ∈ C0² (R^n ), then
f ∈ DA and for all x ∈ R^n we have that
\[ Af(x) = \sum_i b_i(x)\frac{\partial f}{\partial x_i} + \frac{1}{2}\sum_{i,j}(\sigma\sigma^T)_{ij}(x)\frac{\partial^2 f}{\partial x_i \partial x_j} \]
Proof. Apply Lemma 4.6 to the Itô diffusion Xt with τ = t. Using bounded convergence to pass
the limit through the expectation, the theorem is a simple consequence of the fundamental theorem
of calculus.
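Definition 4.7 and Theorem 4.8 can be compared numerically. For the geometric Brownian motion dXt = αXt dt + βXt dBt of Example 3.1 and f (x) = x², Theorem 4.8 gives Af (x) = 2αx² + β²x²; a Monte Carlo sketch of the difference quotient (parameter values arbitrary; Xt is sampled from its exact closed form):

```python
import numpy as np

rng = np.random.default_rng(3)
alpha, beta, x, t = 0.1, 0.2, 1.0, 0.01
n_paths = 400_000

# Sample X_t exactly: X_t = x exp((alpha - beta^2/2) t + beta B_t)
B_t = rng.normal(0.0, np.sqrt(t), size=n_paths)
X_t = x * np.exp((alpha - beta**2 / 2) * t + beta * B_t)

# Difference quotient (E^x[f(X_t)] - f(x)) / t for f(x) = x^2
Af_mc = (np.mean(X_t**2) - x**2) / t
Af_exact = (2 * alpha + beta**2) * x**2   # formula from Theorem 4.8
print(Af_mc, Af_exact)
```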
Thinking of the operator A as a derivative of sorts, the following important theorem can be seen
as a generalization of the fundamental theorem of calculus:
Theorem 4.9 (Dynkin’s Formula). Let f ∈ C0² (R^n ) and let Xt be an Itô diffusion as above. Assume
also that τ is a stopping time w.r.t. the filtration {Ft^(m) }t≥0 with E^x [τ ] < ∞. Then we have
\[ E^x[f(X_\tau)] = f(x) + E^x\left[\int_0^\tau Af(X_s)\,ds\right] \]
Proof. This follows immediately from Theorem 4.8 and the above lemma.
Example 4.10 (Population Growth and Hitting Times). In Section 3, we considered a stochastic
model of population growth and produced the equation
\[ X_t = x\exp\left(\left(\alpha - \frac{\beta^2}{2}\right)t + \beta B_t\right) \tag{4.4} \]
If α > β²/2, Xt → ∞ for a.s. ω. If α < β²/2, Xt → 0 for a.s. ω. Given coefficients α and β, we ask:
(i) for α > β²/2, when do we expect the population to reach a certain size R?
(ii) for α < β²/2, do we expect the population to ever reach size R?
For fixed γ ∈ R we can use Theorem 4.8 to compute Xt ’s infinitesimal generator for f (x) = x^γ .
By letting f go to 0 in a smooth way, we may assume that x^γ ∈ C0² (R). Then since dXt =
αXt dt + βXt dBt , we get
\[ Af(x) = \alpha x f'(x) + \frac{1}{2}\beta^2 x^2 f''(x) = \gamma x^\gamma\left(\alpha + \frac{\beta^2(\gamma-1)}{2}\right) \tag{4.5} \]
If we put γ1 = 1 − 2α/β² and f (x) = x^{γ1 }, then from equation (4.5) we see that Af (x) = 0. We define
τn = inf{t > 0 : Xt ∉ [1/n, R]}. This is known as a hitting time, and it can be shown that the hitting
time for any Borel set is a stopping time w.r.t. {Ft }. Let g ∈ C0² (R) with g(x) = ln(x) for
x ∈ [1/n, R] (i.e. let g go to 0 in some smooth way outside a compact set containing [1/n, R]). Then
Theorem 4.8 implies that Ag(x) = α − ½β² on [1/n, R]. Applying the Dynkin formula to g with the
stopping time τn ∧ k we get
\[ E^x[g(X_{\tau_n \wedge k})] = \ln(x) + E^x\left[\int_0^{\tau_n \wedge k} Ag(X_s)\,ds\right] = \ln(x) + \left(\alpha - \frac{\beta^2}{2}\right)E^x[\tau_n \wedge k] \]
Since g is bounded, we can apply bounded convergence on both sides and let k → ∞. Since
τn ∧ k → τn pointwise for a.s. ω, the above implies that E^x [τn ] < ∞. This allows us to apply
Dynkin to the function f (x) = x^{γ1 } with the stopping time τn . Since Af = 0,
\[ E^x[f(X_{\tau_n})] = f(x) + 0 = x^{1 - 2\alpha/\beta^2} \quad \text{for all } n \tag{4.6} \]
If we let (1 − pn ) be the probability that Xt exits the bottom of the interval [1/n, R] before it exits
at R, (4.6) becomes
\[ (1 - p_n)\left(\frac{1}{n}\right)^{\gamma_1} + p_n R^{\gamma_1} = x^{\gamma_1} \]
so that
\[ p_n = \left(\frac{x}{R}\right)^{\gamma_1} - (1 - p_n)\left(\frac{1}{R\,n}\right)^{\gamma_1} \]
Since α < ½β² implies that γ1 > 0, we see that lim_n pn = (x/R)^{γ1 }. Let En be the event that
Xτn = R. Then En ⊂ En+1 and
\[ A := \{\omega : X_t(\omega) = R \ \text{for some } t > 0\} = \bigcup_n E_n \]
So from basic probability,
\[ P^x(A) = P^x\left(\bigcup_n E_n\right) = \lim_n p_n = \left(\frac{x}{R}\right)^{\gamma_1} \]
where x is the initial population x = X0 , which answers question (ii). Now suppose that α > β²/2.
For (i), if we apply the Dynkin formula with f (x) = ln(x) as above, similar arguments can be used
to show that E^x [τR ], the expected time for a population with initial size x to reach size R,
is given by
\[ E^x[\tau_R] = \frac{\ln(R/x)}{\alpha - \frac{1}{2}\beta^2} \]
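The formula for E^x [τR ] lends itself to a simulation check. The sketch below (illustrative parameters with α > β²/2) works with the log-process Yt = ln(Xt /x), whose first passage over ln(R/x) is τR ; the grid discretization introduces a small upward bias:

```python
import numpy as np

rng = np.random.default_rng(4)
alpha, beta, x, R = 0.5, 0.2, 1.0, 2.0   # alpha > beta^2/2, so X_t reaches R a.s.
T, n_steps, n_paths = 10.0, 4000, 4000
dt = T / n_steps

# Y_t = ln(X_t / x) = (alpha - beta^2/2) t + beta B_t, and
# tau_R = inf{t : Y_t >= ln(R/x)}
dY = (alpha - beta**2 / 2) * dt + beta * rng.normal(0.0, np.sqrt(dt), size=(n_paths, n_steps))
Y = np.cumsum(dY, axis=1)
hit = Y >= np.log(R / x)
assert hit.any(axis=1).all()   # with this drift, every path crosses well before T
tau_mc = (np.argmax(hit, axis=1) + 1) * dt   # first grid time past the barrier

tau_exact = np.log(R / x) / (alpha - beta**2 / 2)   # from Dynkin's formula
print(tau_mc.mean(), tau_exact)
```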
We have given the above proof in great detail to illustrate some important techniques, such as truncation of stopping times and limiting procedures with bounded convergence. For more information
on this model, see Exercise 7.9 of [4]. Using the A operator, we will now prove an interesting
connection between classic PDE theory and stochastic processes.
5. Feynman-Kac and PDE
Theorem 5.1 (Feynman-Kac). Let Xt ∈ Rn be an Itô diffusion with generator A. Let f ∈ C02 (Rn )
and q ∈ C(Rn ) such that q is lower bounded. Set
\[ v(t,x) = E^x\left[\exp\left(-\int_0^t q(X_s)\,ds\right)f(X_t)\right] \]
Then we have that
(i)
\[ \frac{\partial v}{\partial t} = Av - qv \tag{5.1} \]
\[ v(0,x) = f(x) \tag{5.2} \]
(ii) If there is another function g(t, x) ∈ C 1,2 (R×Rn ) such that g is bounded on K ×Rn for every
compact K ⊂ R, then if g satisfies (5.1) and (5.2) above we must have g(t, x) = v(t, x).
Proof. We let Zt denote the stochastic process exp(−∫_0^t q(Xs ) ds). We set Nt = −∫_0^t q(Xs ) ds
and h(x) = exp(x). Then using the Itô lemma, we see that
\[ d(h(N_t)) = h(N_t)\,dN_t + \frac{1}{2}h(N_t)(dN_t)^2 = -h(N_t)\,q(X_t)\,dt \]
so that we have
\[ dZ_t = -Z_t\,q(X_t)\,dt \tag{5.3} \]
Note that d(f (Xt )) can be calculated using Lemma 4.6. Using Proposition 2.10, we see that if
Yt := f (Xt ),
d(Zt Yt ) = Zt dYt + Yt dZt + (dZt )(dYt ) = Zt dYt + Yt dZt
because dZt = −Zt q(Xt )dt implies that (dZt )(dYt ) = 0 by the Itô lemma. Then Zt Yt is an Itô
process. By the above assumptions, Zt Yt is bounded for all ω, so using Fubini’s theorem, equation
(4.2) immediately implies that E x (f (Xt )Zt ) is differentiable. Then we directly calculate Av using
the limit definition:
1 x
1
E [v(t, Xs ) − v(t, x)] = E x [E Xs [Zt f (Xt )]] − E x [f (Xt )Zt ]
s
s
Z t
1
= E x [E x [f (Xt+s ) exp −
q(Xs+r )dr |Fs ] − E x [f (Xt )Zt |Fs ]]
s
0
The second equality follows from the Markov property and properties of conditional expectation.
We can rewrite this as
Z s
1 x
x
x
q(Xr )dr
− E [f (Xt )Zt ]|Fs
= E E f (Xt+s )Zt+s exp
s
0
Z s
1
1
q(Xr )dr − 1
= E x [Zt+s f (Xt+s ) − Zt f (Xt )] + E x f (Xt+s )Zt+s exp
s
s
0
A calculation similar to the one that produced (5.3) shows that we can rewrite
\[ \exp\left(\int_0^s q(X_r)\,dr\right) = 1 + \int_0^s Z_r^{-1}\,q(X_r)\,dr \]
Since the integrand is continuous for a.s. ω, it follows that s ↦ exp(∫_0^s q(Xr ) dr) − 1 is differen-
tiable, and its derivative at s = 0 is Z0^{-1} q(X0 ) = q(x). It follows from the lower boundedness of q
that for each t, Zt f (Xt ) is bounded for a.s. ω. Then since we already showed that E x [f (Xt )Zt ] is
differentiable with respect to t, we can apply bounded convergence to show that
\[ \frac{1}{s}E^x[Z_{t+s}f(X_{t+s}) - Z_t f(X_t)] \to \frac{\partial}{\partial t}E^x[f(X_t)Z_t] \quad \text{as } s \to 0 \]
and
\[ \frac{1}{s}E^x\left[f(X_{t+s})Z_{t+s}\left(\exp\left(\int_0^s q(X_r)\,dr\right) - 1\right)\right] \to q(x)\,v(t,x) \quad \text{as } s \to 0 \]
Then we have shown that Av = ∂v/∂t + qv, i.e. ∂v/∂t = Av − qv. For (ii), we refer the reader to
p. 142 of [4].
In the special case that q = 0 and with v(t, x) as above, we find that ∂v/∂t = Av, which is known
as the Kolmogorov Backward Equation. To illustrate the theorem’s power, we will consider two
examples of deterministic PDE solved with stochastic methods.
Example 5.2 (Cauchy Problem). Let φ ∈ C0² (R^n ). We seek a bounded solution g of the initial
value problem
\[ \frac{\partial g(t,x)}{\partial t} = \frac{1}{2}\Delta_x g(t,x) \tag{5.4} \]
\[ g(0,x) = \varphi(x) \tag{5.5} \]
Let Xt = Bt ∈ R^n . Clearly dBt = In dBt , so using Theorem 4.8 we get that Af = ½∆f for any
f ∈ C0² (R^n ). Let v(t, x) = E^x [φ(Bt )]. Then since φ ∈ C0² (R^n ), we can apply Fubini to show that
\[ v(t,x) = E^x[\varphi(B_t)] = \int_{\mathbb{R}^n}\varphi(y)\,\frac{1}{(2\pi t)^{n/2}}\exp\left(-\frac{|x-y|^2}{2t}\right)dy \]
Then since φ is bounded, it is clear that for each t we have x → v(t, x) ∈ C0² (R^n ). Therefore, for
each t we have that Avt (x) = ½∆vt (x). Applying the Kolmogorov backward equation, we conclude
that v(t, x) is a solution to (5.4) and (5.5) and is bounded because φ is.
Example 5.3. In this example, we will give an explicit formula for the solution u(t, x) to the initial
value problem
\[ \frac{\partial u}{\partial t} = \lambda u + \frac{1}{2}\Delta u \]
\[ u(0,x) = f(x) \]
where x ∈ R^n , λ is a constant and f ∈ C0² (R^n ) is given. Set
\[ v(t,x) = E^x\left[\exp\left(\int_0^t \lambda\,ds\right)f(B_t)\right] \]
i.e. set q(x) = −λ in the Feynman-Kac theorem. Then if we can prove that v(t, x) ∈ C0² (R^n ),
Feynman-Kac immediately implies that v(t, x) is a solution to the above initial value problem (we
showed above that for Brownian motion Ag = ½∆g for g ∈ C0² (R^n ).) Using the density of Bt , we
compute the expectation:
\[ v(t,x) = E^x[\exp(\lambda t)f(B_t)] = \int_{\mathbb{R}^n}\frac{\exp(\lambda t)}{(2\pi t)^{n/2}}\,f(y)\exp\left(-\frac{|x-y|^2}{2t}\right)dy \tag{5.6} \]
whence it is clear that x → v(t, x) ∈ C0² (R^n ) for each t. Then v(t, x) is indeed a solution to the
above differential equation and is given explicitly by equation (5.6).
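Such formulas lend themselves to Monte Carlo evaluation. As a sketch, take n = 1 and f (x) = cos(x) (bounded, though not compactly supported, so this is only illustrative); then E^x [cos(Bt )] = cos(x)e^{−t/2}, so u(t, x) = e^{(λ−1/2)t} cos(x) in closed form, which the sample average reproduces:

```python
import numpy as np

rng = np.random.default_rng(5)
lam, t, x = 0.3, 1.0, 0.7
n_paths = 1_000_000

# Feynman-Kac with q = -lam: u(t, x) = E^x[exp(lam t) f(B_t)], here with f = cos
B_t = x + rng.normal(0.0, np.sqrt(t), size=n_paths)
u_mc = np.exp(lam * t) * np.mean(np.cos(B_t))

# Closed form: E^x[cos(B_t)] = cos(x) exp(-t/2), so u(t, x) = exp((lam - 1/2) t) cos(x)
u_exact = np.exp((lam - 0.5) * t) * np.cos(x)
print(u_mc, u_exact)
```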
The next result tells us that if we change the drift coefficient of an Itô diffusion, the law of
the new process is absolutely continuous w.r.t. the law of the original process. The Radon-Nikodym
derivative provided by the theorem is important for doing computations with the risk-neutral measure
in mathematical finance.
6. The Girsanov Theorem
First we prove a general result from probability theory:
Theorem 6.1 (Bayes’ Rule). Let (Ω, N ) be a measurable space equipped with a measure µ. Let
f ∈ L¹ (µ) and put dν = f dµ. Let H be a sub-σ-algebra of N . Then if X is an r.v. such that
\[ \int_\Omega |X(\omega)|\,f(\omega)\,d\mu(\omega) < \infty \]
we get that
\[ E_\nu[X\,|\,\mathcal{H}]\cdot E_\mu[f\,|\,\mathcal{H}] = E_\mu[fX\,|\,\mathcal{H}] \tag{6.1} \]
Proof. Using the elementary properties of conditional expectation, it will suffice to show that the
left and right hand sides of (6.1) have the same integral over any set H ∈ H:
\[ \int_H E_\mu[Xf\,|\,\mathcal{H}]\,d\mu = \int_H Xf\,d\mu = \int_H X\,d\nu = \int_H E_\nu[X\,|\,\mathcal{H}]\,d\nu \tag{6.2} \]
where the second equality uses the L¹ (µ) condition above. Note that
\[ \int_H E_\nu[X\,|\,\mathcal{H}]\,d\nu = E_\mu\left[E_\nu[X\,|\,\mathcal{H}]\,f\,\mathcal{X}_H\right] = E_\mu\left[E_\mu\left[E_\nu[X\,|\,\mathcal{H}]\,f\,\mathcal{X}_H\,|\,\mathcal{H}\right]\right] \tag{6.3} \]
Since Eν [X|H] and XH are H-measurable, we can pull them out of the inner expectation, giving
\[ E_\mu\left[\mathcal{X}_H\,E_\nu[X\,|\,\mathcal{H}]\cdot E_\mu[f\,|\,\mathcal{H}]\right] = \int_H E_\nu[X\,|\,\mathcal{H}]\cdot E_\mu[f\,|\,\mathcal{H}]\,d\mu \]
Then we have shown that
\[ \int_H E_\mu[Xf\,|\,\mathcal{H}]\,d\mu = \int_H E_\nu[X\,|\,\mathcal{H}]\cdot E_\mu[f\,|\,\mathcal{H}]\,d\mu \]
for all H ∈ H, so (6.1) follows.
We will need the following
Theorem 6.2 (Lévy Characterization of Brownian Motion). Let X(t) ∈ R^n be a stochastic process
on the probability space (Ω, H, P ). Then the following are equivalent:
(i) X(t) is a Brownian motion w.r.t. the probability measure P (i.e. P ◦ Xt^{−1} is the law of
Brownian motion on R^n ).
(ii) (a) X(t) is a martingale w.r.t. its own filtration under the measure P (i.e. EP [Xt |Fs ] = Xs for s ≤ t)
(b) Xi (t)Xj (t) − δij t is a martingale w.r.t. its own filtration under the measure P
We refer the reader to Peres (2010) for more information. We may now state the main theorem
of this section.
Theorem 6.3 (Girsanov Theorem). Let X(t) ∈ R^n be an Itô process such that X0 = 0 and, for
fixed T with 0 ≤ t ≤ T < ∞,
\[ dX_t = a(t,\omega)\,dt + dB_t \]
where Bt ∈ R^n is standard Brownian motion w.r.t. the measure P . Define
\[ M_t = \exp\left(-\int_0^t a(s,\omega)\,dB_s - \frac{1}{2}\int_0^t |a(s,\omega)|^2\,ds\right) \]
and suppose that Mt is a martingale w.r.t. the filtration {Ft^(m) } generated by Bt under the measure
P . Define the measure dQ = MT dP . Then Q is a probability measure on FT^(m) and Xt is n-
dimensional Brownian motion w.r.t. Q.
Proof. It is easy to show that Q is a probability measure on FT^(m) :
\[ Q(\Omega) = E_P[M_T] = E_P[M_0] = 1 \]
We note that on Ft^(m) , we actually have dQ = Mt dP . More precisely, let f be a bounded Ft^(m) -
measurable function. Then we get
\[ E_Q[f] = E_P[fM_T] = E_P\left[E_P[fM_T\,|\,\mathcal{F}_t^{(m)}]\right] = E_P\left[f\,E_P[M_T\,|\,\mathcal{F}_t^{(m)}]\right] = E_P[fM_t] \]
In the context of the above definition of Mt , let dNt = −a dBt − ½|a|² dt. Applying the Itô lemma to the
function g(x) = exp(x), we find that
\[ d(M_t) = d(\exp(N_t)) = \exp(N_t)\,dN_t + \frac{1}{2}\exp(N_t)(dN_t)^2 = M_t\left(-\sum_i a_i\,dB_i(t) - \frac{1}{2}\sum_i a_i^2\,dt\right) + \frac{1}{2}M_t\sum_i a_i^2\,dt = -M_t\left(\sum_i a_i\,dB_i(t)\right) \]
Put Zt = Mt Xt . From Proposition 2.10, we have that
\[ dZ_i(t) = M_t\,dX_i(t) + X_i(t)\,dM_t + (dM_t)(dX_i(t)) = M_t(a_i\,dt + dB_i(t)) + X_i(t)M_t\left(-\sum_j a_j\,dB_j(t)\right) - M_t\,a_i\,dt = M_t\left(dB_i(t) - X_i(t)\sum_j a_j\,dB_j(t)\right) \tag{6.4} \]
In matrix notation, (6.4) can be written as dZt = Mt Vt dBt , where the matrix Vt ∈ R^{n×n} is defined by
\[ (V_t)_{ij} = \delta_{ij} - X_i(t)\,a_j(t) \]
By Theorem 2.7, we have shown that Mt Xi(t) is a martingale w.r.t. {Ft^(m)} under the measure P. Using Bayes' Theorem, we have that for s < t ≤ T,
\[
E_Q[X_i(t) \mid \mathcal{F}_s^{(m)}] = \frac{E_P[X_i(t) M_t \mid \mathcal{F}_s^{(m)}]}{E_P[M_t \mid \mathcal{F}_s^{(m)}]} = \frac{X_i(s) M_s}{M_s} = X_i(s)
\]
so that Xi(t) is a martingale w.r.t. {Ft^(m)} under the measure Q. The proof that Xi Xj − δij t is also a martingale is similar. By the Lévy characterization of Brownian motion, this completes the proof.
Exponential martingales such as Mt above are a commonly used tool in stochastic calculus. The following theorem is useful:
Theorem 6.4 (Novikov Condition). Let Mt be defined as in the previous theorem. A sufficient condition for Mt to be a martingale w.r.t. {Ft^(m)} under the measure P is that
\[
E\left[ \exp\left( \frac{1}{2}\int_0^T a^2(s,\omega)\,ds \right) \right] < \infty
\]
For the proof, see, for instance, Karatzas and Shreve (1991). Any continuous, deterministic function trivially satisfies the Novikov condition.
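In particular, for a constant drift a the Novikov condition holds trivially, and Theorem 6.3 can be checked by simulation: reweighting samples of X_T = aT + B_T by M_T = exp(−aB_T − a²T/2) should recover the first two moments of Brownian motion. A minimal Monte Carlo sketch (the constant drift, sample size, and seed are illustrative choices, not from the text):

```python
import math
import random

random.seed(0)

a, T, n = 0.5, 1.0, 200_000  # constant drift, horizon, sample count (all assumed)

m_sum = mx_sum = mx2_sum = 0.0
for _ in range(n):
    b_T = random.gauss(0.0, math.sqrt(T))        # B_T under P
    x_T = a * T + b_T                            # X_T = aT + B_T, a drifted BM under P
    m_T = math.exp(-a * b_T - 0.5 * a**2 * T)    # exponential martingale M_T
    m_sum += m_T
    mx_sum += m_T * x_T
    mx2_sum += m_T * x_T**2

mean_M = m_sum / n    # E_P[M_T] = Q(Omega), should be close to 1
mean_Q = mx_sum / n   # E_Q[X_T], should be close to 0
var_Q = mx2_sum / n   # E_Q[X_T^2], should be close to T

print(mean_M, mean_Q, var_Q)
```

Under Q = MT dP the drifted process exhibits the moments of standard Brownian motion, exactly as Theorem 6.3 predicts.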
Theorem 6.5 (Girsanov II). Let Xt ∈ Rn be an Itô process
dXt = β(t, ω)dt + θ(t, ω)dBt
where Bt ∈ Rm. Now suppose that we can find α(t, ω) ∈ V^n[0, T] and u(t, ω) ∈ V^m[0, T] such that
θ(t, ω)u(t, ω) = β(t, ω) − α(t, ω)
Define Mt as in the previous theorem with u(s, ω) in place of a(s, ω) and assume that this is a martingale w.r.t. {Ft^(m)} under the measure P. Define dQ = MT dP on FT^(m). Then the process
\[
\widehat{B}_t := \int_0^t u(s,\omega)\,ds + B_t \qquad (6.4)
\]
is standard Brownian motion w.r.t. Q, and we can write
\[
dX(t) = \alpha(t,\omega)\,dt + \theta(t,\omega)\,d\widehat{B}_t \qquad (6.5)
\]
Proof. In view of Theorem 6.3, it suffices to prove that (6.5) holds. This is clear in view of (6.4):
\[
\begin{aligned}
dX(t) &= \beta(t,\omega)\,dt + \theta(t,\omega)\,dB_t \\
&= \beta(t,\omega)\,dt + \theta(t,\omega)\big(d\widehat{B}_t - u(t,\omega)\,dt\big) \\
&= \beta(t,\omega)\,dt + \theta(t,\omega)\,d\widehat{B}_t - \big(\beta(t,\omega) - \alpha(t,\omega)\big)\,dt \\
&= \alpha(t,\omega)\,dt + \theta(t,\omega)\,d\widehat{B}_t
\end{aligned}
\]
The final version applies specifically to Itô diffusions and contains a helpful uniqueness statement:
Theorem 6.6 (Girsanov III). Let Xt ∈ Rn be an Itô diffusion and define Yt such that
\[
\begin{aligned}
dX_t &= b(X_t)\,dt + \sigma(X_t)\,dB_t \\
dY_t &= \big(b(Y_t) + \gamma(t,\omega)\big)\,dt + \sigma(Y_t)\,dB_t
\end{aligned}
\]
where Bt ∈ Rm and γ ∈ V^n[0, T]. Suppose we can find u(t, ω) ∈ V^m[0, T] such that σ(Yt)u(t, ω) = γ(t, ω). Define Mt, Q, and B̂t as in the last theorem, where Mt is a martingale w.r.t. {Ft^(m)} under the measure P. Then we get that
\[
dY_t = b(Y_t)\,dt + \sigma(Y_t)\,d\widehat{B}_t
\]
Also, with x ∈ Rn a starting point for both diffusions, the Qx law of Ytx is equal to the P x law of Xtx.
Proof. This follows by applying Girsanov II with appropriate choices of α and β. The statement about equivalence of laws is an immediate consequence of weak uniqueness of solutions to SDE (Lemma 3.4), which by Theorem 3.3 we can apply to the above diffusions.
The theorem essentially states that we can change the drift coefficient of an Itô diffusion and it will keep the same law up to a change of measure.
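This law equality can be illustrated numerically: under Q, the perturbed diffusion Y should look like X under P. A rough Euler-scheme sketch in one dimension, where the choices b(x) = −x, σ = 1, constant γ = 1/2 (so u = γ/σ = 1/2), and all step and sample sizes are illustrative assumptions, not from the text:

```python
import math
import random

random.seed(1)

T, steps, paths = 1.0, 50, 20_000
dt = T / steps
gamma = 0.5        # constant drift perturbation
u = gamma          # sigma = 1, so sigma * u = gamma forces u = 0.5

ex_sum = 0.0       # running sum for E_P[X_T]
my_sum = 0.0       # running sum for E_P[M_T Y_T] = E_Q[Y_T]
for _ in range(paths):
    x = y = 0.0
    b_T = 0.0
    for _ in range(steps):
        db = random.gauss(0.0, math.sqrt(dt))
        x += -x * dt + db              # dX = b(X)dt + dB with b(x) = -x
        y += (-y + gamma) * dt + db    # dY = (b(Y) + gamma)dt + dB
        b_T += db
    m_T = math.exp(-u * b_T - 0.5 * u**2 * T)  # M_T for constant u
    ex_sum += x
    my_sum += m_T * y

mean_X_P = ex_sum / paths   # E_P[X_T], ~0 for this mean-reverting process
mean_Y_Q = my_sum / paths   # E_Q[Y_T], should match E_P[X_T] by Girsanov III

print(mean_X_P, mean_Y_Q)
```

Both averages should agree (up to Monte Carlo and discretization error), reflecting that the Qx law of Y matches the P x law of X.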
Remark 6.7. With Q, P, and MT as above, we clearly have that Q ≪ P. Suppose that A ∈ FT^(m) and note that if Q(A) = 0 then EP [MT XA ] = 0, so that MT XA = 0 for P -a.s. ω. Since MT > 0 for P -a.s. ω, we have that XA = 0 for P -a.s. ω. Then immediately P (A) = 0. This shows that P ≪ Q, so that P ∼ Q. Because of this, Q is sometimes called an equivalent martingale measure.
Example 6.8. Let
\[
dX(t) = \begin{pmatrix} 0 \\ 1 \end{pmatrix} dt + \begin{pmatrix} 1 & 3 \\ -1 & -2 \end{pmatrix} dB_t
\]
If we set
\[
\begin{pmatrix} 1 & 3 \\ -1 & -2 \end{pmatrix} u(t,\omega) = \begin{pmatrix} 0 \\ 1 \end{pmatrix}
\]
we get that u = (−3, 1)^T. Since u trivially satisfies the Novikov condition,
\[
d\widehat{B}_t := \begin{pmatrix} -3 \\ 1 \end{pmatrix} dt + dB_t
\]
is standard Brownian motion w.r.t. the measure dQ = MT dP, where MT is the exponential martingale defined as in the previous theorems. Girsanov also implies that we can rewrite X(t) as
\[
dX(t) = \begin{pmatrix} 1 & 3 \\ -1 & -2 \end{pmatrix} d\widehat{B}_t
\]
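The only computation in this example is a 2×2 linear solve; a quick check by Cramer's rule (hypothetical helper code, not part of the text):

```python
# Solve theta @ u = beta for the 2x2 system in Example 6.8 by Cramer's rule.
theta = [[1.0, 3.0],
         [-1.0, -2.0]]
beta = [0.0, 1.0]

det = theta[0][0] * theta[1][1] - theta[0][1] * theta[1][0]   # determinant = 1
u1 = (beta[0] * theta[1][1] - theta[0][1] * beta[1]) / det    # first component
u2 = (theta[0][0] * beta[1] - beta[0] * theta[1][0]) / det    # second component

# Residuals confirm theta @ (u1, u2) reproduces beta
r1 = theta[0][0] * u1 + theta[0][1] * u2 - beta[0]
r2 = theta[1][0] * u1 + theta[1][1] * u2 - beta[1]
print(u1, u2, r1, r2)
```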
7. Risk-Neutral Measure and Black-Scholes
In this section, we will apply some of the previous results to asset pricing theory and mathematical
finance. We will see that stochastic processes provide a natural framework for the analysis of
derivative securities. Our discussion is brief and informal. For a comprehensive introduction, see
[2]. Let (Ω, N , P ) be a probability space. In this context, we will model risky assets as random
variables on Ω. Consider the European call: at t = 0 an investor purchases the right to buy a certain security at time T > 0 for a specified price K. Since the buyer is not obligated to exercise this right if the price at time T satisfies XT (ω) < K, the payoff at time T is given by:
max(XT (ω) − K, 0)   (7.1)
We let X(t, ω) be an Itô process representing the value of the security at time t. How should we price such a contingent claim with payoff at time T given by (7.1)? One such procedure, known as pricing by arbitrage, takes as given a collection of primitive securities (bonds, currencies, etc.) with known price processes that can be used to price the claim. More precisely:
Definition 7.1 (Replicable Claims). A contingent claim CT with payoff at maturity given by
CT (ω) is said to be replicable if
(i) There exists a portfolio of primitive securities with price process S(t, ω) such that CT (ω) =
S(T, ω) a.s.
(ii) The portfolio is self-financing i.e. there are no net cash infusions into the portfolio between
t = 0 and maturity.
A simple economic argument can be used to show that in the absence of arbitrage the contingent
claim’s price is uniquely determined. In this situation, the claim is said to be priced by arbitrage. A
market in which all contingent claims can be priced by arbitrage is said to be complete. Using this
approach to price derivatives has the drawback of being computationally inefficient as a different
portfolio must be constructed to price each contingent claim. Luckily, we have another method:
Definition 7.2 (Risk-Neutral Measure). A measure Q on the space (Ω, N , P ) is said to be Risk-Neutral if
(i) Q ∼ P, i.e. Q ≪ P and P ≪ Q.
(ii) Any discounted price process X(t, ω) is a martingale w.r.t. its own filtration under the measure Q.
To explain the second condition, let us suppose that the market has some risk free security such
as a bond. By way of normalization, we discount every price process by the bond’s price process. If
we wish to create a “risk-neutral” measure and avoid arbitrage opportunities, the expected change
in value of every asset should be the same. Because the bond’s discounted expected value change
is 0, this must hold for all other assets, which is reflected in condition (ii).
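Condition (ii) pins down the measure completely in simple models. As a sketch, consider a one-period binomial market (a toy model not discussed in the text; all parameters are illustrative) with up/down factors u, d and rate r. The martingale requirement e^{−r} EQ[S1] = S0 forces the up-probability q = (e^r − d)/(u − d), and claims are then priced by discounted Q-expectation:

```python
import math

S0, u, d, r = 100.0, 1.2, 0.8, 0.0   # toy parameters (assumed for illustration)

# Unique risk-neutral up-probability: makes the discounted price a martingale
q = (math.exp(r) - d) / (u - d)
disc_expected = math.exp(-r) * (q * u * S0 + (1 - q) * d * S0)  # should equal S0

# Risk-neutral price of a one-period European call with strike K
K = 100.0
call_price = math.exp(-r) * (q * max(u * S0 - K, 0.0) + (1 - q) * max(d * S0 - K, 0.0))

print(q, disc_expected, call_price)
```

With these numbers q = 1/2, the discounted expected price recovers S0 = 100, and the call is worth 10, mirroring in miniature the continuous-time pricing recipe below.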
Definition 7.3 (Risk-Neutral Price). Let X(t, ω) be the price process of an asset in a complete, arbitrage free market. Let Q be a risk-neutral measure for the associated probability space. Given a contingent claim CT on the security Xt, we define the risk-neutral price F (CT ) by
\[
F(C_T) = E_Q\left[ \frac{C_T}{V(T)} \right]
\]
where V (t) is the price process of a fixed risk-free asset.
Note that the existence of such a measure Q is a consequence of a result known as the Fundamental Theorem of Asset Pricing. This brings us to a crucial result, the proof of which can be
found in any text on mathematical finance:
Theorem 7.4. Let CT be a replicable claim in a complete, arbitrage free market. Then the arbitrage
free price (using a replicating portfolio of primitive securities) and the risk-neutral price F (CT ) are
the same.
We will see that the Girsanov theorem gives us the tools to compute a claim’s risk-neutral price.
Example 7.5 (Black-Scholes Formula). Consider the Black-Scholes model where a certain security's price process Xt obeys geometric Brownian motion:
dXt = αXt dt + βXt dBt
where α and β are constants. We think of αXt as the mean rate of change in price and βXt as the asset's uncertainty. Let V (t) be the price process of a bond, which we assume is a risk-free security:
dV (t) = rV (t)dt
Then the discounted price process for Xt is Zt = Xt/Vt. We can apply the Itô lemma to the function g(x1, x2) = x1/x2 to conclude that
\[
\begin{aligned}
dZ_t &= \frac{dX_t}{V_t} - \frac{X_t}{V_t^2}\,dV_t \\
&= \alpha Z_t\,dt + \beta Z_t\,dB_t - r Z_t\,dt \\
&= (\alpha - r) Z_t\,dt + \beta Z_t\,dB_t
\end{aligned}
\]
Let us use the Girsanov Theorem to get rid of the drift coefficient of the above diffusion. We need to find a u(t, ω) such that
βZt u(t, ω) = (α − r)Zt
and clearly u = (α − r)/β will work. This is a constant, so it trivially satisfies the Novikov condition, whence by the Girsanov Theorem we get that
\[
\widehat{B}_t = \frac{\alpha - r}{\beta}\,t + B_t
\]
is standard Brownian motion w.r.t. the measure Q, where dQ = MT dP and Mt is as in the Girsanov Theorem. Then the discounted price process Zt has the representation dZt = βZt dB̂t. Using our formula for the solution to the geometric Brownian motion SDE, this is just
\[
Z_t = Z_0 \exp\left( -\frac{1}{2}\beta^2 t + \beta \widehat{B}_t \right) \qquad (7.2)
\]
From Theorem 2.7, it is clear that Zt is a martingale w.r.t. its own filtration under the measure Q. From Remark 6.7, we also have that Q ∼ P. Then we have shown that Q is a risk-neutral measure according to Definition 7.2. By Theorem 7.4, we now know how to price replicable claims on the security Xt. We once again consider the European claim described above. According to the theorem, the expectation to be calculated is
\[
\frac{1}{V_T} E_Q[\max(X_T - K, 0)] = E_Q[\max(Z_T - \exp(-rT)K, 0)]
\]
by evaluating the ODE for the bond price. From equation (7.2) and using the fact that B̂t is standard Brownian motion w.r.t. the measure Q, the above expectation is given by
\[
\int_{\mathbb{R}} \max\left( Z_0 \exp\left(-\frac{1}{2}\beta^2 T + \beta y\right) - \exp(-rT)K,\; 0 \right) \frac{1}{\sqrt{2\pi T}} \exp\left( \frac{-y^2}{2T} \right) dy \qquad (7.3)
\]
A trivial computation shows that
\[
Z_0 \exp\left(-\frac{1}{2}\beta^2 T + \beta y\right) - \exp(-rT)K \geq 0 \quad \text{iff} \quad y \geq a
\]
where a is defined by
\[
a = \frac{1}{\beta}\left( -\ln\left(\frac{Z_0}{K}\right) + \left(\frac{1}{2}\beta^2 - r\right) T \right)
\]
Thus, we may rewrite eqn. (7.3) as
\[
\int_a^\infty \frac{Z_0}{\sqrt{2\pi T}} \exp\left( -\frac{1}{2}\beta^2 T + \beta y - \frac{y^2}{2T} \right) dy \;-\; \exp(-rT)\,K \int_a^\infty \frac{1}{\sqrt{2\pi T}} \exp\left( \frac{-y^2}{2T} \right) dy \qquad (7.4)
\]
where we have merged the exponentials. Notice that −(1/(2T))(y − βT)² = −(1/2)β²T + βy − y²/(2T). Removing the constants from the integrals, the above are just integrals of the density functions for two normal random variables with distribution N (βT, T ) and N (0, T ) respectively. We can thus rewrite equation (7.4) in terms of the CDF of the standard normal distribution as
\[
X_0 \Phi(-w_1) - K \exp(-rT)\Phi(-w_2) \qquad (7.5)
\]
where w1 = (a − βT)/√T and w2 = a/√T, and we have used the fact that Z0 = X0, the initial price of the security. Then Theorem 7.4 implies that the arbitrage-free price of the European call described above is given by
F (CT ) = X0 Φ(−w1 ) − K exp(−rT )Φ(−w2 )
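The closed-form price (7.5) can be cross-checked against a direct numerical evaluation of the integral (7.3). A sketch using only the standard library; the parameters X0 = K = 100, r = 0.05, β = 0.2, T = 1 and V(0) = 1 are illustrative assumptions:

```python
import math

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

X0, K, r, beta, T = 100.0, 100.0, 0.05, 0.2, 1.0
Z0 = X0  # discounted initial price, taking V(0) = 1

# Closed form (7.5): F = X0*Phi(-w1) - K*exp(-rT)*Phi(-w2)
a = (-math.log(Z0 / K) + (0.5 * beta**2 - r) * T) / beta
w1 = (a - beta * T) / math.sqrt(T)
w2 = a / math.sqrt(T)
closed_form = X0 * norm_cdf(-w1) - K * math.exp(-r * T) * norm_cdf(-w2)

# Midpoint-rule evaluation of the integral (7.3) over a truncated range
h, lo, hi = 1e-3, -10.0, 15.0
total, y = 0.0, lo + h / 2
while y < hi:
    payoff = max(Z0 * math.exp(-0.5 * beta**2 * T + beta * y)
                 - math.exp(-r * T) * K, 0.0)
    total += payoff * math.exp(-y * y / (2 * T)) / math.sqrt(2 * math.pi * T) * h
    y += h

print(closed_form, total)
```

Both values should agree, and for these parameters match the textbook Black-Scholes call value of roughly 10.45.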
Acknowledgments
I would like to thank Peter May for organizing this REU, which has been both productive and
enjoyable. I also want to thank my mentors Marcelo Alvisio, Andrew Lawrie, and Casey Rodriguez
for their enthusiasm to meet with me every week and for the mathematical insights they shared
with me during the course of the REU.
References
[1] Kai Lai Chung, Lectures from Markov Processes to Brownian Motion, Springer, 1982.
[2] Darrell Duffie, Dynamic Asset Pricing Theory, Princeton University Press, 2001.
[3] Gerald B. Folland, Real Analysis: Modern Techniques and Their Applications, Wiley-Interscience, Second Edition, 1999.
[4] Bernt Øksendal, Stochastic Differential Equations, Wiley, Sixth Edition, 2007.
[5] Peter Mörters and Yuval Peres, Brownian Motion, Cambridge University Press, 2010.
[6] Rangarajan Sundaram, "Equivalent Martingale Measures and Risk-Neutral Pricing: An Expository Note", Journal of Derivatives, 1997.
[7] David Williams, Probability with Martingales, Cambridge University Press, 1991.