Enlargement of Filtration and Insider Trading
A. Es-Saghouani
Under supervision of
Dr. P.J.C. Spreij
Faculteit der Natuurwetenschappen, Wiskunde en Informatica,
Korteweg-de Vries Institute for Mathematics,
Plantage Muidergracht 24, 1018 TV Amsterdam, The Netherlands
A thesis submitted for candidacy for the degree
of Master of Mathematics
Amsterdam, 6 January 2006
Nederlandse Samenvatting
The subject of this thesis is Enlargement of Filtrations and Insider Trading. Most financial
phenomena are studied with the help of the theory of martingales. An important ingredient of
this theory is the filtration, an increasing family of σ-algebras. In a financial market a σ-algebra
corresponds to all public information available to all traders up to and including a time t. The
purpose of this thesis is to give a short introduction to the enlargement of a filtration and how
this is related to insider trading. We do this under the assumption that the insider has (somewhat)
more information at his disposal than the ordinary trader, for example the price of a stock at a
future time. In mathematical terms this extra information is given by a random variable. We
consider the filtration generated by the original filtration and the σ-algebra generated by this
random variable, and we study what the objects of the original filtration look like in the enlarged
filtration. Among other things we study how the utility maximization problems of the two traders
can be solved, and we also look at the difference between the two.
Abstract
We consider a probability space (Ω, F, P) equipped with two filtrations F = (Ft )t≥0 and G =
(Gt = Ft ∨ σ(G))t≥0 , where G is a random variable taking values in a Polish space. We give
a condition on G such that every F-semimartingale remains a G-semimartingale. Then we
transfer martingale representation theorems from F to G. We then use these theorems to solve
the problem of maximizing the expected logarithmic utility for an investor having the filtration
G at his disposal, and rewrite his additional expected logarithmic utility, with respect to an
investor who has only the filtration F at his disposal, in terms of relative entropy. Finally, we
give another approach to the problem of enlargement of filtrations.
Contents

Nederlandse Samenvatting

Abstract

Introduction

1 Enlargement of filtrations
  1.1 Initial enlargement of filtrations (I.E.F)
  1.2 Stochastic exponential representation of the process 1/pG
  1.3 Examples

2 Martingale representation theorems for I.E.F
  2.1 Notations
  2.2 The martingale preserving probability measure
  2.3 Martingale representation theorems for I.E.F

3 Insider trading and utility maximization
  3.1 The ordinary investor problem
  3.2 Utility Maximization
    3.2.1 The model
    3.2.2 Solution of the logarithmic utility maximization problem
    3.2.3 Insider's additional expected logarithmic utility
    3.2.4 Explicit calculations of the insider's expected logarithmic utility

4 Enlargement of filtrations and Girsanov's theorem
  4.1 Preliminaries and notations
  4.2 Girsanov-type theorem for change of filtrations
  4.3 Conclusion

A Some Important Theorems and Lemmas
  A.1 Appendix Chapter 1
  A.2 Appendix Chapter 2
  A.3 Appendix Chapter 4
Introduction
In the past decades an extensive mathematical theory using martingale techniques has been
developed for the problems of characterization of no arbitrage, hedging and pricing of financial
derivatives and utility maximization of investors in financial markets. One of the important
features of this theory is the assumption of one common information flow (filtration) on which
the portfolio decisions of all economic agents are based. In this thesis we give a short introduction
to enlargement of filtration and insider trading. We do this by considering a financial market that
is a probability space (Ω, F, P) equipped with a filtration F = (Ft )t∈[0,T ] (public information).
While the ordinary trader makes his decisions based on the information flow (Ft )t∈[0,T ] , the
insider possesses from the beginning extra information about the outcome of some random
variable G taking values in a Polish space (U, U), for example, the future price of a stock. The
insider’s information flow is therefore described by the enlarged filtration G = (Gt )t∈[0,T ] =
(Ft ∨ σ(G))t∈[0,T ] . This thesis is based on articles of Amendinger [1, 2, 3], Jacod [7, 9], Pikovsky
and Karatzas [13] and Ankirchner [4].
The problem of the enlargement of filtrations consists of the following three important issues:
1. Give conditions on the random variable G such that every F-semimartingale becomes a
G-semimartingale.
2. If part 1 is satisfied, give the decomposition of G-semimartingales.
3. If a version of the martingale representation holds under the filtration F, find a version of
the martingale representation theorem with respect to the enlarged filtration G.
For insider trading, we will be interested in the maximization of the insider's expected logarithmic
utility and give some examples where we obtain explicit formulae for the utility gain.
The outline of this thesis is as follows. In Chapter 1 we give most of the results of Jacod [9]
and Amendinger [3]. For this we fix a time horizon T . If the regular conditional distributions
of the random variable G given Ft , t ∈ [0, T ] are absolutely continuous with respect to the law
of G, Jacod [9] proved that every (P, F)-semimartingale remains a (P, G)-semimartingale on
the interval [0, T ], and gave the canonical decomposition of (P, F)-semimartingales in G which
involves the conditional density process q^l, l ∈ U. For most of the other results we will assume
equivalence between the regular conditional probabilities of G given Ft , t ∈ [0, T ), and the
unconditional law of G. Based on this assumption we also give an exponential representation of
the conditional density process pG.
In Chapter 2, we give sufficient conditions such that the existence of martingale measures,
under which the stock price process S is a martingale, in the filtration F implies their existence
in the enlarged filtration G. Moreover, we show that the density process of an equivalent
G-martingale measure that decouples the σ-algebras FT and σ(G) is the product of the density
process of an equivalent F-martingale measure and the process 1/pG. We use these two
results to show the inheritance of the martingale representation theorem in the enlarged filtration
G.
In Chapter 3, we treat the insider’s problem of maximizing his expected logarithmic utility. In
this case we give the optimal strategy and also an expression for the utility gain. We also
establish a relationship between the additional expected logarithmic utility of the insider and the
relative entropy of the probability measure P with respect to the probability measure P̃ defined
on (Ω, GT ) by the process 1/pG .
In Chapter 4 we treat another aspect of the problem of enlargement of filtrations, studied
by Ankirchner [4]. He considered enlargement of the filtration F by another filtration K; this is
done by studying the filtration
G = {Gt := ∩s>t (Fs ∨ Ks ), t ≥ 0},
and he replaces Jacod's condition by a condition inspired by the notion of the decoupling
measure. The idea is that the enlargement of the filtration can be interpreted as a change from
the decoupling measure to the original measure. Then Girsanov’s theorem is used to obtain the
semimartingale decomposition relative to the enlarged filtration G.
The notations in this thesis will be the same for all the chapters.
Chapter 1
Enlargement of filtrations
1.1
Initial enlargement of filtrations (I.E.F)
Let (Ω, F, P) be a probability space with a filtration F = (Ft )t∈[0,T ] satisfying the usual
conditions, i.e., the filtration F is right-continuous ($\mathcal F_t = \bigcap_{\varepsilon>0}\mathcal F_{t+\varepsilon}$) and each Ft contains all
(F, P)-null sets. T ∈ (0, ∞] is a fixed time horizon, and we assume that the σ-algebra F0 is
trivial. Let G be an F-measurable random variable with values in a Polish space (U, U).
Definition 1.1.1. A continuous adapted stochastic process X is called a semimartingale if it
has a representation of the form X = X0 + M + A, where M is a continuous local martingale
and A is a continuous adapted process of locally bounded variation with M0 = A0 = 0.
Definition 1.1.2. Given the notations above, we call the filtration G defined by :
Gt := Ft ∨ σ(G), t ∈ [0, T ]
the initially enlarged filtration of F.
In this chapter we will assume that the enlarged filtration G satisfies the usual conditions.
Therefore we redefine G as follows: for every t,
\[
\mathcal G_t = \bigcap_{\varepsilon>0}\big(\mathcal F_{t+\varepsilon} \vee \sigma(G)\big).
\]
The following theorem is due to J. Jacod. Before we give the content of the theorem we introduce the following hypothesis, called "l'hypothèse (H')": every F-semimartingale is a G-semimartingale.
Theorem 1.1.3. L’hypothèse (H’) is satisfied under the following condition:
(A) For every t there exists a positive σ-finite measure ηt on (U, U) such that P[G ∈ ·|Ft ](ω) ≪ ηt (·) almost surely in ω, where P[G ∈ ·|Ft ](ω) stands for a regular version of the conditional law
of G with respect to Ft .
Proof. For a detailed proof of this result we refer the reader to the proof of Theorem 1.1,
Jacod [9].
Remark 1.1.4. For the existence of the regular conditional probabilities of G with respect to
each Ft see Theorem A.1.1 and Corollary A.1.2 in the Appendix, or Shiryaev [15].
Proposition 1.1.5. The condition (A) is equivalent to the condition (A’):
There exists a positive σ-finite measure η on (U, U) such that P[G ∈ ·|Ft ](ω) ≪ η(·) for all
t > 0, ω ∈ Ω.
In this case we can take for η the law of the variable G.
Proof. It is clear that we only need to prove that (A) implies (A'), with η the law of G. Fix
t > 0 and suppose that (A) is satisfied. Then by Doob's Theorem, see Theorem A.1.3 in
the Appendix, there exists an Ft ⊗ U-measurable positive function (ω, x) ↦ q_t^x(ω) such that
P[G ∈ dx|Ft ](ω) = ηt (dx) q_t^x(ω). Let b_t(x) = E[q_t^x] and
\[
\hat q_t^x(\omega) = \begin{cases} \dfrac{q_t^x(\omega)}{b_t(x)} & \text{if } b_t(x) > 0,\\[4pt] 0 & \text{otherwise.} \end{cases}
\]
Because q_t^x = 0 a.s. if b_t(x) = 0, we have q_t^x = \hat q_t^x\, b_t(x) a.s., and the measure ηt (dx) b_t(x) \hat q_t^x(ω) is
still a version of P[G ∈ dx|Ft ](ω). Hence for every U-measurable positive function g we have
\[
\int g(x)\,\eta(dx) = E\big[g(G)\big] = E\Big[\int g(x)\, P[G \in dx\,|\,\mathcal F_t]\Big] = \int g(x)\, E[q_t^x]\,\eta_t(dx) = \int g(x)\, b_t(x)\,\eta_t(dx),
\]
whence ηt (dx) b_t(x) = η(dx), and thus P[G ∈ dx|Ft ](ω) = η(dx)\, \hat q_t^x(ω).
N.B. Doob's theorem gives joint measurability of (ω, x) ↦ q_t^x(ω), whereas the
Radon-Nikodym Theorem only gives that x ↦ q_t^x(ω) is measurable for each ω.
Now for the rest we will assume the following:
Assumption 1.1.6. The condition (A') is satisfied: there exists a σ-finite positive measure η
on (U, U) such that for all t ∈ [0, T ), the regular conditional distribution of G given Ft is absolutely
continuous with respect to η for P-almost all ω ∈ Ω, i.e.,
\[
P[G \in \cdot\,|\,\mathcal F_t](\omega) \ll \eta(\cdot) \quad \text{for } P\text{-a.a. } \omega \in \Omega. \tag{1.1}
\]
Remark 1.1.7. The measure η is not necessarily the law of G; sometimes another, simpler
measure can be taken, such as the Lebesgue measure on U = R^d.
We introduce the following notation: let H := (Ht )t∈[0,T ] , where H ∈ {F, G}, be a generic
filtration, H0 := (Ht )t∈[0,T ) , Ω̂ := Ω × U , Ĥt := ∩_{ε>0}(H_{t+ε} ⊗ U), Ĥ := (Ĥt )t∈[0,T ] and Ĥ0 :=
(Ĥt )t∈[0,T ) . The fact that the time horizon T is included or excluded is of importance, as we
shall see in the section of examples given later on.
Let K = (Kt )t∈[0,T ] = (K_t^1 , . . . , K_t^d)^⊤_{t∈[0,T ]} (where by ⊤ we mean the transpose) be a d-dimensional continuous local F-martingale
with quadratic variation ⟨K⟩ = (⟨K^i , K^j⟩)_{i,j=1,...,d} taken with respect to F.
Definition 1.1.8. We call the optional σ-algebra on Ω̂ × [0, T ) the σ-algebra generated by the
càdlàg Ĥ0-adapted stochastic processes, and we denote it by O(Ĥ0 ).
Definition 1.1.9. We call the predictable σ-algebra on Ω̂ × [0, T ), the σ-algebra generated by
the left-continuous Ĥ0 -adapted stochastic processes, and we denote it by P(Ĥ0 ).
The following lemma provides a ‘nice’ version of the conditional density process q l resulting
from the absolute continuity in Equation (1.1).
Lemma 1.1.10 (Lemme 1.8 and Corollaire 1.11 of Jacod [9]). Suppose Assumption 1.1.6
is satisfied. Then:
1. There exists a non-negative O(F̂0 )-measurable function (ω, l, t) ↦ q_t^l(ω), right-continuous with left limits in t, such that:
a. for all l ∈ U , q^l is an F0-martingale, the processes q^l and q^l_- are strictly positive on
[0, T^l ), and q^l = 0 on [T^l , T ), where
\[
T^l := \inf\{t \ge 0 : q^l_{t-} = 0\} \wedge T; \tag{1.2}
\]
b. for all t ∈ [0, T ), the measure q_t^l(·)η(dl) on (U, U) is a version of the conditional
distribution P[G ∈ dl|Ft ].
2. T^G = T P-a.s., where T^G(ω) = T^{G(ω)}(ω) = T^l(ω) on {G = l}.
Remark 1.1.11. The conditional density process q l is the key to the study of continuous
local F-martingales in the enlarged filtration G0 . The following theorem shows that under
Assumption 1.1.6, every continuous local F-martingale is a G0 -semimartingale, and explicitly
gives its canonical decomposition.
Theorem 1.1.12 (Théorème 2.1 of Jacod [9]). Suppose Assumption 1.1.6 is satisfied. For
i = 1, . . . , d, there exists a P(F̂0 )-measurable function (ω, l, t) ↦ (k_t^l(ω))^i such that
\[
\langle q^l, K^i\rangle = \int (k^l)^i\, q^l_-\, d\langle K^i\rangle. \tag{1.3}
\]
For every such function k^i , we have
1. \(\int_0^t |(k_s^G)^i|\, d\langle K^i\rangle_s < \infty\) P-a.s. for all t ∈ [0, T ), where k^G = k^l on {G = l}, and
2. K^i is a G0-semimartingale, and the continuous local G0-martingale in its canonical decomposition is given by
\[
\tilde K_t^i := K_t^i - \int_0^t (k_s^G)^i\, d\langle K^i\rangle_s, \qquad t \in [0, T). \tag{1.4}
\]
Remark 1.1.13. If the absolute continuity in Assumption 1.1.6 holds for all t ∈ [0, T ], then K̃
is even a local G-martingale.
Let us now take a look at the conditional density process q^l. Since F0 is trivial, we have
\[
\int_A P[G \in dl] = P[G \in A] = P[G \in A\,|\,\mathcal F_0] = \int_A q_0^l\,\eta(dl)
\]
for all A ∈ U. By choosing U smaller if necessary, we can therefore assume that q_0^l > 0 for all
l ∈ U , so we obtain for P-a.a. ω and all t ∈ [0, T )
\[
P[G \in A\,|\,\mathcal F_t](\omega) = \int_A q_t^l(\omega)\,\eta(dl) = \int_A p_t^l\, P[G \in dl],
\]
where
\[
p_t^l(\omega) := \frac{q_t^l(\omega)}{q_0^l}. \tag{1.5}
\]
From this, we observe that we can take p as the process q appearing in Lemma 1.1.10 and in
Theorem 1.1.12 by choosing for η the law of G. In the following we will write just pG, but we
mean by this that "pG = p^l on {G = l}". By part 2 of Lemma 1.1.10, the first time pG hits 0
is P-a.s. equal to T, so that we can consider the process 1/pG on [0, T ). If the regular conditional
distributions of G given Ft are equivalent to the law of G, then the process 1/pG turns out to be a
positive G0-martingale starting from 1 and thus defines a probability measure P̃t on (Ω, Gt ) for
all t ∈ [0, T ). P̃t coincides with P on Ft , and the σ-algebras Ft and σ(G) become independent
under P̃t . These properties are shown in the following proposition due to Amendinger [3], p. 267.
Proposition 1.1.14. Suppose that the regular conditional distributions of G given Ft are equivalent to the law of G for all t ∈ [0, T ), i.e., for all l ∈ U , the process (p_t^l)_{t∈[0,T )} is strictly positive
P-a.s. Then:
1. For t ∈ [0, T ), the σ-algebras Ft and σ(G) are independent under the probability measure
\[
\tilde P_t(A) := \int_A \frac{1}{p_t^G}\, dP \quad \text{for } A \in \mathcal G_t, \tag{1.6}
\]
i.e., for At ∈ Ft and B ∈ U,
\[
\tilde P_t[A_t \cap \{G \in B\}] = P[A_t]\, P[G \in B] = \tilde P_t[A_t]\, \tilde P_t[G \in B]. \tag{1.7}
\]
2. 1/pG is a G0-martingale.
Proof. To prove Equation (1.7), fix At ∈ Ft and B ∈ U. By conditioning on Ft , we obtain
\[
E\Big[\mathbf 1_{A_t \cap \{G\in B\}} \frac{1}{p_t^G}\Big] = E\Big[\mathbf 1_{A_t}\, E\Big[\mathbf 1_{\{G\in B\}}\frac{1}{p_t^G}\Big|\mathcal F_t\Big]\Big] = \int_{A_t} E\Big[\mathbf 1_{\{G\in B\}}\frac{1}{p_t^G}\Big|\mathcal F_t\Big](\omega)\, P(d\omega).
\]
The definition of p_t^l(ω) yields
\[
E\Big[\mathbf 1_{\{G\in B\}}\frac{1}{p_t^G}\Big|\mathcal F_t\Big](\omega) = \int_B \frac{1}{p_t^l(\omega)}\, p_t^l(\omega)\, P[G\in dl] = P[G \in B],
\]
and so we get the first equality in Equation (1.7). The second follows by choosing At = Ω or
B = U.
Now fix 0 ≤ s ≤ t < T and choose A ∈ Gs of the form A = As ∩ {G ∈ B} with As ∈ Fs and
B ∈ U. Then we obtain by Equation (1.7), and by reversing the above argument, that
\[
E\Big[\mathbf 1_A \frac{1}{p_t^G}\Big] = P[A_s]\, P[G\in B] = E[\mathbf 1_{A_s}]\, P[G \in B]
= \int_{A_s}\int_B \frac{1}{p_s^l(\omega)}\, p_s^l(\omega)\, P[G\in dl]\, P(d\omega)
= E\Big[\mathbf 1_{A_s}\, E\Big[\mathbf 1_{\{G\in B\}}\frac{1}{p_s^G}\Big|\mathcal F_s\Big]\Big] = E\Big[\mathbf 1_A \frac{1}{p_s^G}\Big].
\]
Now let D = \{A ∈ Gs : E[\mathbf 1_A\, \tfrac{1}{p_t^G}] = E[\mathbf 1_A\, \tfrac{1}{p_s^G}]\}; we now show that D is a d-system.
1. Ω ∈ D; indeed, from the first part of the proof we have that E[\mathbf 1_\Omega\, \tfrac{1}{p_t^G}] = E[\mathbf 1_\Omega\, \tfrac{1}{p_s^G}].
2. Let A, B ∈ D be such that A ⊂ B. Then we have that
\[
E\Big[\mathbf 1_{B\setminus A}\frac{1}{p_t^G}\Big] = E\Big[(\mathbf 1_B - \mathbf 1_A)\frac{1}{p_t^G}\Big] = E\Big[\mathbf 1_B \frac{1}{p_t^G}\Big] - E\Big[\mathbf 1_A \frac{1}{p_t^G}\Big]
= E\Big[\mathbf 1_B \frac{1}{p_s^G}\Big] - E\Big[\mathbf 1_A \frac{1}{p_s^G}\Big] = E\Big[(\mathbf 1_B - \mathbf 1_A)\frac{1}{p_s^G}\Big] = E\Big[\mathbf 1_{B\setminus A}\frac{1}{p_s^G}\Big],
\]
hence B \ A ∈ D.
3. Let A_n ↑ A_∞ with A_n ∈ D for all n ≥ 0. Since the process 1/pG is strictly positive, the
monotone convergence theorem gives
\[
E\Big[\mathbf 1_{A_\infty}\frac{1}{p_t^G}\Big] = \lim_{n\to\infty} E\Big[\mathbf 1_{A_n}\frac{1}{p_t^G}\Big] = \lim_{n\to\infty} E\Big[\mathbf 1_{A_n}\frac{1}{p_s^G}\Big] = E\Big[\mathbf 1_{A_\infty}\frac{1}{p_s^G}\Big],
\]
thus A_∞ ∈ D.
Hence D is a d-system that contains the π-system
\[
\mathcal C = \big\{A_s \cap \{G \in B\} : A_s \in \mathcal F_s \text{ and } B \in \mathcal U\big\}
\]
(the computation above shows that every such set satisfies the defining equality of D), which
generates the σ-algebra Fs ∨ σ(G), and by Proposition 2.2.6 below we have Gs = Fs ∨ σ(G).
Therefore, by Dynkin's π-λ theorem, Gs ⊆ D ⊆ Gs , so the equality extends to arbitrary sets A ∈ Gs . Hence the process 1/pG
is a G0-martingale with 1/p_0^G = 1, and so Equation (1.6) indeed defines a probability measure on
(Ω, Gt ).
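To illustrate the decoupling property (an added example based on Example 1.3.1 below): for G = W_T and A_t = \{W_t \in dx\} ∈ F_t, Equation (1.7) gives
\[
\tilde P_t[W_t \in dx,\ W_T \in dl] = P[W_t \in dx]\, P[W_T \in dl], \qquad t < T,
\]
so under P̃_t the Brownian path up to time t and the terminal value W_T are independent, even though they are correlated under P, while P̃_t agrees with P on F_t and on σ(W_T).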
1.2
Stochastic exponential representation of the process 1/pG
This section shows that under the assumption of Proposition 1.1.14, the processes p^l and 1/pG
can be represented as stochastic exponentials of a particular form. More precisely, the F0-martingale p^l is the stochastic exponential of the sum of a stochastic integral with respect to K
with integrand κ^l and an orthogonal local F0-martingale, whereas the G0-martingale 1/pG can be
written as a stochastic exponential of the sum of a stochastic integral with integrand κ^G with
respect to K̃ and an orthogonal local G0-martingale. To do this, we give the following lemma
without proof.
Lemma 1.2.1. Under Assumption 1.1.6, there exists an R^d-valued, P(F0 ) ⊗ U-measurable process (κ_t^l)_{t∈[0,T )} such that for all l ∈ U ,
\[
\int_0^t d\langle K\rangle_s\, \kappa_s^l = \begin{pmatrix} \int_0^t k_s^{l,(1)}\, d\langle K^{(1)}\rangle_s \\ \vdots \\ \int_0^t k_s^{l,(d)}\, d\langle K^{(d)}\rangle_s \end{pmatrix}, \qquad t \in [0, T). \tag{1.8}
\]
Proof. For the proof of this result we refer the reader to Amendinger [3].
For further developments, we need a weak integrability condition on κ.
Assumption 1.2.2. The process κ from Lemma 1.2.1 satisfies
\[
\int_0^T (\kappa_s^l)^\top\, d\langle K\rangle_s\, \kappa_s^l < \infty \quad P\text{-a.s. for all } l \in U. \tag{1.9}
\]
Remark 1.2.3. The process κ^G is P(G0 )-measurable. Indeed, we only need to show the measurability of the mapping (ω, t) ↦ (ω, t, G(ω)) from (Ω × [0, T ), P(G0 )) to (Ω × [0, T ) × U, P(F0 ) ⊗ U).
For any A ∈ P(F0 ) and B ∈ U, we have
\[
\{(\omega, t) : (\omega, t) \in A \text{ and } G(\omega) \in B\} = A \cap \big(\{\omega : G(\omega) \in B\} \times [0, T)\big)
= \Big(A \cap \big(\{G \in B\} \times \{0\}\big)\Big) \cup \Big(A \cap \big(\{G \in B\} \times (0, T)\big)\Big),
\]
and therefore we have the measurability of the mapping above. By the measurability of κ_t^l we
get the measurability of κ^G, and so the stochastic integral ∫ (κ^G)^⊤ dK̃ is well defined under
Assumption 1.2.2. For each l ∈ U , the process κ^l is unique up to null sets with respect to
P × ⟨K⟩, and so the stochastic integrals ∫ (κ^l)^⊤ dK and ∫ (κ^G)^⊤ dK̃ do not depend on the
choice of κ. Finally, we can now write K̃ := (K̃^1 , . . . , K̃^d)^⊤ more compactly as
\[
\tilde K = K - \int d\langle K\rangle\, \kappa^G.
\]
Proposition 1.2.4.
1. Suppose that the regular conditional distributions of G given Ft are equivalent to the law
of G for all t ∈ [0, T ). Then there exists a local G0-martingale Ñ null at zero which is
orthogonal to K̃ from Equation (1.4) (i.e., ⟨K̃^{(i)}, Ñ⟩ = 0 for i = 1, . . . , d) and such that
\[
\frac{1}{p_t^G} = \mathcal E\Big(-\int (\kappa_s^G)^\top\, d\tilde K_s + \tilde N\Big)_t, \qquad t \in [0, T). \tag{1.10}
\]
2. Fix l ∈ U . If p^l_{T-} > 0 P-a.s., then there exists a local F0-martingale N^l null at zero which
is orthogonal to K and such that
\[
p_t^l = \mathcal E\Big(\int (\kappa_s^l)^\top\, dK_s + N^l\Big)_t, \qquad t \in [0, T). \tag{1.11}
\]
Proof. See Proposition 2.9, p. 270, of Amendinger [3].
Remark 1.2.5. If the regular conditional distributions of G given Ft are equivalent to the law
of G for all t ∈ [0, T ), then the condition in the second part of Proposition 1.2.4 is automatically
satisfied for all l ∈ U .
The next corollary gives an explicit expression for Ñ in Equation (1.10) in terms of N^G, if
p^l is continuous for all l ∈ U . As a consequence, we then obtain in particular that 1/pG can
be written as a stochastic exponential of a stochastic integral with respect to K̃, if we have in
addition a martingale representation theorem for the filtration F.
Corollary 1.2.6.
1. If p^l is continuous and strictly positive for all l ∈ U , then
\[
\frac{1}{p_t^G} = \mathcal E\Big(-\int (\kappa_s^G)^\top\, d\tilde K_s - N^G + \langle N^G\rangle\Big)_t, \qquad t \in [0, T). \tag{1.12}
\]
In particular, Ñ from Equation (1.10) is given by
\[
\tilde N_t = -N_t^G + \langle N^G\rangle_t, \qquad t \in [0, T). \tag{1.13}
\]
2. In particular, if p^l = \mathcal E\big(\int (\kappa^l)^\top dK\big) for all l ∈ U , then
\[
\frac{1}{p_t^G} = \mathcal E\Big(-\int (\kappa_s^G)^\top\, d\tilde K_s\Big)_t, \qquad t \in [0, T). \tag{1.14}
\]
Proof. See Corollary 2.10, p. 272, of Amendinger [3].
1.3
Examples
This section illustrates the preceding results by several examples for G. We will give an example
where we have equivalence between the regular conditional probability of G given Ft and the law
of G for t ∈ [0, T ), an example where we have absolute continuity for t ∈ [0, T ] and equivalence
only for t ∈ [0, T ), and an example where we have equivalence for all t ∈ [0, T ].
Example 1.3.1. Let G be the end point WT of a one-dimensional F-Brownian motion W on
[0, T ]. Then we have Gt = ∩_{ε>0}(F_{t+ε} ∨ σ(WT )) and for all t < T
\[
P[W_T \in dl \mid \mathcal F_t] = P[(W_T - W_t + W_t) \in dl \mid \mathcal F_t] = P[(W_T - W_t) \in (dl - y)]\big|_{y = W_t}
= \frac{1}{\sqrt{2\pi(T-t)}}\, \exp\Big(-\frac{(l - W_t)^2}{2(T-t)}\Big)\, dl = p_t^l\, P[W_T \in dl],
\]
where
\[
p_t^l = \sqrt{\frac{T}{T-t}}\, \exp\Big(-\frac{(l - W_t)^2}{2(T-t)} + \frac{l^2}{2T}\Big), \qquad l \in \mathbb R,
\]
is strictly positive for all t < T. Furthermore, applying Itô's formula to p_t^l we get
\[
p^l = \mathcal E\Big(\int \frac{l - W_s}{T - s}\, dW_s\Big),
\]
and hence it is an F0-martingale by Novikov's condition. In this example, the conditional law of
G given Ft is even equivalent to the law of G for all t ∈ [0, T ). On the other hand, the conditional
law of WT given FT is the point mass at WT (ω) and therefore not absolutely continuous with
respect to the law of WT.
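To connect this example with Lemma 1.2.1 and Theorem 1.1.12, one can read off the integrand κ^l from the exponential representation above; the following short computation is an added illustration and not part of the original text:
\[
p^l = \mathcal E\Big(\int \kappa_s^l\, dW_s\Big) \quad\text{with}\quad \kappa_t^l = \frac{l - W_t}{T - t},
\]
so that, applying Theorem 1.1.12 with K = W (for which ⟨W⟩_t = t), the process
\[
\tilde W_t = W_t - \int_0^t \frac{W_T - W_s}{T - s}\, ds, \qquad t \in [0, T),
\]
is a continuous local G0-martingale; this is the classical Brownian bridge decomposition of W in the enlarged filtration.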
Example 1.3.2. Let G be a random variable with values in a countable set U such that
P[G = l] > 0 for all l ∈ U . Then every A ∈ σ(G) is of the form A = ∪_{l∈J} {G = l} for some
J ⊆ U . Therefore we have
\[
P[G \in A \mid \mathcal F_t] = \sum_{l \in J} P[G = l \mid \mathcal F_t] = \sum_{l \in J} p_t^l\, P[G = l] = \int_A p_t^l\, P[G \in dl]
\]
for all t ∈ [0, T ], where p_t^l = P[G = l|Ft ]/P[G = l], and so the conditional law of G given
Ft is absolutely continuous with respect to the law of G for all t ∈ [0, T ]. Thus we obtain by
Theorem 1.1.12 and Remark 1.1.13 that every local F-martingale is a G-semimartingale.
However, the conditional laws of G given Ft are equivalent to the law of G for t < T
only if P[G = l|Ft ] > 0 P-a.s. for all l ∈ U . Moreover, there is no equivalence on FT if G
is FT -measurable, because in this case P[G = l|FT ] = 1_{\{G=l\}} is zero with positive probability
(unless G is a constant equal to l).
As a special case, consider the situation in which G describes whether the endpoint of a
one-dimensional F-Brownian motion lies in some given interval, i.e., G := 1_{\{W_T \in [a,b]\}} for some
a < b. Then we have
\[
p_t^1 = \frac{P[G = 1 \mid \mathcal F_t]}{P[G = 1]} \quad\text{and}\quad p_t^0 = \frac{1 - P[G = 1 \mid \mathcal F_t]}{1 - P[G = 1]},
\]
and for t ∈ [0, T ) we have
\[
P[G = 1 \mid \mathcal F_t] = P[W_T \in [a,b] \mid \mathcal F_t] = P[W_T - W_t + W_t \in [a,b] \mid \mathcal F_t]
= P[W_T - W_t \in [a - y, b - y]]\big|_{y = W_t}
= \frac{1}{\sqrt{2\pi(T-t)}} \int_a^b \exp\Big(-\frac{(l - W_t)^2}{2(T-t)}\Big)\, dl,
\]
and
\[
P[G = 1] = P[G = 1 \mid \mathcal F_0] = \Phi(b/\sqrt T) - \Phi(a/\sqrt T),
\]
where Φ is the standard normal distribution function. Hence, P[G ∈ ·|Ft ] is absolutely continuous with respect to the law of G for t ∈ [0, T ] and equivalent to the law of G only for
t ∈ [0, T ).
Example 1.3.3. Let G = WT + ε, where WT is the endpoint of a one-dimensional (P, F)-Brownian motion W and ε is a random variable independent of FT such that ε ∼ N (0, 1).
Then we have for all t ∈ [0, T ]
\[
P[G \in dl \mid \mathcal F_t] = P[(W_T - W_t + W_t + \varepsilon) \in dl \mid \mathcal F_t] = P[(W_T - W_t + \varepsilon) \in (dl - y)]\big|_{y = W_t}
= \frac{1}{\sqrt{2\pi(T - t + 1)}}\, \exp\Big(-\frac{(l - W_t)^2}{2(T - t + 1)}\Big)\, dl = p_t^l\, P[(W_T + \varepsilon) \in dl],
\]
where
\[
p_t^l = \sqrt{\frac{T+1}{T - t + 1}}\, \exp\Big(-\frac{(l - W_t)^2}{2(T - t + 1)} + \frac{l^2}{2(T+1)}\Big), \qquad l \in \mathbb R,
\]
is strictly positive for all t ∈ [0, T ]. Furthermore, applying Itô's formula to p_t^l we get
\[
p_t^l = \mathcal E\Big(\int \frac{l - W_s}{T - s + 1}\, dW_s\Big)_t,
\]
and hence it is an F-martingale by Novikov's condition. In this example, the conditional law of G
given Ft is even equivalent to the law of G for all t ∈ [0, T ], the end point included.
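A small added remark (not in the original text): comparing the exponential representation above with Lemma 1.2.1 gives, for G = WT + ε,
\[
\kappa_t^l = \frac{l - W_t}{T - t + 1}, \qquad \int_0^T (\kappa_t^l)^2\, dt < \infty \quad P\text{-a.s.},
\]
so the integrand stays well behaved up to and including t = T. This is consistent with Remark 3.0.8 below, where the noisy signal G = WT + ε is precisely the case in which the time horizon T itself may be chosen.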
Chapter 2
Martingale representation theorems
for I.E.F
2.1
Notations
Recall that we are working in a probability space (Ω, F, P) equipped with a filtration F =
(Ft )t∈[0,T ] satisfying the usual conditions. T > 0 is a fixed finite time horizon. We assume that
F0 is trivial and Fs = FT = F for all s ≥ T . For an F-measurable random variable G with
values in a Polish space (U, U), we define the initially enlarged filtration G = (Gt )t∈[0,T ] by
Gt = Ft ∨ σ(G), t ∈ [0, T ].
Let H = (Ht )t∈[0,T ] ∈ {F, G} be a generic filtration and R be a generic probability measure
on (Ω, HT ). The collection of uniformly integrable continuous (R, H)-martingales is denoted by
M(R, H). For p ∈ [1, ∞), H^p(R, H) denotes the set of continuous (R, H)-martingales M
such that
\[
\|M\|_{\mathcal H^p(R,\mathbb H)} := E\Big[\sup_{s\in[0,T]} |M_s|^p\Big]^{1/p} < \infty.
\]
The set of bounded continuous (R, H)-martingales is denoted by H^∞(R, H). For p ∈ [1, ∞),
L^p(M, R, H) denotes the space of d-dimensional H-predictable processes φ = (φ^{(1)} , . . . , φ^{(d)}) such
that
\[
\|\varphi\|_{L^p(M,R,\mathbb H)} := E^R\Big[\Big(\int_0^T \varphi_u^\top\, d[M, M]_u\, \varphi_u\Big)^{p/2}\Big] < \infty.
\]
For a d-dimensional continuous (R, H)-semimartingale S, L_{sm}(S, R, H) denotes the set of d-dimensional
H-predictable processes φ that are integrable with respect to S, in the sense that
∫ φ^⊤ dS = Σ_{i=1}^d ∫ φ^i dS^i , where for every i = 1, . . . , d the stochastic integral ∫ φ^i dS^i is well
defined. To emphasize the dependence of the stochastic integral on H, we shall then write
H-∫ φ^⊤ dS, where φ ∈ L_{sm}(S, R, H).
Throughout this chapter we fix a d-dimensional continuous process S = (S^{(1)} , . . . , S^{(d)})^⊤,
and we assume that there exists a probability measure QF ∼ P on (Ω, FT ) such that each
component of S is in H²_{loc}(QF , F). This assumption is motivated by the fact that, as we shall see in
Section 3.1, the process S is not always a martingale in the real world, i.e. a (P, F)-martingale.
Let Z F be the density process of QF with respect to P.
2.2
The martingale preserving probability measure
In this section we define the martingale preserving probability measure and also show how it
can be used to transfer properties of stochastic processes and structures from F to G. Recall
from Chapter 1 the following assumption, which was used there in some results and will be imposed
in the remainder of this thesis.
Assumption 2.2.1. The regular conditional distributions of G given Ft are equivalent to the
law of G, i.e.:
P[G ∈ .|Ft ](ω) ∼ P[G ∈ .] for all t ∈ [0, T ] and P-a.a. ω ∈ Ω
(2.1)
Theorem 2.2.2. If Assumption 2.2.1 is satisfied, then:
1. The measure QG defined by
\[
Q^G(A) := \int_A \frac{Z_T^F}{p_T^G}\, dP \quad \text{for } A \in \mathcal G_T \tag{2.2}
\]
has the following properties:
i. QG = QF on (Ω, FT ), and QG = P on (Ω, σ(G)), i.e. for AT ∈ FT and B ∈ U,
\[
Q^G[A_T \cap \{G \in B\}] = Q^F[A_T]\, P[G \in B] = Q^G[A_T]\, Q^G[G \in B]. \tag{2.3}
\]
ii. The σ-algebras FT and σ(G) are independent under QG.
2. Z G := Z F /pG is a (P, G)-martingale.
Proof.
1. To prove Equation (2.3), let AT ∈ FT and B ∈ U. By conditioning on FT , we get
\[
E\Big[\mathbf 1_{A_T \cap \{G\in B\}}\frac{Z_T^F}{p_T^G}\Big] = E\Big[\mathbf 1_{A_T}\, E\Big[\mathbf 1_{\{G\in B\}}\frac{Z_T^F}{p_T^G}\Big|\mathcal F_T\Big]\Big] = \int_{A_T} E\Big[\mathbf 1_{\{G\in B\}}\frac{Z_T^F}{p_T^G}\Big|\mathcal F_T\Big](\omega)\, P(d\omega).
\]
The definition of p_T^G yields
\[
E\Big[\mathbf 1_{\{G\in B\}}\frac{1}{p_T^G}\Big|\mathcal F_T\Big](\omega) = \int_B \frac{1}{p_T^l(\omega)}\, p_T^l(\omega)\, P[G \in dl].
\]
Therefore
\[
E\Big[\mathbf 1_{\{G\in B\}}\frac{Z_T^F}{p_T^G}\Big|\mathcal F_T\Big](\omega) = \int_B \frac{Z_T^F(\omega)}{p_T^l(\omega)}\, p_T^l(\omega)\, P[G\in dl] = \int_B Z_T^F(\omega)\, P[G \in dl],
\]
and hence
\[
Q^G\big[A_T \cap \{G\in B\}\big] = E\Big[\mathbf 1_{A_T\cap\{G\in B\}}\frac{Z_T^F}{p_T^G}\Big] = \int_{A_T}\int_B Z_T^F(\omega)\, P[G\in dl]\, P(d\omega) = Q^F[A_T]\, P[G\in B],
\]
where in the last equality we used the fact that Z F is the density process of QF with respect
to P. Thus we get the first equality in Equation (2.3). The second follows by choosing
AT = Ω or B = U.
2. Now fix 0 ≤ s ≤ t ≤ T and choose A ∈ Gs of the form A = As ∩ {G ∈ B} with As ∈ Fs and
B ∈ U. Then we obtain by Equation (2.3), using the fact that Z F is a (P, F)-martingale
and by reversing the above argument, that
\[
E\Big[\mathbf 1_A \frac{Z_t^F}{p_t^G}\Big] = Q^F[A_s]\, P[G\in B] = E[\mathbf 1_{A_s} Z_s^F]\, P[G\in B]
= \int_{A_s}\int_B \frac{Z_s^F(\omega)}{p_s^l(\omega)}\, p_s^l(\omega)\, P[G\in dl]\, P(d\omega)
= E\Big[\mathbf 1_{A_s}\, E\Big[\mathbf 1_{\{G\in B\}}\frac{Z_s^F}{p_s^G}\Big|\mathcal F_s\Big]\Big] = E\Big[\mathbf 1_A \frac{Z_s^F}{p_s^G}\Big].
\]
Then arguing as in Proposition 1.1.14, this extends to arbitrary sets A ∈ Gs . Hence the
process Z F /pG is a (P, G)-martingale with Z_0^F /p_0^G = 1 (because Z_0^F = 1 = p_0^G), and so Equation (2.2)
indeed defines a probability measure on (Ω, GT ).
The following theorem shows that the martingale property is preserved under an initial
enlargement of filtration and a simultaneous change to the measure QG .
Theorem 2.2.3. If Assumption 2.2.1 is satisfied, then for all p ∈ [1, ∞]
\[
\mathcal H^p_{(loc)}(Q^F, \mathbb F) = \mathcal H^p_{(loc)}(Q^G, \mathbb F) \subseteq \mathcal H^p_{(loc)}(Q^G, \mathbb G) \tag{2.4}
\]
and in particular
\[
\mathcal M_{(loc)}(Q^F, \mathbb F) = \mathcal M_{(loc)}(Q^G, \mathbb F) \subseteq \mathcal M_{(loc)}(Q^G, \mathbb G). \tag{2.5}
\]
Proof. Let M be a (QF , F)-martingale. We then have
\[
E^{Q^G}[M_t \mid \mathcal G_s] = E^{Q^G}[M_t \mid \mathcal F_s \vee \sigma(G)] = E^{Q^G}[M_t \mid \mathcal F_s] = E^{Q^F}[M_t \mid \mathcal F_s] = M_s,
\]
where in the second equality we used the independence of σ(G) and FT under QG and in the third
equality we used the equality of QG and QF on (Ω, FT ). Therefore M is a (QG , G)-martingale.
Since F-stopping times are also G-stopping times, any localizing sequence (τn )n for a process M
with respect to (QF , F) will then also be a localizing sequence for the process M with respect to
(QG , F) and (QG , G). The integrability properties in Equations (2.4) and (2.5) follow from the
equality of QG and QF on (Ω, FT ).
Remark 2.2.4. Any (QF , F)-Brownian motion W is a (QG , G)-Brownian motion. Indeed, since
the quadratic variation of continuous martingales can be computed pathwise without involving
the filtration and since QG = QF on (Ω, FT ), we obtain for all t ∈ [0, T ] that
\[
\langle W\rangle_t^{(Q^G,\mathbb G)} = \langle W\rangle_t^{(Q^G,\mathbb F)} = \langle W\rangle_t^{(Q^F,\mathbb F)} = t,
\]
and therefore W is also a (QG , G)-Brownian motion, by Lévy's characterization theorem.
Definition 2.2.5. The probability measure QG defined by Equation (2.2) on (Ω, GT ), is called
the martingale preserving probability measure under initial enlargement of filtration. This terminology is justified by Theorem 2.2.3.
Using the decoupling property of QG , the following proposition shows that G inherits the
right-continuity from F.
Proposition 2.2.6. If Assumption 2.2.1 is satisfied, then G is right-continuous.
Proof. Define G_{s+} := ∩_{ε>0} G_{s+ε} for s ∈ [0, T ]. Fix t ∈ [0, T ) and δ ∈ (0, T − t). Because of the
independence of F_{t+δ} and σ(G) under QG, and using Theorem A.2.3, it is enough to show that
for G_{t+δ}-measurable random variables Y_{t+δ} of the form Y_{t+δ} = h(G)H_{t+δ}, where h is a bounded
U-measurable function and H_{t+δ} is a bounded F_{t+δ}-measurable random variable, we have
\[
E^{Q^G}[Y_{t+\delta} \mid \mathcal G_{t+}] = E^{Q^G}[Y_{t+\delta} \mid \mathcal G_t].
\]
For all ε ∈ (0, δ), we get
\[
E^{Q^G}[Y_{t+\delta}\mid\mathcal G_{t+}] = h(G)\, E^{Q^G}[H_{t+\delta}\mid\mathcal G_{t+}]
= h(G)\, E^{Q^G}\big[E^{Q^G}[H_{t+\delta}\mid\mathcal F_{t+\varepsilon}\vee\sigma(G)]\,\big|\,\mathcal G_{t+}\big]
= h(G)\, E^{Q^G}\big[E^{Q^G}[H_{t+\delta}\mid\mathcal F_{t+\varepsilon}]\,\big|\,\mathcal G_{t+}\big], \tag{2.6}
\]
since H_{t+δ} and F_{t+ε} are independent of G under QG. And because of the right-continuity of F,
we can always choose right-continuous versions of F-martingales. This implies
\[
\lim_{\varepsilon \downarrow 0} E^{Q^G}[H_{t+\delta}\mid\mathcal F_{t+\varepsilon}] = E^{Q^G}[H_{t+\delta}\mid\mathcal F_t],
\]
and since H_{t+δ} is bounded, passing in Equation (2.6) to the limit as ε decreases to 0 and applying
the dominated convergence theorem, we obtain
\[
E^{Q^G}[Y_{t+\delta}\mid\mathcal G_{t+}] = h(G)\, E^{Q^G}\big[E^{Q^G}[H_{t+\delta}\mid\mathcal F_t]\,\big|\,\mathcal G_{t+}\big] = h(G)\, E^{Q^G}[H_{t+\delta}\mid\mathcal F_t]
= h(G)\, E^{Q^G}[H_{t+\delta}\mid\mathcal G_t] = E^{Q^G}[Y_{t+\delta}\mid\mathcal G_t],
\]
because H_{t+δ} and Ft are independent of G under QG.
In particular, we have for all G_{t+}-measurable random variables X that
\[
X = E^{Q^G}[X \mid \mathcal G_{t+}] = E^{Q^G}[X \mid \mathcal G_t] \qquad Q^G\text{-a.s.}
\]
Since QG ∼ P and G0 contains all (P, F)-null sets, X is therefore Gt-measurable. This completes
the proof.
The following proposition shows that the stochastic integrals defined under F remain unchanged under an initial enlargement that satisfies Assumption 2.2.1.
Proposition 2.2.7. Suppose Assumption 2.2.1 is satisfied. For a d-dimensional (QF , F)-semimartingale Y , the following equalities then hold:
\[
L_{sm}(Y, Q^F, \mathbb F) = L_{sm}(Y, Q^G, \mathbb F) \tag{2.7}
\]
\[
L_{sm}(Y, Q^G, \mathbb F) = \{\vartheta : \vartheta \text{ is } \mathbb F\text{-predictable and } \vartheta \in L_{sm}(Y, Q^G, \mathbb G)\}, \tag{2.8}
\]
and for ϑ ∈ L_{sm}(Y, QF , F) the stochastic integrals F-∫ ϑ dY and G-∫ ϑ dY have a common version.
Proof. Because QF = QG on (Ω, FT ), we obtain the first equality.
For the second equality: by Theorem 2.2.3, the (QG , F)-semimartingale Y is also a
(QG , G)-semimartingale, and thus
\[
L_{sm}(Y, Q^G, \mathbb F) \supseteq \{\vartheta : \vartheta \text{ is } \mathbb F\text{-predictable and } \vartheta \in L_{sm}(Y, Q^G, \mathbb G)\}
\]
by Théorème 7 of Jacod [8], see Theorem A.2.1 in the Appendix. For the other inclusion, let
ϑ ∈ L_{sm}(Y, QG , F), i.e. there exist a local (QG , F)-martingale M and an F-adapted process
A of finite variation such that Y = M + A, ϑ ∈ L_{loc}(M, QG , F) and ∫ ϑ^⊤ dA
exists. By Theorem 2.2.3, M is also a local (QG , G)-martingale. Since F ⊆ G, the process A is G-adapted. Therefore Y = M + A is also a (QG , G)-semimartingale decomposition. And since M ∈
M_{loc}(QG , F) ∩ M_{loc}(QG , G), Corollaire 9.21 of Jacod [7], see Corollary A.2.2 in the Appendix,
implies that ϑ ∈ L_{loc}(M, QG , G); and since ∫ ϑ^⊤ dA can be computed pathwise without
involving the filtrations, we get that ϑ ∈ L_{sm}(Y, QG , G). Thus the proof is complete.
2.3
Martingale representation theorems for I.E.F
In this section we transfer martingale representation theorems from F to the initially enlarged
filtration G. For this purpose we suppose throughout this section that the following representation
property holds with respect to S ∈ H²_{loc}(QF , F):
Assumption 2.3.1. For any ψ ∈ L^∞(FT ), there exists φ ∈ L²(S, QF , F) such that
\[
\psi = E^{Q^F}[\psi] + \int_0^T \varphi_s^\top\, dS_s.
\]
Remark 2.3.2. By Theorem 13.4 of He, Wang and Yan [6], see Theorem A.2.5 in the Appendix,
the assumption above is equivalent to the representation property for local (QF , F)-martingales:
for every local (QF , F)-martingale K there exists φ ∈ L^1_{loc}(S, QF , F) such that K =
K0 + ∫ φ^⊤ dS.
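As an added example (a sketch, assuming the Black-Scholes market of Section 3.1 with QF = P̃ and S the discounted stock price): Assumption 2.3.1 then holds by Itô's representation theorem for the (P̃, F)-Brownian motion W̃, since any ψ ∈ L^∞(FT ) can be written as
\[
\psi = E^{\tilde P}[\psi] + \int_0^T h_s\, d\tilde W_s = E^{\tilde P}[\psi] + \int_0^T \frac{h_s}{\sigma e^{-rs} S_s}\, d\tilde S_s,
\]
using d\tilde S_t = \sigma e^{-rt} S_t\, d\tilde W_t, i.e. Equation (3.7) below.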
Theorem 2.3.3. Suppose Assumptions 2.2.1 and 2.3.1 are satisfied.
1. For any M ∈ H²(QG , G), there exists a process ψ ∈ L²(S, QG , G) such that
\[
M_t = M_0 + \int_0^t \psi_s^\top\, dS_s, \qquad t \in [0, T].
\]
2. For any M ∈ M_{loc}(QG , G), there exists a process ψ ∈ L^1_{loc}(S, QG , G) such that
\[
M_t = M_0 + \int_0^t \psi_s^\top\, dS_s, \qquad t \in [0, T].
\]
Proof. To prove the first claim it is sufficient to show that any random variable X ∈ L²(QG , GT )
can be written in the form
\[
X = E^{Q^G}[X \mid \mathcal G_0] + \int_0^T \psi_s^\top\, dS_s
\]
for some ψ ∈ L²(S, QG , G). Since GT = FT ∨ σ(G), Theorem IV.3.5.1 of Malliavin [11], see
Theorem A.2.3 in the Appendix, implies that the vector subspace of L^∞(GT ) defined by
\[
V = \Big\{X \in L^\infty(\mathcal G_T) : X = \sum_{i=1}^m I_i J_i, \text{ with } I_i \in L^\infty(\mathcal F_T),\ J_i \in L^\infty(\sigma(G)),\ m \in \mathbb N\Big\}
\]
is dense in L²(QG , GT ). Thus there exists a sequence (X_n)_{n∈N} = (\sum_{i=1}^{m_n} I_{i,n} J_{i,n})_{n∈N} in V , with
I_{i,n} ∈ L^∞(FT ) and J_{i,n} ∈ L^∞(σ(G)), such that X_n converges to X in L²(QG ). By Assumption
2.3.1, and the fact that QG = QF on (Ω, FT ) (Theorem 2.2.2), there exists a sequence
(φ^{i,n})_{n∈N} in L²(S, QG , F) such that
\[
I_{i,n} = E^{Q^G}[I_{i,n}] + \int_0^T (\varphi_s^{i,n})^\top\, dS_s. \tag{2.9}
\]
Since S is a local (QG , F)-martingale and thus a local (QG , G)-martingale by Theorem 2.2.3,
Proposition 2.2.7 implies that the value of the stochastic integral ∫_0^T (φ_s^{i,n})^⊤ dS_s is not changed
when it is considered under G. Because J_{i,n} is bounded and G0-measurable (since
G0 = F0 ∨ σ(G) = σ(G) because F0 is trivial), we have
\[
J_{i,n} \int_0^T (\varphi_s^{i,n})^\top\, dS_s = \int_0^T J_{i,n} (\varphi_s^{i,n})^\top\, dS_s. \tag{2.10}
\]
The independence of FT and G0 under QG yields
\[
E^{Q^G}[I_{i,n} J_{i,n} \mid \mathcal G_0] = J_{i,n}\, E^{Q^G}[I_{i,n} \mid \mathcal G_0] = J_{i,n}\, E^{Q^G}[I_{i,n}]. \tag{2.11}
\]
By Equations (2.9), (2.10) and (2.11) we then obtain
\[
X_n = \sum_{i=1}^{m_n}\Big(J_{i,n}\, E^{Q^G}[I_{i,n}] + \int_0^T J_{i,n}(\varphi_s^{i,n})^\top\, dS_s\Big) = E^{Q^G}[X_n \mid \mathcal G_0] + \int_0^T (\psi_s^n)^\top\, dS_s, \tag{2.12}
\]
where ψ^n := \sum_{i=1}^{m_n} J_{i,n}\, φ^{i,n} is in L²(S, QG , G) due to the boundedness of J_{i,n} and since φ^{i,n} is
in L²(S, QG , G). Since X_n converges to X in L²(QG ), E^{QG}[X_n |G0 ] converges to E^{QG}[X|G0 ] in
L²(QG ) and thus Equation (2.12) yields that ∫_0^T (ψ_s^n)^⊤ dS_s converges in L²(QG ) as well.
Since each component of S is in H²_{loc}(QG , G), and since the mapping ϑ ↦ ∫ ϑ^⊤ dS is an isometry
from (L²(S, QG , G), ‖·‖_{L²(S,QG ,G)}) to (H²(QG , G), ‖·‖_{H²(QG ,G)}), the space of stochastic
integrals
\[
\Big\{\int_0^T \vartheta_s^\top\, dS_s : \vartheta \in L^2(S, Q^G, \mathbb G)\Big\}
\]
is closed in L²(QG ). This implies the existence of a process ψ ∈ L²(S, QG , G) such that
∫_0^T (ψ_s^n)^⊤ dS_s converges to ∫_0^T ψ_s^⊤ dS_s in L²(QG ). Hence Equation (2.12) yields
\[
X = L^2\text{-}\lim_{n\to\infty}\Big(E^{Q^G}[X_n\mid\mathcal G_0] + \int_0^T (\psi_s^n)^\top\, dS_s\Big) = E^{Q^G}[X\mid\mathcal G_0] + \int_0^T \psi_s^\top\, dS_s,
\]
and thus the first claim. For the proof of the second part of the theorem, we will proceed in
steps. For convenience we will denote by G^t the filtration (Gs )_{s∈[0,t]} for all t ∈ [0, T ]; of
course G^T = G.
1. Let t ∈ [0, T ) and K ∈ H^1_0(QG , G^t), i.e. K ∈ H^1(QG , G^t) and K0 = 0. The filtration G^t
is right-continuous by Proposition 2.2.6. Then Theorem 10.5 of He, Wang and Yan [6]
implies that H^2_0(QG , G^t) is dense in H^1_0(QG , G^t), thus there exists a sequence (K^n)_{n≥0} in
H^2_0(QG , G^t) such that lim_{n→∞} ‖K^n − K‖_{H^1_0(QG ,G^t)} = 0. For all n ≥ 0, the first part of the
theorem yields the existence of ψ^n ∈ L²(S, QG , G^t) such that
\[
K^n_s = \int_0^s (\psi_u^n)^\top\, dS_u, \qquad s \in [0, t]. \tag{2.13}
\]
Since K^n is in L^1(S) := \{∫ ϑ^⊤ dS : ϑ ∈ L^1(S, QG , G^t)\}, and since L^1(S) is closed in
H^1_0(QG , G^t) by Theorem 4.60 of Jacod [7], we conclude that K ∈ L^1(S).
2. Let K ∈ H^1_0(QG , G). By part 1, on the interval [0, T ) K is of the form
\[
K_t = \int_0^t \psi_s^\top\, dS_s, \qquad t \in [0, T), \tag{2.14}
\]
where ψ is G^t-predictable for every t ∈ [0, T ) and, for all t ∈ [0, T ),
\[
E^{Q^G}\Big[\Big(\int_0^t \psi_s^\top\, d\langle S\rangle_s\, \psi_s\Big)^{1/2}\Big] < \infty.
\]
We now extend Equation (2.14) to the interval [0, T ]. Since for all t ∈ [0, T ) the filtration
G^t is right-continuous, the Burkholder-Davis-Gundy inequalities imply for t ∈ [0, T )
\[
E^{Q^G}\Big[\Big(\int_0^t \psi_s^\top\, d\langle S\rangle_s\, \psi_s\Big)^{1/2}\Big] = E^{Q^G}\big[\langle K\rangle_t^{1/2}\big] \le C\, E^{Q^G}\Big[\sup_{0\le s\le t}|K_s|\Big] \le C\, E^{Q^G}\Big[\sup_{0\le s\le T}|K_s|\Big],
\]
where C is a positive constant. Hence by monotone convergence we obtain
\[
E^{Q^G}\Big[\Big(\int_0^T \psi_s^\top\, d\langle S\rangle_s\, \psi_s\Big)^{1/2}\Big] \le C\, E^{Q^G}\Big[\sup_{0\le s\le T}|K_s|\Big] < \infty,
\]
because K ∈ H^1_0(QG , G). Hence ∫ ψ^⊤ dS is a (QG , G)-martingale, and Equation (2.14)
implies that for all t ∈ [0, T )
\[
K_t = E^{Q^G}\Big[\int_0^T \psi_s^\top\, dS_s \,\Big|\, \mathcal G_t\Big].
\]
Since K ∈ H^1_0(QG , G) and lim_{t→T} ‖K_t − K_T‖_{L^1} = 0, by martingale convergence we obtain
K_T = ∫_0^T ψ_s^⊤ dS_s.
3. Now let K ∈ M_{0,loc}(QG , G), i.e. there is a sequence of G-stopping times (σ_n)_{n∈N} such
that for each n ∈ N, K^{σ_n} ∈ M_0(QG , G). Since for all t ∈ [0, T ), Gt is right-continuous,
we then have that for all n ∈ N, τ_n := σ_n ∧ inf\{t : |K_t| ≥ n\} ∧ T is a G-stopping time.
Hence sup_{t∈[0,T ]} |K_t^{τ_n}| ≤ n + |K_{τ_n}|, hence K^{τ_n} ∈ H^1_0(QG , G). Therefore part 2 yields the
existence of ψ^n ∈ L^1(S, QG , G) such that K^{τ_n} = ∫ (ψ^n)^⊤ dS. With τ_0 = 0, we get that
ψ := \sum_{n=1}^\infty ψ^n \mathbf 1_{]\tau_{n-1},\tau_n]} is in L^1_{loc}(S, QG , G) and that K = ∫ ψ^⊤ dS.
We will make the following assumption on S, which we will need for the proof of a martingale
representation theorem with respect to G and the original probability measure P.
Assumption 2.3.4. The (P, F)-semimartingale S is continuous and can be written as
\[
S = M + \int d\langle M\rangle\, \alpha, \tag{2.15}
\]
where M is a d-dimensional continuous local (P, F)-martingale and α is a d-dimensional process
in L^1(M, P, F).
Application of Theorem 1.1.12 and Lemma 1.2.1 to the d-dimensional continuous local (P, F)-martingale M from Assumption 2.3.4 yields the existence of a P(F) ⊗ U-measurable function
(ω, t, l) ↦ κ_t^l(ω) such that
\[
\tilde M := M - \int d\langle M\rangle\, \kappa^G
\]
is a d-dimensional continuous local (P, G)-martingale. And we need the following assumption
on the integrability of κ.
Assumption 2.3.5. For all l ∈ U the process κ^l ∈ L^1_{loc}(M, P, F).
Lemma 2.3.6.
1. If Assumptions 2.3.1 and 2.3.4 are satisfied, then for t ∈ [0, T ]
\[
Z_t^F = \mathcal E\Big(-\int \alpha_s^\top\, dM_s\Big)_t. \tag{2.16}
\]
2. If Assumptions 2.2.1, 2.3.1 and 2.3.4 are satisfied, then for t ∈ [0, T ]
\[
\frac{1}{p_t^G} = \mathcal E\Big(-\int (\kappa_s^G)^\top\, d\tilde M_s\Big)_t, \tag{2.17}
\]
\[
Z_t^G = \frac{Z_t^F}{p_t^G} = \mathcal E\Big(-\int (\alpha_s + \kappa_s^G)^\top\, d\tilde M_s\Big)_t. \tag{2.18}
\]
Proof. Since 1/Z F is a strictly positive (QF , F)-martingale, there exists a local (QF , F)-martingale
L with L0 = 0 such that 1/Z F = E(L). By Assumption 2.3.1 and Remark 2.3.2, there exists a
process φ ∈ L^1_{loc}(S, QF , F) such that
\[
\frac{1}{Z^F} = \mathcal E\Big(\int \varphi^\top\, dS\Big).
\]
Then by Assumption 2.3.4, the density process Z F can be written as
\[
Z^F = \exp\Big\{-\int \varphi^\top dM - \int \varphi^\top d\langle M\rangle\,\alpha + \frac12 \int \varphi^\top d\langle M\rangle\,\varphi\Big\}, \tag{2.19}
\]
hence Itô's formula implies that for i = 1, . . . , d
\[
d(Z^F S^i) = Z^F dS^i + S^i dZ^F + d\langle Z^F, S^i\rangle
= Z^F dM^i + Z^F \big(d\langle M\rangle\,\alpha\big)^i + S^i dZ^F + Z^F d\Big\langle M^i, -\int \varphi^\top dM\Big\rangle
= Z^F dM^i + S^i dZ^F + Z^F \big(d\langle M\rangle(\alpha - \varphi)\big)^i.
\]
Since Z F S^i, ∫ Z F dM^i and ∫ S^i dZ F are continuous local (P, F)-martingales, ∫ Z F \big(d\langle M\rangle(\alpha - \varphi)\big)^i is a
continuous local (P, F)-martingale of finite variation and thus vanishes. And since Z F > 0, we
get that ∫ d⟨M⟩ α = ∫ d⟨M⟩ φ and so ∫ α^⊤ dM = ∫ φ^⊤ dM. By Equation (2.19), we then get
Equation (2.16).
To prove Equation (2.17), let l ∈ U . Since p^l is a strictly positive (P, F)-martingale by
Lemma 1.1.10 and Remark 1.1.11 (we have F instead of F0 because we have equivalence instead of absolute continuity), p^l /Z F is a strictly positive (QF , F)-martingale. Because of
Assumption 2.3.1 there exists a process φ^l ∈ L^1_{loc}(S, QF , F) such that
\[
\frac{p^l}{Z^F} = \mathcal E\Big(\int (\varphi^l)^\top\, dS\Big).
\]
By Assumption 2.3.4 and the first claim we have
\[
p^l = \frac{p^l}{Z^F}\, Z^F = \mathcal E\Big(\int (\varphi^l)^\top dS\Big)\, \mathcal E\Big(-\int \alpha^\top dM\Big)
= \mathcal E\Big(\int (\varphi^l)^\top dS - \int \alpha^\top dM - \int (\varphi^l)^\top d\langle M\rangle\,\alpha\Big)
\]
\[
= \mathcal E\Big(\int (\varphi^l)^\top dM + \int (\varphi^l)^\top d\langle M\rangle\,\alpha - \int \alpha^\top dM - \int (\varphi^l)^\top d\langle M\rangle\,\alpha\Big)
= \mathcal E\Big(\int (\varphi^l - \alpha)^\top dM\Big).
\]
On the other hand, because of Assumptions 2.2.1 and 2.3.5, part 2 of Proposition 1.2.4 can be
applied to get
\[
p^l = \mathcal E\Big(\int (\kappa^l)^\top dM + N^l\Big),
\]
where N^l is a local (P, F)-martingale with N_0^l = 0 and orthogonal to M . The uniqueness of the
stochastic exponential thus implies that
\[
\int (\varphi^l - \alpha)^\top dM = \int (\kappa^l)^\top dM + N^l. \tag{2.20}
\]
Now taking the covariation of both sides of Equation (2.20) with N^l, we get by the
orthogonality of M and N^l that ⟨N^l⟩ = 0. By Assumption 2.3.4 and Equation (2.20) the process
N^l is continuous. Hence, N^l is a continuous local (P, F)-martingale of finite variation and thus
vanishes. Therefore we obtain that
\[
p^l = \mathcal E\Big(\int (\kappa^l)^\top dM\Big).
\]
Now part 2 of Corollary 1.2.6 yields Equation (2.17). By Theorem 2.2.2 we have that Z G = Z F /pG
is a (P, G)-martingale, hence
\[
Z^G = \frac{Z^F}{p^G} = \mathcal E\Big(-\int \alpha_s^\top dM_s\Big)\, \mathcal E\Big(-\int (\kappa_s^G)^\top d\tilde M_s\Big)
= \mathcal E\Big(-\int \alpha_s^\top dM_s - \int (\kappa_s^G)^\top d\tilde M_s + \Big\langle -\int \alpha_s^\top dM_s,\ -\int (\kappa_s^G)^\top d\tilde M_s\Big\rangle\Big)
\]
\[
= \mathcal E\Big(-\int \alpha_s^\top dM_s - \int (\kappa_s^G)^\top d\tilde M_s + \int \alpha_s^\top d\langle M\rangle_s\, \kappa_s^G\Big)
= \mathcal E\Big(-\int \alpha_s^\top\big(dM_s - d\langle M\rangle_s\, \kappa_s^G\big) - \int (\kappa_s^G)^\top d\tilde M_s\Big)
\]
\[
= \mathcal E\Big(-\int \alpha_s^\top d\tilde M_s - \int (\kappa_s^G)^\top d\tilde M_s\Big)
= \mathcal E\Big(-\int (\alpha_s + \kappa_s^G)^\top d\tilde M_s\Big).
\]
Then the proof is complete.
We now show a martingale representation theorem for local (P, G)-martingales with respect
to the continuous (P, G)-martingale M̃ .
Theorem 2.3.7. Suppose Assumptions 2.2.1, 2.3.1, 2.3.4 and 2.3.5 are satisfied. For any
K ∈ M_{loc}(P, G), there then exists φ̃ ∈ L^1_{loc}(M̃ , P, G) such that
\[
K_t = K_0 + \int_0^t \tilde\varphi_s^\top\, d\tilde M_s, \qquad t \in [0, T]. \tag{2.21}
\]
Proof. Since K ∈ M_{loc}(P, G), we have that K/Z G ∈ M_{loc}(QG , G), hence Theorem 2.3.3 implies
that for all t ∈ [0, T ],
\[
\frac{K_t}{Z_t^G} = K_0 + \int_0^t \varphi_s^\top\, dS_s
\]
for some φ ∈ L^1_{loc}(S, QG , G). Now applying Itô's formula and using the fact that
\[
S = \tilde M + \int d\langle M\rangle(\alpha + \kappa^G) \quad\text{and}\quad Z^G = \mathcal E\Big(-\int (\alpha_s + \kappa_s^G)^\top d\tilde M_s\Big),
\]
we get for t ∈ [0, T ]
\[
dK_t = d\Big(Z_t^G \frac{K_t}{Z_t^G}\Big) = \frac{K_t}{Z_t^G}\, dZ_t^G + Z_t^G\, d\Big(\frac{K_t}{Z_t^G}\Big) + d\Big\langle Z^G, \frac{K}{Z^G}\Big\rangle_t
= Z_t^G \varphi_t^\top\, dS_t + \Big(K_0 + \int_0^t \varphi_s^\top dS_s\Big)\, dZ_t^G + d\Big\langle Z^G, \int \varphi^\top dS\Big\rangle_t
\]
\[
= Z_t^G \varphi_t^\top\, d\tilde M_t + Z_t^G \varphi_t^\top\, d\langle M\rangle_t (\alpha + \kappa^G)_t
- Z_t^G\Big(K_0 + \int_0^t \varphi_s^\top dS_s\Big)(\alpha + \kappa^G)_t^\top\, d\tilde M_t
- Z_t^G \varphi_t^\top\, d\langle M\rangle_t (\alpha + \kappa^G)_t
\]
\[
= Z_t^G\Big(\varphi_t - \Big(K_0 + \int_0^t \varphi_s^\top dS_s\Big)(\alpha + \kappa^G)_t\Big)^\top d\tilde M_t.
\]
By setting \tilde\varphi_t := Z_t^G\big(\varphi_t - (K_0 + \int_0^t \varphi_s^\top dS_s)(\alpha + \kappa^G)_t\big), we therefore obtain Equation (2.21). The
integrability property of φ̃ is a consequence of the integrability of φ, the continuity of Z G and S,
and the integrability assumptions on α and κ^G.
Chapter 3
Insider trading and utility
maximization
In this chapter we consider two types of investors on different information levels trading in
a general continuous-time security market. The market is described by the given probability
space (Ω, F, P) with the filtration F = (Ft )t∈[0,R] satisfying the usual conditions and we assume
also that F0 is trivial. The price process S = {St , t ∈ [0, R]} that models the stock price is
assumed to be a positive continuous (P, F)-semimartingale. While the ordinary investor, who
has as information at time t all the Ft -measurable random variables, makes his decisions based
on these random variables, the insider investor observes the same process S but he/she holds as
information a bigger filtration G than the ordinary investor (that is, Ft ⊂ Gt , t ∈ [0, R]). The
additional information of the insider could be for example the knowledge at time t = 0 of the
outcome of some F-measurable random variable G. For instance, G might be the price of S
at time t = R, or the value of some external source of uncertainty, etc. As in the preceding
chapter G is an F-measurable random variable with values in some Polish space (U, U), and
then the insider information is modelled by the initially enlarged filtration G = (Gt )t∈[0,R] with
Gt = Ft ∨ σ(G), t ∈ [0, R].
We fix a time T ∈ (0, R], and we assume that the financial market S on the time interval
[0, T ] is arbitrage free and complete for the ordinary investor in the following sense: there exists
a unique probability measure QF equivalent to P on (Ω, FT ) such that S is a local (QF , F)-martingale (arbitrage free), and any bounded FT -measurable random variable can be written as
a sum of a constant and a stochastic integral with respect to S (completeness), i.e. Assumption 2.3.1. Furthermore we suppose that the random variable G satisfies Assumption 2.2.1.
Remark 3.0.8.
1. The intuitive meaning of Assumption 2.2.1 is that at all times up to time T the insider
has an informational advantage over the ordinary investor: he knows the outcome of G,
while the public still regards all outcomes of G as possible before and at time T; for the
public the outcome of G is revealed only after time T . As we have seen in Example 1.3.3, if G contains a noise term
that is independent of FR , then we can choose T = R. For the other Examples 1.3.1
and 1.3.2 we can choose any T < R.
2. Assumption 2.2.1 combined with the existence of an equivalent local F-martingale measure
for S ensures the existence of an equivalent local G-martingale measure for S, namely the
probability measure QG given in Theorem 2.2.2 with density process Z G. The price
process S is then by Theorem 2.2.3 a local (QG , G)-martingale; moreover, by Theorem 2.3.3
any local (QG , G)-martingale can be written as a sum of a G0-measurable random variable
and a stochastic integral with respect to S. Whence Assumption 2.2.1 is sufficient to place
the insider in a complete market free of arbitrage.
3.1
The ordinary investor problem
In this section we will give an exposition of the problem for the ordinary investor in the Black-Scholes framework. A general model will be treated in the next section.
As in the introduction of this chapter we suppose that the market is described by the given
probability space (Ω, F, P) and we consider a Brownian motion W = {Wt ; 0 ≤ t ≤ T } defined
on (Ω, F, P) and we denote by F = (Ft )t∈[0,T ] the natural filtration generated by W and the
(P, F)-sets of measure zero. The price process S = {St ; t ∈ [0, T ]} that models the stock
price is assumed to be a positive continuous (P, F)-semimartingale evolving according to the
stochastic differential equation
\[
dS_t = S_t(\mu\, dt + \sigma\, dW_t), \qquad t \in [0, T], \tag{3.1}
\]
with S0 > 0, where µ is a constant and σ a strictly positive constant. Besides we denote by
B = {Bt , t ∈ [0, T ]} the risk-free asset, and we suppose that it evolves according to the
differential equation, for a given positive constant r,
\[
dB_t = B_t r\, dt, \qquad t \in [0, T], \quad B_0 = 1. \tag{3.2}
\]
Before we continue, we need some definitions concerning trading strategies and self-financing strategies.
Definition 3.1.1. A trading strategy (portfolio) is a two-dimensional predictable, locally bounded
process π = {πt = (φt , ψt ), t ∈ [0, T ]} with values in R2 .
Remark 3.1.2. The conditions on π ensure that the stochastic integrals ∫_0^T φ_t dB_t and ∫_0^T ψ_t dS_t
are well defined, where φ_t denotes the money that the investor invests in the riskless asset, and
ψ_t denotes the number of stocks held in the portfolio at time t.
Definition 3.1.3. Let π be a trading strategy.
1. The value of the portfolio π at time t is given by
Vt = Vtπ = φt Bt + ψt St .
The process Vtπ is called the value process, or the wealth process, of the trading strategy
π.
2. The gains process, denoted by G_t^π, is defined by
\[
G_t^\pi = \int_0^t \varphi_s\, dB_s + \int_0^t \psi_s\, dS_s.
\]
3. A trading strategy π is called self-financing if the wealth process Vtπ satisfies
Vtπ = V0π + Gπt for all t ∈ [0, T ].
Now we return to our model and observe that Equations (3.1) and (3.2) have unique
solutions; by Itô's formula the solutions are given by
\[
B_t = \exp(rt), \tag{3.3}
\]
\[
S_t = S_0 \exp\Big(\big(\mu - \tfrac12\sigma^2\big)t + \sigma W_t\Big). \tag{3.4}
\]
The process modelling the stock price is not an F-martingale under the measure P. Therefore we
need another measure, equivalent to P, under which the process S is an F-martingale. We define
the process S̃ by S̃t = St /Bt = e^{−rt}St; the process S̃ is called the discounted stock price process. By
Itô's formula and Equation (3.1) we have
\[
d\tilde S_t = \frac{1}{B_t}\, dS_t + S_t\, d\Big(\frac{1}{B_t}\Big) = e^{-rt} S_t(\mu\, dt + \sigma\, dW_t) + S_t(-r e^{-rt})\, dt = S_t\, \sigma e^{-rt}\Big(\frac{\mu - r}{\sigma}\, dt + dW_t\Big), \tag{3.5}
\]
that is,
\[
d\tilde S_t = \sigma \tilde S_t\Big(\frac{\mu - r}{\sigma}\, dt + dW_t\Big). \tag{3.6}
\]
Under P the process W is a Brownian motion, and hence the process S̃ is a local (P, F)-martingale
if and only if µ = r. But this will rarely be the case in the real world.
On the other hand, under another equivalent measure P̃ the discounted stock process S̃ may
well be an F-martingale, if viewed as a process on the filtered space (Ω, F, F, P̃). Because the
drift term in Equation (3.6) causes the problem, we first rewrite the equation as
\[
d\tilde S_t = \sigma e^{-rt} S_t\, d\tilde W_t, \tag{3.7}
\]
for the process W̃ defined by
\[
\tilde W_t = W_t - \alpha t, \qquad \text{with } \alpha := -\frac{\mu - r}{\sigma}.
\]
Then we need to find a measure under which the process W̃ is a martingale. By Novikov's
condition the process E(∫ α dW ) is a P-martingale with mean 1. Therefore, we
can define a probability measure P̃ by dP̃ = E(∫ α dW )_T dP. By Girsanov's theorem the process
W̃ is then a P̃-Brownian motion (for the time parameter restricted to the interval [0, T ]), and
hence a P̃-martingale. Moreover, the Brownian motion possesses the martingale
representation property. Therefore we are working in an arbitrage-free complete market.
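For concreteness, a small numerical illustration (added; the numbers are hypothetical): with µ = 0.08, r = 0.02 and σ = 0.2, the market price of risk is (µ − r)/σ = 0.3, so α = −0.3 and
\[
\frac{d\tilde P}{dP} = \mathcal E\Big(\int \alpha\, dW\Big)_T = \exp\big(-0.3\, W_T - 0.045\, T\big).
\]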
3.2
Utility Maximization
The underlying principle for modelling economic behavior of investors (or economic agents in
general) is the maximization of expected utility, that is one assumes that agents have a utility
function U (.) and base their economic decisions on expected utility considerations.
Definition 3.2.1. A function U : (0, ∞) → R is called a utility function, if
1. it is strictly concave, strictly increasing and continuously differentiable, and
2. U′(0+) = lim_{t→0+} U′(t) = ∞ and U′(∞) = lim_{t→∞} U′(t) = 0 (called the Inada conditions).
Example 3.2.2. As examples of a utility function we have
1. u(x) = log x,
2. u(x) = xα ,
0 < α < 1.
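As a quick check (added here for completeness), both examples satisfy the Inada conditions:
\[
u(x) = \log x:\quad u'(x) = \frac1x \to \infty \ (x \downarrow 0), \qquad u'(x) \to 0 \ (x \to \infty);
\]
\[
u(x) = x^\alpha:\quad u'(x) = \alpha x^{\alpha-1} \to \infty \ (x \downarrow 0), \qquad u'(x) \to 0 \ (x \to \infty), \quad \text{since } 0 < \alpha < 1.
\]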
Since we will work a lot with exponential processes, we will assume a logarithmic utility
function because it enables us to obtain explicit formulae.
3.2.1
The model
We return now to our setup as introduced in the beginning of this chapter. We will work with the
one-dimensional case. We fix a continuous local F-martingale M with M0 = 0 and a predictable
process α with
\[
E\Big[\int_0^T \alpha_t^2\, d\langle M\rangle_t\Big] < \infty. \tag{3.8}
\]
The discounted price process of the stock, denoted again by S, is assumed to evolve according to
the stochastic differential equation
\[
dS_t = S_t\big(dM_t + \alpha_t\, d\langle M\rangle_t\big), \qquad t \in [0, T], \tag{3.9}
\]
with S0 > 0. Using Itô's formula we get
\[
S_t = S_0\, \mathcal E\Big(M + \int \alpha\, d\langle M\rangle\Big)_t = S_0 \exp\Big(\int_0^t \big(\alpha_s - \tfrac12\big)\, d\langle M\rangle_s + M_t\Big). \tag{3.10}
\]
By Theorem 1.1.12 and Lemma 1.2.1 we have that M is a G-semimartingale, and the local
G-martingale M̃ in its canonical G-decomposition has the form
\[
\tilde M_t = M_t - \int_0^t \kappa_s^G\, d\langle M\rangle_s, \qquad t \in [0, T], \tag{3.11}
\]
where κ = (κ_t^l) is a P(F) ⊗ U-measurable process. And so the discounted stock price evolution
from the insider's point of view is
\[
dS_t = S_t\big(d\tilde M_t + (\kappa_t^G + \alpha_t)\, d\langle M\rangle_t\big).
\]
Using Itô's formula we get
\[
S_t = S_0\, \mathcal E\Big(\tilde M + \int (\alpha + \kappa^G)\, d\langle M\rangle\Big)_t = S_0 \exp\Big(\int_0^t \big(\alpha_s + \kappa_s^G - \tfrac12\big)\, d\langle M\rangle_s + \tilde M_t\Big). \tag{3.12}
\]
Definition 3.2.3. Let x > 0 and denote by H ∈ {F, G} a generic filtration.
1. An H-portfolio process is an R-valued, H-predictable process π = (π_t)_{t∈[0,T ]}
such that ∫_0^T π_t^2 d⟨M⟩_t < ∞ P-a.s.
2. For an H-portfolio process π, the discounted wealth (value) process, denoted by V (x, π), is
defined by V0 (x, π) = x and
\[
dV_t(x, \pi) := \pi_t\, dS_t =: \psi_t V_t(x, \pi)\, \frac{dS_t}{S_t} \qquad \text{for } t \in [0, T]. \tag{3.13}
\]
3. The class of admissible H-portfolio processes is defined by
\[
\mathcal A_{\mathbb H}(x, T) := \big\{\pi : \pi \text{ is an } \mathbb H\text{-portfolio process and } E[\log^- V_T(x, \pi)] < \infty\big\}, \tag{3.14}
\]
where log^- x = max{0, − log x}.
Remark 3.2.4. The process ψ_t describes the proportion of total wealth at time t invested in the
risky asset S, and Equation (3.13) is the well-known self-financing condition. For convenience,
we will work with the process ψ, but we will always keep in mind that the portfolio is the
process π; we obtain ψ through the change of variables ψ_t = π_t S_t /V_t. From now on
we will denote the discounted wealth process by V_t(x, ψ) instead of V_t(x, π).
By Itô's formula, for a trading strategy ψ ∈ A_H(x, T ) with x > 0, the wealth process is
strictly positive and explicitly given by
\[
V_t(x, \psi) = x\, \mathcal E\Big(\int \psi_s\, \frac{dS_s}{S_s}\Big)_t = x\, \mathcal E\Big(\int \psi_s\, dM_s + \int \psi_s \alpha_s\, d\langle M\rangle_s\Big)_t \tag{3.15}
\]
for t ∈ [0, T ]. From the insider's point of view this can also be written, similarly to Equation (3.12),
as
\[
V_t(x, \psi) = x\, \mathcal E\Big(\int \psi_s\, d\tilde M_s + \int \psi_s(\kappa_s^G + \alpha_s)\, d\langle M\rangle_s\Big)_t, \qquad t \in [0, T]. \tag{3.16}
\]
Definition 3.2.5 (Optimization problems). Let the initial wealth x > 0.
1. The ordinary investor's optimization problem is to solve
\[
\max_{\psi \in \mathcal A_{\mathbb F}(x,T)} E\big[\log V_T(x, \psi)\big].
\]
2. The insider's optimization problem is to solve
\[
\max_{\psi \in \mathcal A_{\mathbb G}(x,T)} E\big[\log V_T(x, \psi)\big].
\]
3.2.2
Solution of the logarithmic utility maximization problem
Let us first work out the expression log VT (x, ψ) for ψ ∈ AG (x, T ) and x > 0, Equation (3.16)
then gives
Z
T
log VT (x, ψ) = log x +
Z
T
ψt dM̃t +
0
Z
0
T
Z
ψt dM̃t +
= log x +
0
0
Z
1 T 2
ψ dhM it
+ αt )dhM it −
2 0 t
1
ψt (κG
t + αt − ψt )dhM it
2
ψt (κG
t
T
T
Z
1 T G
ψt dM̃t +
= log x +
(κt + αt )2 dhM it −
2
0
0
Z
1 T G
(κt + αt − ψt )2 dhM it .
(3.17)
−
2 0
Now if we had E[∫_0^T ψ_t² d⟨M⟩_t] < ∞, the local G-martingale ∫ ψ dM̃ would be a true martingale and hence would have expectation zero. Then ψ_t = κ^G_t + α_t, t ≤ T, would be an optimal strategy for the insider up to time T, yielding a maximal expected logarithmic utility up to time T of
\[
\log x + \frac{1}{2}\, E\Big[\int_0^T (\kappa^G_t + \alpha_t)^2\, d\langle M\rangle_t\Big].
\]
Setting κ^G ≡ 0, and of course M̃ = M, we get the optimal strategy and maximal expected logarithmic utility for the ordinary investor.
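The optimality of ψ = κ^G + α rests on a pointwise completion of the square in the d⟨M⟩-integrand of (3.17); the following short symbolic check (a sketch using sympy) verifies the identity and the maximiser.

\begin{verbatim}
import sympy as sp

psi, kappa, alpha = sp.symbols('psi kappa alpha', real=True)

# d<M>-integrand of log V_T in (3.17)
integrand = psi * (kappa + alpha) - sp.Rational(1, 2) * psi**2
# completed square used in (3.17)
completed = (sp.Rational(1, 2) * (kappa + alpha)**2
             - sp.Rational(1, 2) * (kappa + alpha - psi)**2)

print(sp.simplify(integrand - completed))      # 0, so the identity holds
print(sp.solve(sp.diff(integrand, psi), psi))  # [alpha + kappa], the maximiser
\end{verbatim}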
Using the connection between the martingale density processes Z^F and Z^G and the logarithmic optimization problem, one can show that the solutions of the two optimization problems are indeed of the above form. But first we give the following proposition.
Proposition 3.2.6.
1. The processes Z^F S and Z^F V(x, φ), with φ ∈ A_F(x, T) and x > 0, are local (P, F)-martingales on [0, T].
2. The processes Z^G S and Z^G V(x, ψ), with ψ ∈ A_G(x, T) and x > 0, are local (P, G)-martingales on [0, T].
Proof. We give the proof for the second claim only, as the proof of the first one is the same with F in place of G and κ^G ≡ 0. Using Itô's formula we get
\[
\begin{aligned}
d\big(Z^G S\big)_t &= S_t\, dZ^G_t + Z^G_t\, dS_t + d\langle Z^G, S\rangle_t\\
&= -Z^G_t S_t(\alpha_t + \kappa^G_t)\, d\tilde M_t + Z^G_t S_t\big(d\tilde M_t + (\alpha_t + \kappa^G_t)\, d\langle M\rangle_t\big)\\
&\quad + Z^G_t S_t\, d\Big\langle -\int (\alpha + \kappa^G)\, d\tilde M,\ \tilde M + \int (\alpha + \kappa^G)\, d\langle M\rangle\Big\rangle_t\\
&= -Z^G_t S_t(\alpha_t + \kappa^G_t)\, d\tilde M_t + Z^G_t S_t\, d\tilde M_t\\
&\quad + (\alpha_t + \kappa^G_t) Z^G_t S_t\, d\langle M\rangle_t - (\alpha_t + \kappa^G_t) Z^G_t S_t\, d\langle M\rangle_t\\
&= Z^G_t S_t\big(1 - (\alpha_t + \kappa^G_t)\big)\, d\tilde M_t,
\end{aligned}
\]
thus Z^G S is a local G-martingale. Using Itô's formula again we get
\[
\begin{aligned}
d\big(Z^G V(x, \psi)\big)_t &= V_t(x, \psi)\, dZ^G_t + Z^G_t\, dV_t(x, \psi) + d\langle Z^G, V(x, \psi)\rangle_t\\
&= -Z^G_t V_t(x, \psi)(\alpha_t + \kappa^G_t)\, d\tilde M_t + Z^G_t \psi_t V_t(x, \psi)\,\frac{dS_t}{S_t} + d\langle Z^G, V(x, \psi)\rangle_t\\
&= -Z^G_t V_t(x, \psi)(\alpha_t + \kappa^G_t)\, d\tilde M_t + Z^G_t V_t(x, \psi)\psi_t\big(d\tilde M_t + (\alpha_t + \kappa^G_t)\, d\langle M\rangle_t\big)\\
&\quad + Z^G_t V_t(x, \psi)\, d\Big\langle -\int (\alpha + \kappa^G)\, d\tilde M,\ \int \psi\, d\tilde M + \int (\alpha + \kappa^G)\psi\, d\langle M\rangle\Big\rangle_t\\
&= -Z^G_t V_t(x, \psi)(\alpha_t + \kappa^G_t)\, d\tilde M_t + Z^G_t V_t(x, \psi)\psi_t\, d\tilde M_t\\
&\quad + Z^G_t V_t(x, \psi)\psi_t(\alpha_t + \kappa^G_t)\, d\langle M\rangle_t - Z^G_t V_t(x, \psi)\psi_t(\alpha_t + \kappa^G_t)\, d\langle M\rangle_t\\
&= Z^G_t V_t(x, \psi)\big(\psi_t - (\alpha_t + \kappa^G_t)\big)\, d\tilde M_t,
\end{aligned}
\]
whence the process Z^G V(x, ψ) is a local G-martingale.
The next theorem gives explicit solutions for the two optimization problems.
Theorem 3.2.7.
1. An optimal strategy up to time T for the ordinary investor is given by
\[
\phi^{\mathrm{ord}}_t := \alpha_t, \qquad t \in [0, T], \tag{3.18}
\]
and the corresponding maximal expected logarithmic utility up to time T is
\[
E\big[\log V_T(x, \phi^{\mathrm{ord}})\big] = \log x + \frac{1}{2}\, E\Big[\int_0^T \alpha_t^2\, d\langle M\rangle_t\Big]. \tag{3.19}
\]
2. An optimal strategy up to time T for the insider is given by
\[
\psi^{\mathrm{ins}}_t := \alpha_t + \kappa^G_t, \qquad t \in [0, T], \tag{3.20}
\]
and the corresponding maximal expected logarithmic utility up to time T is
\[
E\big[\log V_T(x, \psi^{\mathrm{ins}})\big] = \log x + \frac{1}{2}\, E\Big[\int_0^T \big(\alpha_t^2 + (\kappa^G_t)^2\big)\, d\langle M\rangle_t\Big]. \tag{3.21}
\]
Proof. We prove only the second part of the theorem, because the first claim is obtained from the second by taking Z^F instead of Z^G and setting κ^G ≡ 0.
Let ψ ∈ A_G(x, T) be fixed. We will use the following well-known inequality: for a concave C¹-function f such that f′ has an inverse g, we have for all a, b
\[
f(a) \le f(g(b)) - b\big(g(b) - a\big).
\]
Hence, setting f(a) = log a, g(b) = 1/b, a = V_T(x, ψ) and b = yZ^G_T for some constant y > 0, we obtain
\[
\log V_T(x, \psi) \le \log\Big(\frac{1}{yZ^G_T}\Big) - yZ^G_T\Big(\frac{1}{yZ^G_T} - V_T(x, \psi)\Big) = -\log y - \log Z^G_T - 1 + yZ^G_T\, V_T(x, \psi).
\]
By Proposition 3.2.6, Z^G V(x, ψ) is a local G-martingale, and it is non-negative because V(x, ψ) and Z^G are non-negative; hence Z^G V(x, ψ) is a G-supermartingale starting in x, because Z^G_0 = 1 and V_0(x, ψ) = x. Whence
\[
E\big[\log V_T(x, \psi)\big] \le -1 - \log y - E\big[\log Z^G_T\big] + xy \tag{3.22}
\]
for all ψ ∈ A_G(x, T) and y > 0. To find an optimal portfolio it is then enough to find ψ ∈ A_G(x, T) and y > 0 such that we have equality in Equation (3.22). Now we claim that with ψ^ins defined by Equation (3.20) and taking y = 1/x > 0 we have equality in Equation (3.22). Indeed, Equation (3.17) yields
\[
\begin{aligned}
\log V_T(x, \psi^{\mathrm{ins}}) &= \log x + \int_0^T \psi^{\mathrm{ins}}_t\, d\tilde M_t + \frac{1}{2}\int_0^T (\kappa^G_t + \alpha_t)^2\, d\langle M\rangle_t - \frac{1}{2}\int_0^T (\kappa^G_t + \alpha_t - \psi^{\mathrm{ins}}_t)^2\, d\langle M\rangle_t\\
&= \log x + \int_0^T (\kappa^G_t + \alpha_t)\, d\tilde M_t + \frac{1}{2}\int_0^T (\kappa^G_t + \alpha_t)^2\, d\langle M\rangle_t.
\end{aligned}
\tag{3.23}
\]
Moreover, because of Equation (3.8), Assumption 2.3.5, and the fact that Z^G_T > 0 P-a.s., so that log⁻(Z^G_T) is finite P-a.s., we have that ψ^ins is in A_G(x, T). Again by Equation (3.8), Doob's inequality implies that both sup_{0≤t≤T} |∫_0^t α_s dM_s| and sup_{0≤t≤T} |∫_0^t (α_s + κ^G_s) dM̃_s| are integrable. Indeed, since ∫ α dM is a continuous local (P, F)-martingale, we have by Doob's inequality that
\[
E\Big[\sup_{0\le t\le T}\Big|\int_0^t \alpha_s\, dM_s\Big|\Big]^2 \le E\Big[\sup_{0\le t\le T}\Big(\int_0^t \alpha_s\, dM_s\Big)^2\Big] \le 4\, E\Big[\int_0^T \alpha_t^2\, d\langle M\rangle_t\Big] < \infty,
\]
hence ∫ α dM and ∫ α dM̃ are (P, F)- and (P, G)-martingales on [0, T], respectively. By the definition of M̃ and κ^G we therefore obtain
\[
0 = E\Big[\int_0^t \alpha_s\, dM_s - \int_0^t \alpha_s\, d\tilde M_s\Big] = E\Big[\int_0^t \alpha_s \kappa^G_s\, d\langle M\rangle_s\Big],
\]
and hence, taking expectations on both sides of Equation (3.23), we get Equation (3.21):
\[
\begin{aligned}
E\big[\log V_T(x, \psi^{\mathrm{ins}})\big] &= \log x + E\Big[\int_0^T (\kappa^G_t + \alpha_t)\, d\tilde M_t + \frac{1}{2}\int_0^T (\kappa^G_t + \alpha_t)^2\, d\langle M\rangle_t\Big]\\
&= \log x + \frac{1}{2}\, E\Big[\int_0^T (\kappa^G_t + \alpha_t)^2\, d\langle M\rangle_t\Big]\\
&= \log x + \frac{1}{2}\, E\Big[\int_0^T \alpha_t^2\, d\langle M\rangle_t + \int_0^T (\kappa^G_t)^2\, d\langle M\rangle_t + 2\int_0^T \kappa^G_t \alpha_t\, d\langle M\rangle_t\Big]\\
&= \log x + \frac{1}{2}\, E\Big[\int_0^T \alpha_t^2\, d\langle M\rangle_t\Big] + \frac{1}{2}\, E\Big[\int_0^T (\kappa^G_t)^2\, d\langle M\rangle_t\Big].
\end{aligned}
\]
Remark 3.2.8. By the above theorem we observe that, given the optimal portfolios, the optimal terminal wealths of the ordinary investor and the insider are given by
\[
V_T(x, \phi^{\mathrm{ord}}) = \frac{x}{Z^F_T} \qquad\text{and}\qquad V_T(x, \psi^{\mathrm{ins}}) = \frac{x}{Z^G_T},
\]
respectively.
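As a quick numerical sanity check of (3.19), assume for illustration that M is a standard Brownian motion and that α is constant; then the optimal ordinary strategy is the constant proportion α and a Monte Carlo average of log V_T should reproduce log x + ½α²T. The sketch below uses purely illustrative parameter values.

\begin{verbatim}
import numpy as np

rng = np.random.default_rng(2)
x, alpha, T = 1.0, 0.4, 1.0               # illustrative values
n_steps, n_paths = 500, 20_000
dt = T / n_steps

dW = rng.normal(0.0, np.sqrt(dt), (n_paths, n_steps))
V = np.full(n_paths, x)
for i in range(n_steps):
    # dV = psi V dS/S with psi = alpha and dS/S = dW + alpha dt (Brownian M, constant alpha)
    V *= 1.0 + alpha * (dW[:, i] + alpha * dt)

print("Monte Carlo E[log V_T]          :", np.log(V).mean())
print("formula (3.19): log x + a^2 T/2 :", np.log(x) + 0.5 * alpha**2 * T)
\end{verbatim}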
3.2.3 Insider's additional expected logarithmic utility
In this subsection we will establish a relationship between the additional expected logarithmic
utility of an insider and the relative entropy of the probability measure P with respect to the
probability measure defined by the process 1/pG in Proposition 1.1.14. First we give the following
definitions.
Definition 3.2.9. Given a utility function u, the insider's additional expected utility up to time T is defined by
\[
\max_{\psi \in \mathcal{A}_{\mathbb G}(x,T)} E\big[u\big(V_T(x, \psi)\big)\big] \;-\; \max_{\phi \in \mathcal{A}_{\mathbb F}(x,T)} E\big[u\big(V_T(x, \phi)\big)\big].
\]
Remark 3.2.10. As a particular case, for u = log, the insider's utility gain up to time T is given by E[a_T], where
\[
a_T := \frac{1}{2}\int_0^T (\kappa^G_t)^2\, d\langle M\rangle_t. \tag{3.24}
\]
Definition 3.2.11. Let P and Q be two probability measures on (Ω, F) such that P ≪ Q. The relative entropy of P with respect to Q on F is defined as
\[
H_{\mathcal F}\big(\mathbb P \mid \mathbb Q\big) := E_{\mathbb P}\Big[\log\frac{d\mathbb P}{d\mathbb Q}\Big|_{\mathcal F}\Big].
\]
Remark 3.2.12. It is well known that H_F(P|Q) is always non-negative, equal to zero if and only if P = Q on F, and increasing in F.
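For measures on a finite space, the relative entropy of Definition 3.2.11 is the familiar sum Σ_i p_i log(p_i/q_i). The sketch below, with two arbitrary illustrative distributions, computes it and exhibits the properties recalled in Remark 3.2.12: non-negativity, and equality to zero exactly when the measures coincide.

\begin{verbatim}
import numpy as np

def relative_entropy(p, q):
    """H(P|Q) = sum_i p_i log(p_i / q_i) for discrete distributions with P << Q."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    mask = p > 0                       # terms with p_i = 0 contribute zero
    return float(np.sum(p[mask] * np.log(p[mask] / q[mask])))

p = [0.5, 0.3, 0.2]                    # illustrative distributions
q = [0.4, 0.4, 0.2]
print(relative_entropy(p, q))          # strictly positive since p != q
print(relative_entropy(p, p))          # 0.0: the measures coincide
\end{verbatim}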
Proposition 3.2.13. The insider's utility gain up to time T is given by
\[
E_{\mathbb P}[a_T] = H_{\mathcal G_T}\big(\mathbb P \mid \tilde{\mathbb P}_T\big), \tag{3.25}
\]
where the probability measure P̃ is given by Equation (1.6).
Proof. Taking the logarithm of both sides of Equation (2.17) we get, for t = T,
\[
\log p^G_T = \int_0^T \kappa^G_t\, d\tilde M_t + \frac{1}{2}\int_0^T (\kappa^G_t)^2\, d\langle M\rangle_t. \tag{3.26}
\]
Thanks to Equation (3.8) and Assumption 2.3.5, Doob's inequality implies that
\[
\sup_{0\le t\le T}\Big|\int_0^t \kappa^G_s\, d\tilde M_s\Big|
\]
is integrable, hence ∫ κ^G dM̃ is a (P, G)-martingale on [0, T]. Therefore, taking expectations of both sides of Equation (3.26), we get
\[
E_{\mathbb P}\big[\log p^G_T\big] = \frac{1}{2}\, E_{\mathbb P}\Big[\int_0^T (\kappa^G_t)^2\, d\langle M\rangle_t\Big].
\]
From the definition of P̃_T we have that p^G_T = dP/dP̃_T, whence we obtain our result.
3.2.4 Explicit calculations of the insider's expected logarithmic utility

In this subsection we will calculate the insider's expected terminal logarithmic utility in the examples given in Chapter 1.
Example 3.2.14. Example 1.3.2 revisited.
Recall that G = 1_{W_R ∈ [a,b]}, and, as observed, Assumption 2.2.1 is satisfied for every T < R. For t ∈ [0, T] we have
\[
p^1_t = \frac{P[G = 1 \mid \mathcal F_t]}{P[G = 1]} \qquad\text{and}\qquad p^0_t = \frac{1 - P[G = 1 \mid \mathcal F_t]}{1 - P[G = 1]},
\]
with P[G = 1] = P[G = 1|F_0] = Φ(b/√R) − Φ(a/√R), where Φ is the standard normal distribution function. In this example the random variable G is discrete. The entropy of G is defined by
\[
H(G) := -P[G = 0]\log P[G = 0] - P[G = 1]\log P[G = 1], \tag{3.27}
\]
and the conditional entropy of G given F_t is defined by
\[
H(G \mid \mathcal F_t) := -E_{\mathbb P}\Big[P[G = 0 \mid \mathcal F_t]\log P[G = 0 \mid \mathcal F_t] + P[G = 1 \mid \mathcal F_t]\log P[G = 1 \mid \mathcal F_t]\Big], \qquad t \in [0, R]. \tag{3.28}
\]
Therefore, conditioning on F_T yields
\[
\begin{aligned}
E_{\mathbb P}[a_T] = H_{\mathcal G_T}\big(\mathbb P \mid \tilde{\mathbb P}_T\big) &= E_{\mathbb P}\big[\log p^G_T\big] = E_{\mathbb P}\Big[E_{\mathbb P}\big[\log p^G_T \mid \mathcal F_T\big]\Big]\\
&= E_{\mathbb P}\Big[\log(p^0_T)\, P[G = 0 \mid \mathcal F_T] + \log(p^1_T)\, P[G = 1 \mid \mathcal F_T]\Big]\\
&= E_{\mathbb P}\Big[P[G = 0 \mid \mathcal F_T]\log P[G = 0 \mid \mathcal F_T] + P[G = 1 \mid \mathcal F_T]\log P[G = 1 \mid \mathcal F_T]\Big]\\
&\quad - P[G = 0]\log P[G = 0] - P[G = 1]\log P[G = 1]\\
&= H(G) - H(G \mid \mathcal F_T).
\end{aligned}
\tag{3.29}
\]
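The difference H(G) − H(G|F_T) in (3.29) is easy to estimate numerically: given F_T, the increment W_R − W_T is N(0, R − T), so P[G = 1|F_T] = Φ((b − W_T)/√(R − T)) − Φ((a − W_T)/√(R − T)). The following Monte Carlo sketch, with illustrative values of R, T, a and b, estimates the insider's utility gain in this example.

\begin{verbatim}
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(3)
R, T, a, b = 2.0, 1.0, -0.5, 0.5       # illustrative values
n_paths = 200_000

def bernoulli_entropy(p):
    p = np.clip(p, 1e-300, 1 - 1e-16)
    return -(p * np.log(p) + (1 - p) * np.log(1 - p))

p1 = norm.cdf(b / np.sqrt(R)) - norm.cdf(a / np.sqrt(R))        # P[G = 1]
W_T = rng.normal(0.0, np.sqrt(T), n_paths)
p1_T = norm.cdf((b - W_T) / np.sqrt(R - T)) - norm.cdf((a - W_T) / np.sqrt(R - T))

gain = bernoulli_entropy(p1) - bernoulli_entropy(p1_T).mean()   # H(G) - H(G|F_T)
print("estimated insider utility gain E[a_T]:", gain)
\end{verbatim}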
Example 3.2.15. Example 1.3.1 revisited in a general setup.
Suppose that the insider's information about the outcome of the Brownian motion W is possibly distorted by some independent noise, that is, he knows the value of
\[
G := \gamma W_R + (1 - \gamma)\varepsilon,
\]
where the noise ε is a random variable independent of F_R and standard normally distributed, and γ ∈ [0, 1]. For T < R, the conditional distribution of G given F_T is then normally distributed with mean m_T = γW_T and variance σ²_T = γ²(R − T) + (1 − γ)². Indeed, repeating the calculations of Example 1.3.3 with γW_R + (1 − γ)ε instead of W_R + ε, we have that
\[
\begin{aligned}
P[G \in dl \mid \mathcal F_T] &= P[(\gamma W_R - \gamma W_T + \gamma W_T + (1 - \gamma)\varepsilon) \in dl \mid \mathcal F_T]\\
&= P[(\gamma(W_R - W_T) + (1 - \gamma)\varepsilon) \in (dl - y)]\big|_{y = \gamma W_T}\\
&= \frac{1}{\sqrt{2\pi(\gamma^2(R - T) + (1 - \gamma)^2)}}\,\exp\Big(-\frac{(l - \gamma W_T)^2}{2(\gamma^2(R - T) + (1 - \gamma)^2)}\Big)\, dl\\
&= p^l_T\, P[(\gamma W_R + (1 - \gamma)\varepsilon) \in dl]\\
&= p^l_T\, P[G \in dl],
\end{aligned}
\]
where
\[
p^l_T = \sqrt{\frac{\gamma^2 R + (1 - \gamma)^2}{\gamma^2(R - T) + (1 - \gamma)^2}}\,\exp\Big(-\frac{(l - \gamma W_T)^2}{2(\gamma^2(R - T) + (1 - \gamma)^2)} + \frac{l^2}{2(\gamma^2 R + (1 - \gamma)^2)}\Big), \qquad l \in \mathbb R.
\]
Now applying Proposition 3.2.13 we obtain
\[
\begin{aligned}
E_{\mathbb P}[a_T] = H_{\mathcal G_T}\big(\mathbb P \mid \tilde{\mathbb P}_T\big) &= E_{\mathbb P}\big[\log p^G_T\big]\\
&= E_{\mathbb P}\Big[\log\Big(\sqrt{\tfrac{\gamma^2 R + (1 - \gamma)^2}{\gamma^2(R - T) + (1 - \gamma)^2}}\,\exp\Big(-\tfrac{(G - \gamma W_T)^2}{2(\gamma^2(R - T) + (1 - \gamma)^2)} + \tfrac{G^2}{2(\gamma^2 R + (1 - \gamma)^2)}\Big)\Big)\Big]\\
&= \log\sqrt{\tfrac{\gamma^2 R + (1 - \gamma)^2}{\gamma^2(R - T) + (1 - \gamma)^2}} + E_{\mathbb P}\Big[-\tfrac{(G - \gamma W_T)^2}{2(\gamma^2(R - T) + (1 - \gamma)^2)} + \tfrac{G^2}{2(\gamma^2 R + (1 - \gamma)^2)}\Big]\\
&= \frac{1}{2}\log\Big(1 + \frac{\gamma^2 T}{\gamma^2(R - T) + (1 - \gamma)^2}\Big) - E_{\mathbb P}\Big[\tfrac{(\gamma(W_R - W_T) + (1 - \gamma)\varepsilon)^2}{2(\gamma^2(R - T) + (1 - \gamma)^2)}\Big] + E_{\mathbb P}\Big[\tfrac{(\gamma W_R + (1 - \gamma)\varepsilon)^2}{2(\gamma^2 R + (1 - \gamma)^2)}\Big]\\
&= \frac{1}{2}\log\Big(1 + \frac{\gamma^2 T}{\gamma^2(R - T) + (1 - \gamma)^2}\Big).
\end{aligned}
\tag{3.30}
\]
If γ = 1 we are back in Example 1.3.1, and we have
\[
E_{\mathbb P}[a_T] = \frac{1}{2}\log\Big(1 + \frac{T}{R - T}\Big),
\]
and if we let T ↗ R we get E_P[a_T] → ∞. Intuitively this is to be expected: the more the insider knows about the outcome of the random variable G, the bigger his expected utility gain becomes.
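The closed form (3.30) can be tabulated directly; the sketch below, with illustrative values of R and T, shows how the insider's expected utility gain grows with the signal quality γ and with T.

\begin{verbatim}
import numpy as np

def utility_gain(gamma, T, R):
    """Insider's expected logarithmic utility gain, Equation (3.30)."""
    return 0.5 * np.log(1.0 + gamma**2 * T / (gamma**2 * (R - T) + (1.0 - gamma)**2))

R = 1.0                                   # illustrative horizon of the extra information
for gamma in (0.25, 0.5, 0.9, 0.99, 1.0):
    for T in (0.5, 0.9, 0.999):
        print(f"gamma = {gamma:4.2f}, T = {T:5.3f}: gain = {utility_gain(gamma, T, R):8.4f}")
# For gamma = 1 the gain behaves like (1/2) log(1 + T/(R - T)) and blows up as T -> R.
\end{verbatim}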
Example 3.2.16. Example 1.3.3 revisited.
Recall that we suppose that the insider's information about the outcome of the Brownian motion W is distorted by an independent noise, that is, he knows the value of
\[
G := W_R + \varepsilon,
\]
where the noise ε is a random variable independent of F_R and standard normally distributed. For T ≤ R, the conditional distribution of G given F_T is then normally distributed with mean m_T = W_T and variance σ²_T = (R − T) + 1, and it is equivalent to the law of G; their density process is given by the calculations of Example 1.3.3, that is,
\[
p^l_T = \sqrt{\frac{R + 1}{R - T + 1}}\,\exp\Big(-\frac{(l - W_T)^2}{2(R - T + 1)} + \frac{l^2}{2(R + 1)}\Big), \qquad l \in \mathbb R.
\]
Now applying Proposition 3.2.13 we obtain
\[
\begin{aligned}
E_{\mathbb P}[a_T] = H_{\mathcal G_T}\big(\mathbb P \mid \tilde{\mathbb P}_T\big) &= E_{\mathbb P}\big[\log p^G_T\big]\\
&= E_{\mathbb P}\Big[\log\Big(\sqrt{\tfrac{R + 1}{R - T + 1}}\,\exp\Big(-\tfrac{(G - W_T)^2}{2(R - T + 1)} + \tfrac{G^2}{2(R + 1)}\Big)\Big)\Big]\\
&= \log\sqrt{\tfrac{R + 1}{R - T + 1}} + E_{\mathbb P}\Big[-\tfrac{(G - W_T)^2}{2(R - T + 1)} + \tfrac{G^2}{2(R + 1)}\Big]\\
&= \frac{1}{2}\log\Big(1 + \frac{T}{R - T + 1}\Big) - E_{\mathbb P}\Big[\tfrac{(W_R - W_T + \varepsilon)^2}{2(R - T + 1)}\Big] + E_{\mathbb P}\Big[\tfrac{(W_R + \varepsilon)^2}{2(R + 1)}\Big]\\
&= \frac{1}{2}\log\Big(1 + \frac{T}{R - T + 1}\Big).
\end{aligned}
\tag{3.31}
\]
If we let T ↗ R we find that the utility gain of the insider up to time R is
\[
E_{\mathbb P}[a_R] = \frac{1}{2}\log(1 + R).
\]
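The result (3.31), together with the representation (3.24) of a_T, can be checked by simulation. For this example the information drift is κ^G_t = (G − W_t)/(R − t + 1) (the standard formula in this Gaussian setting, consistent with the density p^l_T above), so the sketch below simulates W on [0, T], draws G = W_R + ε, integrates ½(κ^G_t)² numerically and compares the average with ½ log(1 + T/(R − T + 1)); all parameter values are illustrative.

\begin{verbatim}
import numpy as np

rng = np.random.default_rng(4)
R, T = 2.0, 1.5                            # illustrative values, T < R
n_steps, n_paths = 400, 10_000
dt = T / n_steps

dW = rng.normal(0.0, np.sqrt(dt), (n_paths, n_steps))
W = np.cumsum(dW, axis=1)                                  # W at t_1, ..., t_n = T
W_left = np.hstack([np.zeros((n_paths, 1)), W[:, :-1]])    # W at 0, t_1, ..., t_{n-1}
eps = rng.normal(0.0, 1.0, n_paths)
W_R = W[:, -1] + rng.normal(0.0, np.sqrt(R - T), n_paths)
G = W_R + eps                                              # insider's signal

t_left = np.arange(n_steps) * dt
kappa = (G[:, None] - W_left) / (R - t_left + 1.0)         # information drift kappa^G_t
a_T = 0.5 * np.sum(kappa**2, axis=1) * dt                  # a_T = (1/2) int_0^T (kappa^G_t)^2 dt

print("Monte Carlo E[a_T] :", a_T.mean())
print("closed form (3.31) :", 0.5 * np.log(1.0 + T / (R - T + 1.0)))
\end{verbatim}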
Chapter 4
Enlargement of filtrations and Girsanov's theorem

In the first two chapters we treated the case of initial enlargement of filtrations by the information represented by a random variable G. In this chapter we reconsider the same problem from a more general perspective and a different angle.
4.1 Preliminaries and notations
Let (Ω, F, P) be a probability space with right-continuous filtrations F = (F_t)_{t≥0} and K = (K_t)_{t≥0}. Moreover, let F_∞ = ⋁_{t≥0} F_t and K_∞ = ⋁_{t≥0} K_t. The objective is to study the enlarged filtration G defined by
\[
\mathcal G_t := \bigcap_{s>t}(\mathcal F_s \vee \mathcal K_s), \qquad t \ge 0.
\]
We relate this enlargement to a measure change on the product space Ω̄ := Ω × Ω, equipped with the σ-algebra F̄ := F_∞ ⊗ K_∞. We endow Ω̄ with the filtration F̄ = (F̄_t)_{t≥0}, where
\[
\bar{\mathcal F}_t := \bigcap_{s\ge t}(\mathcal F_s \otimes \mathcal K_s), \qquad t \ge 0.
\]
We embed Ω into Ω̄ by the map
\[
\varphi : (\Omega, \mathcal F) \to (\bar\Omega, \bar{\mathcal F}), \qquad \omega \mapsto (\omega, \omega).
\]
Moreover, we define a probability measure on the measurable space (Ω̄, F̄), denoted by P̄, as the image of the probability measure P under ϕ, i.e. P̄ := P_ϕ. Hence for all F̄-measurable functions f̄ : Ω̄ → R we have
\[
\int \bar f(\omega, \omega')\, d\bar{\mathbb P}(\omega, \omega') = \int \bar f(\omega, \omega)\, d\mathbb P(\omega). \tag{4.1}
\]
Finally, we define the probability measure Q̄ on the measurable space (Ω̄, F̄) by
\[
\bar{\mathbb Q} := \mathbb P|_{\mathcal F_\infty} \otimes \mathbb P|_{\mathcal K_\infty},
\]
the product measure of the restrictions of P to F_∞ and K_∞, respectively.
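On a finite sample space the two measures can be written down explicitly: on a product set A × B with A ∈ F_∞ and B ∈ K_∞ one has P̄(A × B) = P(A ∩ B), while Q̄(A × B) = P(A)P(B). The toy computation below (four equally likely points and two hypothetical partitions, purely for illustration) tabulates both measures and their ratio on the atoms of F_∞ ⊗ K_∞; this ratio is the density that drives the Girsanov-type result of the next section. In such a finite setting absolute continuity of P̄ with respect to Q̄ holds automatically; in general it is exactly the content of Assumption 4.2.1 below.

\begin{verbatim}
from itertools import product

# Toy example: Omega = {0, 1, 2, 3} with the uniform measure P (illustrative).
P = {w: 0.25 for w in range(4)}

# F_infty generated by the partition {0,1} | {2,3};  K_infty by {0} | {1,2,3}.
F_atoms = [{0, 1}, {2, 3}]
K_atoms = [{0}, {1, 2, 3}]

def prob(A):
    return sum(P[w] for w in A)

for A, B in product(F_atoms, K_atoms):
    P_bar = prob(A & B)            # image of P under the diagonal map omega -> (omega, omega)
    Q_bar = prob(A) * prob(B)      # product measure  P|F_infty  x  P|K_infty
    print(f"A = {sorted(A)}, B = {sorted(B)}:  Pbar = {P_bar:.3f}, "
          f"Qbar = {Q_bar:.3f}, dPbar/dQbar = {P_bar / Q_bar:.3f}")
\end{verbatim}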
Since we are dealing with different probability measures, and most of the results only hold for completed filtrations, we use the following notation to specify with respect to which probability measure the completion is taken. Let (C_t)_{t≥0} be a filtration and R a probability measure. We denote by (C_t^R)_{t≥0} the filtration (C_t)_{t≥0} completed by the R-negligible sets.
We start with a simple observation.
Lemma 4.1.1. If f̄ : Ω̄ → R is F̄_t^P̄-measurable, then the map f̄ ∘ ϕ is G_t^P-measurable.

Proof. First, observe that
\[
\begin{aligned}
\mathcal G_t &= \bigcap_{s>t}\sigma(A \cap B : A \in \mathcal F_s, B \in \mathcal K_s)
= \bigcap_{s>t}\sigma(\varphi^{-1}(A \times B) : A \in \mathcal F_s, B \in \mathcal K_s)\\
&= \varphi^{-1}\Big(\bigcap_{s>t}\sigma(A \times B : A \in \mathcal F_s, B \in \mathcal K_s)\Big)
= \varphi^{-1}\Big(\bigcap_{s\ge t}(\mathcal F_s \otimes \mathcal K_s)\Big) = \varphi^{-1}(\bar{\mathcal F}_t).
\end{aligned}
\]
Let f̄ = 1_A with A ∈ F̄_t^P̄. Then there is a set B ∈ F̄_t such that P̄(A △ B) = 0. From the first part we deduce that the map 1_B ∘ ϕ is G_t-measurable. And since 1_A ∘ ϕ = 1_B ∘ ϕ P-almost surely, the map 1_A ∘ ϕ is G_t^P-measurable. Define S by
\[
S := \{\bar g : \bar g \text{ is bounded and } \bar{\mathcal F}_t^{\bar{\mathbb P}}\text{-measurable, and } \bar g \circ \varphi \text{ is } \mathcal G_t^{\mathbb P}\text{-measurable}\}.
\]
Then S is a vector space and 1 ∈ S. Now let (h̄_n)_n be a sequence of functions in S such that 0 ≤ h̄_n ↑ h̄ with h̄ bounded. Then h̄ is F̄_t^P̄-measurable and 0 ≤ h̄_n ∘ ϕ ↑ h̄ ∘ ϕ, hence h̄ ∘ ϕ is G_t^P-measurable, whence h̄ ∈ S. Since S contains the indicator functions, the monotone class theorem (Theorem A.1.4) yields the statement for arbitrary F̄_t^P̄-measurable functions.
Lemma 4.1.2. If X̄ is F̄^P̄-predictable, then X̄ ∘ ϕ is G^P-predictable.

Proof. Let 0 < s ≤ t, A ∈ F̄_s^P̄ and ϑ̄ = 1_A 1_{]s,t]}; then by Lemma 4.1.1, ϑ̄ ∘ ϕ = (1_A ∘ ϕ)1_{]s,t]} is G^P-predictable. The proof is completed by a monotone class argument.
Lemma 4.1.3. Let X̄ be an F̄^P̄-adapted process; then X = X̄ ∘ ϕ is a G^P-adapted process. Moreover, if X̄ is a local (P̄, F̄^P̄)-martingale, then X is a local (P, G^P)-martingale.

Proof. The first part of the lemma, concerning adaptedness, is a consequence of Lemma 4.1.1. For the second part, let X̄ first be a (P̄, F̄^P̄)-martingale, and let 0 ≤ s < t and A ∈ G_s. Then there exists a set B ∈ F̄_s such that A = ϕ^{-1}(B), and hence
\[
E_{\mathbb P}\big[\mathbf 1_A(X_t - X_s)\big] = E_{\bar{\mathbb P}}\big[\mathbf 1_B(\bar X_t - \bar X_s)\big] = 0.
\]
Whence X is a G^P-martingale.
Now for the case of local martingales it is enough to show that if (T̄_n)_{n≥0} is a localizing sequence of F̄_t^P̄-stopping times, then T_n = T̄_n ∘ ϕ, n ≥ 0, is a localizing sequence of G_t^P-stopping times. Indeed, this is the case since
\[
\{T_n \le t\} = \varphi^{-1}(\{\bar T_n \le t\}) \in \varphi^{-1}(\bar{\mathcal F}_t^{\bar{\mathbb P}}) \subset \mathcal G_t^{\mathbb P}.
\]
Theorem 4.1.4. Let S̄ be a càdlàg (P̄, F̄^P̄)-semimartingale; then S := S̄ ∘ ϕ is a càdlàg (P, G^P)-semimartingale.
Proof. Let S̄ be an F̄^P̄-semimartingale and S = S̄ ∘ ϕ. The process S has càdlàg paths P-a.s., and by Lemma 4.1.3, S is G^P-adapted. By the theorem of Bichteler-Dellacherie-Mokobodsky (see Theorem A.3.1 in the Appendix), it is sufficient to show that if (θ^n) is a sequence of simple G-adapted integrands converging uniformly to 0, then the sequence of simple integrals (θ^n · S) converges to 0 in probability relative to P. We recall that any G-simple integrand is of the form Σ_{1≤i≤n} 1_{]t_i, t_{i+1}]} θ_i, with θ_i G_{t_i}-measurable. Since G_t = ϕ^{-1}(F̄_t), we can find simple F̄-adapted processes (θ̄^n) converging uniformly to zero such that θ̄^n ∘ ϕ = θ^n. Since S̄ is a semimartingale, the sequence (θ̄^n · S̄) converges to zero in probability relative to P̄, and hence (θ^n · S) converges to 0 in probability relative to P.
Until now we have seen how objects can be translated from Ω̄ to Ω. Now we look at the
reverse transfer.
Lemma 4.1.5. Let M be a right-continuous local (P, F_t^P)-martingale. Then the process defined by M̄(ω, ω') := M(ω) is a right-continuous local (Q̄, F̄_t^Q̄)-martingale.

Proof. That M̄ is F̄^Q̄-adapted is obvious. For A ∈ F_s, B ∈ K_s we have
\[
E_{\bar{\mathbb Q}}\big[\mathbf 1_A(\omega)\mathbf 1_B(\omega')(\bar M_t - \bar M_s)\big] = \mathbb P(B)\, E_{\mathbb P}\big[\mathbf 1_A(M_t - M_s)\big] = 0.
\]
Then, by the monotone class theorem, E_Q̄[θ(M̄_t − M̄_s)] = 0 for all bounded F_s ⊗ K_s-measurable functions θ. Since M̄ is right-continuous, this remains true for all bounded ∩_{u>s}(F_u ⊗ K_u)-measurable θ, whence M̄ is a (Q̄, F̄_t^Q̄)-martingale.
Stopping times can be extended to the product space via the transformation T̄(ω, ω') := T(ω). We then get
\[
\bar M^{\bar T(\omega,\omega')}_t(\omega, \omega') = M^{T(\omega)}_t(\omega),
\]
whence the local martingale property translates to the product space Ω̄ with respect to Q̄.
4.2 Girsanov-type results for the change of filtrations
In the rest of this chapter we will assume the following.

Assumption 4.2.1. The probability measure P̄ is absolutely continuous with respect to the measure Q̄ on F̄, i.e. P̄ ≪ Q̄.
Now let M be a local (P, F^P)-martingale and M̄ its extension to Ω̄ as given in Lemma 4.1.5. By Assumption 4.2.1, M̄ is a (P̄, F̄^P̄)-semimartingale and hence, by Theorem 4.1.4, M is a (P, G^P)-semimartingale. Thus the hypothesis (H') is satisfied. But what does the local martingale part of M relative to (P, G^P) look like?
The change of filtrations corresponds to changing the measure from Q̄ to P̄ on the product space Ω̄. Using Assumption 4.2.1, Girsanov's theorem applies on Ω̄. As a consequence we obtain a Girsanov-type result for the corresponding change of filtrations. To get this, we first introduce the density process. Let Z̄ = (Z̄_t)_{t≥0} denote a càdlàg F̄^Q̄-adapted process with
\[
\bar Z_t := \frac{d\bar{\mathbb P}}{d\bar{\mathbb Q}}\Big|_{\bar{\mathcal F}_t^{\bar{\mathbb Q}}}, \qquad t \ge 0.
\]
Note that we consider the completed filtration in order to ensure the existence of a càdlàg density process. Theorem 4.1.4 then implies that the process Z defined by Z = Z̄ ∘ ϕ is a (P, G^P)-semimartingale. Before giving the Girsanov-type results, we study the behaviour of quadratic variation processes under the map ϕ.
Lemma 4.2.2. Let X̄ and Ȳ be continuous (P̄, F̄^P̄)-semimartingales. If X = X̄ ∘ ϕ and Y = Ȳ ∘ ϕ, then
\[
\langle \bar X, \bar Y\rangle \circ \varphi = \langle X, Y\rangle
\]
up to indistinguishability relative to P.
Proof. Set X = X̄ ∘ ϕ and Y = Ȳ ∘ ϕ. Let t > 0 and t^n_i = t\,i/2^n for i = 0, 1, ..., 2^n. We know that the sums
\[
\bar X_0 \bar Y_0 + \sum_{i=0}^{2^n-1}\big(\bar X_{t^n_{i+1}} - \bar X_{t^n_i}\big)\big(\bar Y_{t^n_{i+1}} - \bar Y_{t^n_i}\big)
\]
converge to ⟨X̄, Ȳ⟩_t in probability relative to P̄. Hence ⟨X̄, Ȳ⟩_t ∘ ϕ is the limit, in probability relative to P, of the sums
\[
X_0 Y_0 + \sum_{i=0}^{2^n-1}\big(X_{t^n_{i+1}} - X_{t^n_i}\big)\big(Y_{t^n_{i+1}} - Y_{t^n_i}\big).
\]
But this limit is also equal to ⟨X, Y⟩_t, and hence we have
\[
\langle \bar X, \bar Y\rangle_t \circ \varphi = \langle X, Y\rangle_t.
\]
Since both processes are continuous, they coincide up to indistinguishability relative to P.
Let M̄ be a continuous (Q̄, F̄^Q̄)-semimartingale and M = M̄ ∘ ϕ. By Assumption 4.2.1, M̄ is also a continuous (P̄, F̄^P̄)-semimartingale. Moreover, the process ⟨M̄, Z̄⟩ computed relative to Q̄ is P̄-indistinguishable from the one computed relative to P̄. Similarly, Lemma 4.2.2 implies that the process ⟨M, Z⟩ coincides with ⟨M̄, Z̄⟩ ∘ ϕ.
Theorem 4.2.3. If M is a continuous local (P, F^P)-martingale with M_0 = 0, then
\[
M - \frac{1}{Z}\cdot\langle M, Z\rangle \tag{4.2}
\]
is a continuous local (P, G^P)-martingale.
Proof. Let M be a continuous local (P, F^P)-martingale with M_0 = 0. Then by Lemma 4.1.5, the process defined by M̄(ω, ω') = M(ω) is a continuous local (Q̄, F̄^Q̄)-martingale. Girsanov's theorem then yields that the process
\[
\bar M - \frac{1}{\bar Z}\cdot\langle \bar M, \bar Z\rangle
\]
is a continuous local (P̄, F̄^P̄)-martingale. By Lemma 4.2.2, the process ⟨M̄, Z̄⟩ ∘ ϕ is P-indistinguishable from the process ⟨M, Z⟩, whence we have
\[
\Big(\frac{1}{\bar Z}\cdot\langle \bar M, \bar Z\rangle\Big)\circ\varphi = \frac{1}{Z}\cdot\langle M, Z\rangle
\]
up to P-indistinguishability. Finally, with Lemma 4.1.3 we have that
\[
M - \frac{1}{Z}\cdot\langle M, Z\rangle \tag{4.3}
\]
is a continuous local (P, G^P)-martingale.
4.3 Conclusion
In the preceding chapters the filtration is supposed to be enlarged by some random variable G taking values in a Polish space (U, U). As a consequence, for every t ∈ [0, T] regular conditional distributions of G given F_t exist. The development in Chapter 1 is based on the paper of Jacod [9], who does not use Girsanov's theorem but assumes condition (A') to be satisfied. He pointed out that his result could also be obtained by applying Girsanov's theorem to the conditional measures P^l = P[·|G = l], l ∈ U. Condition (A') implies that the conditional measures P^l are absolutely continuous with respect to P. Hence, by Girsanov's theorem, for a given local (P, F_t)-martingale M there is a process A^l such that M − A^l is a local (P^l, F_t)-martingale. By combining the processes A^l one obtains that M − A^G is a local (P, G_t)-martingale. Combining the processes A^l in a meaningful way, however, has never been worked out rigorously.
In the approach of Ankirchner, Dereich and Imkeller [4], every local martingale is embedded into the product space Ω̄. They apply Girsanov's theorem on the product space and then translate the results back into the original space. One of the advantages is that there is no need to assume that regular conditional distributions exist, and there is also no need to show how the processes A^l can be combined. Instead one shows how objects can be translated from Ω to Ω̄ and vice versa. Moreover, the method is not restricted to initial enlargements; dynamic enlargements of the kind
\[
\mathcal G_t = \bigcap_{s>t}(\mathcal F_s \vee \mathcal K_s), \qquad t \ge 0,
\]
can also be treated.
Appendix A
Some Important Theorems and Lemmas

In this appendix we collect the most important theorems from abstract probability theory and the theory of stochastic processes that we have used in the proofs of some results.
A.1 Appendix Chapter 1

The following theorem and corollary ensure the existence of regular conditional probabilities.
Theorem A.1.1 (Shiryaev [15], Theorem 5).
Let X be a random variable in (Ω, A, P) with values in a Borel space (E, E). Then there is a regular conditional probability of X with respect to A.
Corollary A.1.2 (Shiryaev [15], Corollary that follows Theorem 5).
Let X be a random variable in (Ω, A, P) with values in a complete separable metric space (E, E).
Then there is a regular conditional distribution of X with respect to A. In particular such a
distribution exists for the spaces (Rn , B(Rn )) and (R∞ , B(R∞ )).
Theorem A.1.3 (Théorème de Doob, Dellacherie [5]).
Let x ↦ P_x be a Markovian kernel from (X, X) into (Y, Y) and let x ↦ Q_x be a bounded kernel from (X, X) into (Y, Y), such that Q_x is absolutely continuous with respect to P_x for all x ∈ X. Then there exists a non-negative X ⊗ Y-measurable function g such that
\[
Q(x, dy) = g(x, y)\, P(x, dy) \quad\text{for every } x \in X.
\]
Furthermore, we can assume that g(x, ·) = g(ξ, ·) for every pair (x, ξ) of elements of X such that P_x = P_ξ and Q_x = Q_ξ.
Theorem A.1.4 (Williams [16], Monotone Class Theorem for Functions).
Let H be a class of bounded functions from Ω into R satisfying the following conditions:
(i) H is a vector space over R;
(ii) the constant function 1 is an element of H;
(iii) if (f_n)_n is a sequence of non-negative functions in H such that f_n ↑ f, where f is a bounded function on Ω, then f ∈ H.
If H contains the indicator function of every set in some π-system I, then H contains every bounded σ(I)-measurable function on Ω.
A.2 Appendix Chapter 2
Theorem A.2.1 (Jacod [8], Théorème 7).
Let Y be a semimartingale relative to F and to G. Every F-predictable process ϑ ∈ L_sm(Y, Q_G, F) is in L_sm(Y, Q_G, G), and the stochastic integral ∫ ϑ dY relative to F is the same as the one relative to G.
Corollary A.2.2 (Jacod [7], Corollaire 9.21).
Let F and G be two filtrations on the probability space (Ω, A, P) such that F is a subfiltration of G, and let X ∈ M_loc(G) ∩ M_loc(F) and q ∈ [1, ∞). Then
\[
L^q_{loc}(X, \mathbb G) = \mathcal P(\mathbb G) \cap L^q_{loc}(X, \mathbb F).
\]
If H ∈ P(G) ∩ L^q_{loc}(X, F), then there exists a common version ∫ H dX of the processes F-∫ H dX and G-∫ H dX.
Theorem A.2.3 (Malliavin [11], Theorem 3.5.1).
Let G and F be two sub-σ-algebras of the probability space (Ω, A, P) and let H denote the σ-algebra they generate. Let V be the vector subspace of L^∞(A) defined by
\[
V = \Big\{h \in L^\infty(\mathcal A) : h = \sum_{i=1}^{n} f_i g_i, \ \text{with } f_i \in L^\infty(\mathcal F),\ g_i \in L^\infty(\mathcal G),\ n \in \mathbb N\Big\}.
\]
Then V ⊂ L²(H) and V is dense in L²(H).
Theorem A.2.4 (He, Wang and Yan [6], Theorem 10.5).
The collection of all bounded martingales denoted by M∞ is dense in H1 .
Theorem A.2.5 (He, Wang and Yan [6], Theorem 13.4).
Assume M ∈ M_loc,0. Then the following statements are equivalent:
(1) L(M) = M_loc,0, i.e., M has the strong property of predictable representation;
(2) L¹(M) = M¹_0;
(3) M^∞_0 ⊂ L(M).
A.3 Appendix Chapter 4
(Ω, F, F, P) is a filtered probability space. Let H be the vector space of bounded predictable processes H of the form
\[
H = \sum_{i=1}^{n-1} h_i\,\mathbf 1_{]t_i, t_{i+1}]},
\]
where 0 < t_1 < t_2 < ... < t_n and h_i is bounded and F_{t_i}-measurable.
If X is a real process, we associate to every H ∈ H the process
\[
J(X, H)_t = \sum_{i=1}^{n-1} h_i\big(X_{t\wedge t_{i+1}} - X_{t\wedge t_i}\big).
\]
The following theorem gives a characterization of semimartingales.

Theorem A.3.1 (Jacod [7], Theorem 9.3, p. 279).
An F-adapted càdlàg process X is a semimartingale if and only if for every sequence (H^n)_{n≥0} of elements of H converging uniformly to H ∈ H and for every t ≥ 0, the variables J(X, H^n)_t converge in probability to J(X, H)_t.
Index

(Ω, F, P)  1
F  1
G  1
(U, U)  1
G  1
S  1
H, H_0, Ĥ  2
Ω̂  2
O(H_0)  2
P(H_0)  3
p^l  4
p^G, 1/p^G  4
P̃  4
M(R, H)  10
H^p(R, H)  10
H^∞(R, H)  10
L^p(M, R, H)  10
L_sm(S, R, H)  10
H²_loc(Q^F, F)  10
Q^F  10
Z^F  10
Z^G  11
Q^G  11
V(x, π)  25
A_H(x, T)  25
H_F(P | Q)  29
Bibliography
[1] Amendinger, J., (2000): Martingale representation theorems for initially enlarged filtrations. Stochastic Processes and their Applications 89, 101-116.
[2] Amendinger, J., Becherer, D., Schweizer, M., (2003): A monetary value for initial information in portfolio optimization. Finance and Stochastics 7, 29-46.
[3] Amendinger, J., Imkeller, P., Schweizer, M., (1998): Additional logarithmic utility of an
insider. Stochastic Processes and their Applications 75, 263-286.
[4] Ankirchner, S., Dereich, S., Imkeller, P., (2004): Enlargement of filtrations and continuous
Girsanov-type embeddings.
[5] Dellacherie, C., (1979/1980): Sur les noyaux σ-finis. Séminaire de Probabilités XV, Lecture Notes in Mathematics, Vol. 850. Springer-Verlag, pp. 371-387.
[6] He, S-w., Wang, J-g., Yan, J-a., (1992): Semimartingale Theory and Stochastic Calculus.
CRC Press INC.
[7] Jacod, J., (1979): Calcul Stochastique et Problèmes de Martingales. Lecture Notes in Mathematics, 714. Springer-Verlag.
[8] Jacod, J., (1980): Sur les Intégrales Stochastiques par Rapport à une Semimartingale Vectorielle et Changement de Filtration. Séminaire de Probabilités XIV, Lecture Notes in Mathematics, Vol. 75. Springer-Verlag, pp. 263-286.
[9] Jacod, J., (1985): Grossissement Initial, Hypothèse (H’) et Théorème de Girsanov. In:
Jeulin, Th., Yor, M. (eds.), Grossissement de filtration: exemples et applications, Lecture
Notes in Mathematics, 1118. Springer-Verlag.
[10] Jeulin, Th., (1980): Semi-Martingales et Grossissement d’une Filtration. Lecture Notes in
Mathematics, 833. Springer-Verlag.
[11] Malliavin, P., (1995): Integration and Probability. Springer-Verlag.
[12] Meyer, P.-A., (1975): Probabilités et Potentiel. Chapitres I à IV. Hermann.
[13] Pikovsky, I., Karatzas, I., (1996): Anticipative portfolio optimization. Advances in Applied
Probability 28, 1095-1122.
[14] Protter, P., (1990): Stochastic Integration and Differential Equations, A New Approach.
Springer-Verlag.
[15] Shiryaev, A.N., (1995): Probability. Second Edition. Springer.
[16] Williams, D., (1991): Probability with Martingales. Cambridge University Press.