On some connections between probability and differential equations

Rafael Granero Belinchón
[email protected]

Master Thesis in Partial Differential Equations: Random and Deterministic Modelling.
Advisor: Mr. Jesús García Azorero
Contents

Introduction

1 First results
  1.1 The brownian motion
  1.2 Existence and uniqueness theorem for SDE
  1.3 The Wiener measure
  1.4 Markov processes and semigroups of operators

2 Elliptic equations
  2.1 The laplacian
  2.2 Poisson equation and shape recognition
  2.3 A general elliptic equation
  2.4 Unbounded domains

3 Parabolic equations
  3.1 A general parabolic equation
  3.2 The Fisher equation
  3.3 Feynman and quantum mechanics

4 Fluid dynamics
  4.1 The 1-dimensional Burgers equation
  4.2 The d-dimensional Burgers equations
  4.3 The incompressible Navier-Stokes equations
  4.4 Proof of local existence for Navier-Stokes

5 Differential games and equations
  5.1 The operators
  5.2 The games
    5.2.1 'Tug of war'
    5.2.2 Approximations by SDE to Δ∞
    5.2.3 Existence of the game's value for the 'Tug of war'
    5.2.4 'Tug of war with noise'
    5.2.5 Spencer game
    5.2.6 Other games

6 Numerical experiments

7 Conclusion

A Some useful results
  A.1 A construction for the brownian motion
  A.2 Kolmogorov's regularity theorem
  A.3 The Itô formula
  A.4 Existence and uniqueness for PDE

B The Itô integral

C Matlab code
  C.1 Brownian motion paths
  C.2 Brownian bridge paths
  C.3 Euler method for an SDE
    C.3.1 1D case
    C.3.2 2D case
  C.4 Monte-Carlo method for the laplacian
  C.5 Silhouette recognition
  C.6 Monte-Carlo method for parabolic equations
  C.7 Code to approximate the ∞-laplacian
List of Figures

1.1 Two brownian paths.
1.2 A brownian path in the plane.
1.3 Some solution paths of the Langevin equation.
1.4 A solution path of the 2D Langevin equation.
1.5 The cylinders.
1.6 Paths of a brownian bridge.
2.1 Numerical experiment, silhouette and u.
2.2 Results, Φ in the upper figure, Ψ in the lower figure.
3.1 Traveling wave solution of (3.3).
4.1 Navier-Stokes solution at time 10.
4.2 Stokes problem solution.
4.3 Inviscid Burgers equation at different times.
4.4 Burgers equation with different dissipation rates.
4.5 Flow.
5.1 An ∞-harmonic function.
5.2 The possible positions for the 'Tug of war with noise' game.
6.1 The numerical solution.
6.2 Initial value.
6.3 Numerical solution at time 4.
Introduction
It may appear that partial differential equations and probability theory are very different fields of study. But when we study these fields more deeply we find multiple connections between them (representation formulas, new numerical methods...). We are going to display some of these relationships. We will mainly study them from a partial differential equations point of view; however, at the same time, we will state the probabilistic results, giving general ideas of the proofs and references for the technical details. This approach will permit us to solve some problems in an easier way, or, at least, in a different way. In addition, from a numerical analysis point of view, it is useful because it gives us the opportunity to use the Monte-Carlo method to approximate the solution of a PDE. Other applications derived from these calculations are functional integration, a key point in quantum theory, and a new method (we will talk about it later) for silhouette recognition ([GGSBB]).
This text consists of two parts. In the first part we will obtain representation formulas, as integrals in a certain functional space, for the solutions of diverse PDEs. We will give another proof of the local (in time) existence of classical solutions for the Navier-Stokes equations in 3D. For this topic, we will follow closely the work of G. Iyer and P. Constantin contained in the Ph.D. thesis of the former and some papers of both of them ([CI],[Iy],[Iy2],[C]). We will also study some equations from quantum mechanics; in particular we will go in depth into Feynman's formulation of quantum mechanics ([FH],[Fe],[Fe2],[GJ],[S],[Z]). This method of considering the Itô diffusions can be understood as the method of characteristics, but with random characteristics (see chapter 4).

Afterwards we will consider problems related to the infinity laplacian, with an approximation based on game theory. For this we follow the work of Y. Peres, S. Sheffield, D. Wilson and O. Schramm ([PSSW]).

Finally, the second part is dedicated to the appendices, which contain technical results that complement the other chapters.
We do not want to finish this introduction without explicitly mentioning the great mathematicians and physicists responsible for the development of this theory, like A. Einstein for his papers about the brownian motion ([E]), and R. Feynman ([Fe],[Fe2],[FH]) and P. Dirac for the idea of 'integrating over functions' and for giving form to the third formulation of quantum theory. The rigour of these calculations was provided by N. Wiener and M. Kac ([K],[K2]); the latter has a paper which inspires the title of this text. We also have to mention K. Itô, who gave us the result that bears his name, and S. Ulam, responsible for the Monte-Carlo method. We would be pleased if this text serves as a humble homage. It is a curious fact to observe in how few square kilometers these ideas were developed: Wiener, Feynman and Ulam knew each other from Los Álamos, where they helped develop the atomic bomb, and Kac was Feynman's companion at Cornell University.

We also want to acknowledge various people for their contributions, especially Mr. Jesús García Azorero (Universidad Autónoma de Madrid), for his effort and care. Also Mr. Rafael Orive Illera (Universidad Autónoma de Madrid), for his review of the draft version of this text, Mr. Massimiliano Gubinelli (Université Paris-Dauphine) for his interesting explanations, Mr. Bela Farago (Institut Laue-Langevin) for his hospitality, and Mr. Julio D. Rossi (Universidad de Buenos Aires) and Mr. Fernando Charro (Universidad Autónoma de Madrid) for their explanations, very useful for the fifth chapter. Less formal but equally useful was the help that Ms. Eva Martínez García, Mr. David Paredes Barato and Mr. Jesús Rabadán Toledo gave me.
Chapter 1
First results
In this chapter we give the results and definitions we will use later. We give some properties of the brownian motion paths and of stochastic differential equations. In appendix A there is a construction of the brownian motion (for more properties of this object see [Du]). In this chapter we also define and construct the Wiener measure, and we study the relation between semigroups of operators and Markov processes.
1.1 The brownian motion
The brownian motion is the random motion we can observe in certain microscopic particles in a fluid (for example, pollen suspended in water). Robert Brown observed this highly irregular motion in 1827. These particles move because their surfaces are randomly hit, with random force, by the fluid molecules. Diffusion is a phenomenon based on the brownian motion.
The first person to describe the brownian motion mathematically was Thorvald N. Thiele in 1880, in a paper about least squares. Louis Bachelier, in his Ph.D. thesis in 1900, gave a stochastic approximation to market fluctuations. However it was Einstein who, in 1905, studied this phenomenon and rederived and extended the previous results. In those years the atomic and molecular nature of matter was a controversial idea. Einstein and Marian Smoluchowski proved that if the kinetic theory of fluids was correct then the water molecules would have random motions.
Consider a 2-dimensional grid (for space and time) {(ndx, mdt), m, n ∈ Z} with increments dx and dt, and a particle which starts at time 0 at x = 0. The probability of a movement towards the right for this particle is 1/2; the probability of a movement towards the left is the same. Our particle automatically moves up (the vertical axis is for the time). As we said before, this is a model of the position of a particle with random motion caused by the random hits.

Let p(n, m) be the probability, for this particle, of being in the position ndx at time mdt.
Using conditional probabilities, we have

p(n, m + 1) = (1/2)(p(n − 1, m) + p(n + 1, m));

thus,

p(n, m + 1) − p(n, m) = (1/2)(p(n − 1, m) − 2p(n, m) + p(n + 1, m)).

If we suppose

dx²/dt = D > 0   (1.1)

we can write

(p(n, m + 1) − p(n, m))/dt = (D/2)(p(n − 1, m) − 2p(n, m) + p(n + 1, m))/dx².
The quotient condition we suppose in (1.1) is needed to obtain a parabolic equation; if we consider a different condition the resulting limit does not make any sense.
Formally, assuming the limits we take exist, letting dx, dt → 0 with (1.1) and writing ndx = x, mdt = t, our discrete probability converges to a density,

p(n, m) → f(x, t),

and we obtain that the density verifies the heat equation with diffusion parameter D/2:

∂ₜf(x, t) = (D/2)Δf(x, t), f(x, 0) = δ₀(x).   (1.2)
The hypothesis (1.1) is a key point and gives us the diffusion equation, as we expect from the model we consider.

These calculations are formal; indeed, the previous limit is not rigorous. However we can make it rigorous using the central limit theorem. This theorem shows that the density is that of a normal distribution N(0, Dt). All these calculations are justified in [Ev]. Einstein studied this problem in [E]. Our formal arguments show us that there is a relationship between probability and PDEs.
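These formal calculations can be checked numerically. The following Python sketch (the thesis's own numerical codes are in Matlab, appendix C; this translation and its parameter values D = 2, t = 1 are only illustrative) simulates the random walk under the scaling (1.1) and checks that the empirical mean and variance match those of N(0, Dt):

```python
import random, math

def walk_position(t, dt, D):
    """Position at time t of the symmetric random walk with dx^2/dt = D."""
    dx = math.sqrt(D * dt)
    return sum(random.choice((-dx, dx)) for _ in range(int(t / dt)))

random.seed(0)
D, t, dt = 2.0, 1.0, 0.01
samples = [walk_position(t, dt, D) for _ in range(2000)]
mean = sum(samples) / len(samples)
var = sum((x - mean) ** 2 for x in samples) / len(samples)
# by the central limit theorem the position is approximately N(0, D*t)
```

Refining dt while keeping dx²/dt = D fixed leaves the limiting variance D·t unchanged, which is exactly why the scaling (1.1) is the right one.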
We give a definition and some results about the brownian motion.
Definition 1. Let (Ω, B, P) be a probability space. We say that a process W(ω, t)¹, W : Ω × [0, T] → R, is a brownian motion if the following conditions hold:

1. W(ω, 0) = 0 and t ↦ W(ω, t) is continuous a.e.

2. W(ω, t) − W(ω, s) ∼ N(0, t − s) ∀t ≥ s > 0.

3. The increments are independent.

¹We will use the notation W(t) for the brownian motion. However, to emphasize the idea of the brownian motion as a random variable taking values in a functional space, we will write W(ω) ∈ C([0, T]) or ω(t).
Figure 1.1: Two brownian paths.
Let t₁, t₂, ..., tₙ be times and B₁, ..., Bₙ intervals. We can calculate the probability that a brownian path takes values in Bᵢ at time tᵢ. Indeed, let

p(t, x, y) = (1/√(2πt)) exp(−|x − y|²/2t).

Then

P(a₁ < W(t₁) < b₁, ..., aₙ < W(tₙ) < bₙ) = ∫_{B₁} ... ∫_{Bₙ} p(t₁, 0, x₁)p(t₂ − t₁, x₁, x₂)...p(tₙ − tₙ₋₁, xₙ₋₁, xₙ) dxₙ...dx₁.   (1.3)
This calculation is a key point for the Wiener measure.
Using a standard argument, if we have the formula for step functions, we can generalize it by approximation. We obtain

E[f(W(t₁), ..., W(tₙ))] = ∫_{Rⁿ} f(x₁, ..., xₙ) p(t₁, 0, x₁)p(t₂ − t₁, x₁, x₂)...p(tₙ − tₙ₋₁, xₙ₋₁, xₙ) dxₙ...dx₁.   (1.4)
Remark 1. We will see that (1.4) can be understood as an 'integral over functions'. Indeed, if we fix T we can see the brownian motion as W(ω) : Ω → C([0, T]), and then f will be a function taking another function as its argument.
Using the definition we conclude

E[W(t)] = 0,  E[W²(t)] = t.

We can calculate the covariance in a similar way. If s < t then

E[W(t)W(s)] = E[(W(s) + W(t) − W(s))W(s)] = s + E[(W(t) − W(s))W(s)] = s.
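These moments can be checked by a quick Monte-Carlo experiment. The following is an illustrative Python sketch (sample sizes and grid times are arbitrary choices, not from the thesis): we sample many paths and estimate E[W(t)W(s)], which should approach min(s, t).

```python
import random

def brownian_path(T, n):
    """Sample W at times i*T/n by summing independent N(0, dt) increments."""
    dt = T / n
    w, path = 0.0, [0.0]
    for _ in range(n):
        w += random.gauss(0.0, dt ** 0.5)
        path.append(w)
    return path

random.seed(1)
T, n, trials = 1.0, 100, 4000
i_s, i_t = 30, 80            # grid indices for s = 0.3 and t = 0.8
acc = 0.0
for _ in range(trials):
    p = brownian_path(T, n)
    acc += p[i_s] * p[i_t]
cov = acc / trials           # should approach min(s, t) = s = 0.3
```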
Figure 1.2: A brownian path in the plane.
Thinking of the applications to differential equations, we are interested in the path properties. We state a theorem of Kolmogorov about the regularity of the paths.

Theorem 1 (Kolmogorov). Let X be a stochastic process with continuous paths a.e. and such that

E[|X(t) − X(s)|^β] ≤ C(t − s)^{1+α}, ∀t, s ≥ 0.

Then for all 0 < γ < α/β and T > 0 there exists K(ω) such that

|X(t) − X(s)| ≤ K|t − s|^γ.
We check that the hypothesis holds in the case of the brownian motion. Let t > s, then

E[|W(t) − W(s)|^{2m}] = (1/√(2π(t − s))) ∫_R |x|^{2m} exp(−|x|²/2(t − s)) dx
 = ((t − s)^m/√(2π)) ∫_R |y|^{2m} exp(−|y|²/2) dy
 = C|t − s|^m,

where we perform the natural change of variables

y = x/√(t − s).   (1.5)
The change of variables is natural in view of the hypothesis (1.1).

The hypothesis is fulfilled with β = 2m and α = m − 1. Thus γ < α/β = 1/2 − 1/(2m) for all m, and we conclude γ < 1/2.

We have proved that the brownian motion has Hölder paths in [0, T] with exponent γ < 1/2. This result is optimal in the sense that no γ ≥ 1/2 can hold.
Indeed, if we had a Hölder estimate with γ = 1/2 then

sup_{0<s<t<T} |W(t) − W(s)|/|t − s|^{1/2} ≤ C(ω) a.e.   (1.6)
An inequality like the previous one is not possible, because if we consider a partition 0 = t₁ < t₂ < ... < tₙ = T, we have

sup_{0<s<t<T} |W(t) − W(s)|/(t − s)^{1/2} ≥ sup_i |W(t_{i+1}) − W(t_i)|/(t_{i+1} − t_i)^{1/2}.

We bound the original expression from below by independent and identically distributed (standard normal) random variables (we fix the times), and then we can calculate explicitly the probability that the supremum is greater than or equal to a parameter L:

P(sup_i |W(t_{i+1}) − W(t_i)|/(t_{i+1} − t_i)^{1/2} ≥ L) = 1 − P(|W(t₂) − W(t₁)|/(t₂ − t₁)^{1/2} < L)ⁿ → 1, if n → ∞.
We want to point out that this stochastic process does not have bounded total variation. If it had bounded total variation, given a partition we would have

Σ_{i=0}^n |W(t_{i+1}) − W(t_i)|² ≤ max_i(|W(t_{i+1}) − W(t_i)|) Σ_{i=0}^n |W(t_{i+1}) − W(t_i)| ≤ V(0, T) max(|W(t) − W(s)|),

and the last expression tends to zero, by the continuity of the paths, when we refine the partition. The contradiction is that the quadratic variation of the brownian motion is greater than zero, and then V(0, T), the total variation, cannot be bounded.
We have proved the following result:

Theorem 2. The brownian motion has Hölder continuous paths with exponent γ < 1/2. This exponent is optimal. In particular, almost every path is of unbounded variation and nowhere differentiable.
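Both facts are easy to observe numerically. In this illustrative Python sketch (the step count is an arbitrary choice) the quadratic variation of a sampled path stays near T, while the total variation, which grows like the square root of the number of steps, becomes large as the partition is refined:

```python
import random

random.seed(2)
T, n = 1.0, 20000
dt = T / n
# increments of a brownian path on a uniform partition of [0, T]
incs = [random.gauss(0.0, dt ** 0.5) for _ in range(n)]
quad_var = sum(dw * dw for dw in incs)    # quadratic variation, close to T
tot_var = sum(abs(dw) for dw in incs)     # grows like sqrt(n) as we refine
```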
A very important property of the stochastic processes we study is the Markov property, which tells us that the process has no memory of its previous history. More precisely:
Definition 2. Let X(t) be a stochastic process. It is a Markov process if the following condition holds, given Fₛ the filtration generated by the process (the history, see appendix B):

P[X(t) ∈ B | Fₛ] = P[X(t) ∈ B | X(s)] a.e., ∀t > s.

That is, supposing we can define the process X started at x, the process starts afresh at every time, without remembering where it was previously; the process X(t + s) has the same distribution as the process started at X(s).
Theorem 3. The brownian motion, W , is a Markov process.
Proof. We see that X(t) = W(t + s) − W(s), t ≥ 0, is a brownian motion (started at the origin) independent of W(t), 0 ≤ t ≤ s, because of the third property in the definition of the brownian motion.² Then W(t + s) is a brownian motion started at W(s).
See [MP],[Du] for more properties. We continue with our problem, the model of a particle suspended in a fluid and hit randomly. Our first model, the brownian motion, has a velocity defined nowhere and does not have bounded total variation in any interval. This is a problem from a physical point of view. We are going to present another model: we study the velocity instead of the position of the particle. Let v(t) be the velocity of the particle. The forces are the friction, which is proportional to the velocity, and a random term coming from the random hits. We will write the random term as dW/dt and we will call it white noise. In some sense, if we are modelling the velocity, this function will be the 'differential' of the brownian motion (see [Ev]).

Using Newton's second law, and writing the friction as −av(t)dt, we have

dv(t) = −av(t)dt + b dW, v(0) = v₀.   (1.7)
This is the Langevin equation. The position satisfies

dx(t) = v(t)dt, x(0) = x₀.

This is the Ornstein-Uhlenbeck equation. Formally, we can solve the Langevin equation as if it were an ODE, and we can write its solution

v(t) = e^{−at}v₀ + b ∫₀ᵗ e^{−a(t−s)} dW(s).   (1.8)
²Given two processes X(t), Y(t), we say they are independent if for all times (tᵢ, sᵢ) the random vector (X(t₁), ..., X(tₙ)) is independent of the random vector (Y(s₁), ..., Y(sₙ)).
We have to define the term ∫₀ᵗ e^{−a(t−s)} dW(s). This is not a Riemann integral; it is a stochastic integral in the Itô sense (see [Ev], [Du], [MP] and appendix B).³
Figure 1.3: Some solution paths of the Langevin equation.
The idea is that the object defined in (1.7) (a stochastic differential equation) does not make sense by itself; this way of writing is only formal. It only makes sense when we write it in integral form (and once we define the Itô integral):

v(t) = v₀ + ∫₀ᵗ −av(s) ds + ∫₀ᵗ b dW(s).   (1.9)
These solutions are stochastic processes: we can understand the SDE as an ODE for each ω. In the following section we study when a problem like (1.7) is well posed and what properties the solution paths have.
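Before studying well-posedness, we can already approximate (1.7) numerically. The following Python sketch uses an Euler-Maruyama discretization (appendix C contains the thesis's Matlab version of the Euler method; the parameter values a = b = 1, v₀ = 10, T = 2 are only illustrative) and compares the sample mean of v(T) with the mean e^{−aT}v₀ read off from formula (1.8):

```python
import random, math

def langevin_velocity(v0, a, b, T, n):
    """Euler-Maruyama scheme for dv = -a*v dt + b dW, v(0) = v0."""
    dt = T / n
    v = v0
    for _ in range(n):
        v += -a * v * dt + b * random.gauss(0.0, dt ** 0.5)
    return v

random.seed(3)
v0, a, b, T = 10.0, 1.0, 1.0, 2.0
trials = 3000
mean_v = sum(langevin_velocity(v0, a, b, T, 400) for _ in range(trials)) / trials
expected = math.exp(-a * T) * v0    # the mean predicted by formula (1.8)
```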
1.2 Existence and uniqueness theorem for SDE
Let X⃗₀ be a random variable, and W⃗ a brownian motion independent of X⃗₀. The σ-algebra we consider is the one generated by the initial random variable and the brownian motion, i.e.⁴

F(t) = Σ{X⃗₀, W⃗(s), 0 ≤ s ≤ t}.

³There is another important way to define this kind of integral, the Stratonovich integral (see appendix B).
⁴We write Σ{A} for the σ-algebra generated by A. We save σ for the diffusion matrix.
Figure 1.4: A solution path of the 2D Langevin equation.
We consider two functions

b⃗ : Rᵈ × [0, T] → Rᵈ,
σ : Rᵈ × [0, T] → M_{d×m},

where M_{d×m} is the space of d × m matrices.
The brownian paths are not smooth (in time), so we cannot expect the paths of the solution of an SDE to be smooth (in time).
Definition 3. A function f(s, ω) is progressively measurable if it is measurable in the set [0, T] × Ω with respect to B × F, the minimum σ-algebra in [0, T] × Ω which contains the sets A × B with A in [0, T] and B in Ω.
Definition 4. For processes f(s, ω) we define the following spaces:

L¹([0, T]) = {f(s, ω) : E ∫₀ᵀ |f(s)| ds < ∞}.

For a general p we consider

Lᵖ([0, T]) = {f(s, ω) : E ∫₀ᵀ |f(s)|ᵖ ds < ∞}.
Definition 5. A stochastic process X⃗(t) is a solution of the SDE

dX⃗ = b⃗(X⃗, t)dt + σ(X⃗, t)dW⃗, X⃗(0) = X⃗₀   (1.10)

if the following conditions hold:

1. X⃗(t) is progressively measurable.

2. b⃗(X⃗(t), t) ∈ L¹[0, T].

3. σ(X⃗(t), t) ∈ L²[0, T].

4. X⃗(t) = X⃗₀ + ∫₀ᵗ b⃗(X⃗(s), s) ds + ∫₀ᵗ σ(X⃗(s), s) dW⃗ a.e. ∀ 0 ≤ t ≤ T.   (1.11)
Considering first order equations is not a restriction, because an n-th order equation can be written as n first order equations.

To prove existence and uniqueness we use the method of successive approximations, exactly as in the ODE case. The existence and uniqueness theorem is:
Theorem 4 (Existence and uniqueness). Let b⃗ and σ be Lipschitz functions in the spatial variable, uniformly for all times in the interval [0, T], i.e.

|b⃗(x, t) − b⃗(x′, t)| ≤ L₁|x − x′|, ∀ 0 ≤ t ≤ T,
|σ(x, t) − σ(x′, t)| ≤ L₂|x − x′|, ∀ 0 ≤ t ≤ T.

Let X⃗₀ be a random variable in L², independent of the brownian motion considered. Then there is a unique process in L²[0, T] which is a solution of (1.10).⁵
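The method of successive approximations behind this theorem can be observed numerically: fixing one discretized brownian path, we iterate the integral equation (1.11) and watch the sup distance between consecutive iterates decay, as the factorial bound in the proof predicts. This Python sketch is only illustrative (the coefficients b(x) = −0.5x, σ(x) = 0.1x and the grid are arbitrary choices, not from the thesis):

```python
import random

def picard_iterates(x0, b, sigma, dws, dt, n_iter):
    """Successive approximations for the discretized integral equation:
    X^{n+1}(t_k) = x0 + sum_j b(X^n(t_j)) dt + sum_j sigma(X^n(t_j)) dW_j."""
    n = len(dws)
    X = [x0] * (n + 1)              # X^0 is the constant path
    history = [X]
    for _ in range(n_iter):
        new, acc = [x0], x0
        for j in range(n):
            acc += b(X[j]) * dt + sigma(X[j]) * dws[j]
            new.append(acc)
        history.append(new)
        X = new
    return history

random.seed(4)
n, T = 200, 1.0
dt = T / n
dws = [random.gauss(0.0, dt ** 0.5) for _ in range(n)]
hist = picard_iterates(1.0, lambda x: -0.5 * x, lambda x: 0.1 * x, dws, dt, 10)
# sup distance between consecutive iterates: it shrinks rapidly
d = [max(abs(a, ) if False else abs(a - c) for a, c in zip(hist[k], hist[k + 1]))
     for k in range(10)]
```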
We need some further results before the proof of the main theorem. We give only the statements of the results, without proofs (see [O] and [Ev]).

Lemma 1 (Gronwall inequality). Let φ be a nonnegative function supported in 0 ≤ t ≤ T, and C₀, A two constants. If we have

φ(t) ≤ C₀ + ∫₀ᵗ Aφ(s) ds ∀ 0 ≤ t ≤ T,

then

φ(t) ≤ C₀ exp(At).
Theorem 5 (Martingale inequality). Let X be a martingale and 1 < p < ∞. Then the following inequality holds:

E[max_{0≤s≤t} |X(s)|ᵖ] ≤ (p/(p − 1))ᵖ E(|X(t)|ᵖ).

⁵In the sense that if there are two solutions they are equal a.e.
Lemma 2 (Chebyshev inequality). Let X be a random variable. Then for all λ > 0 and p ∈ [1, ∞) the following inequality holds:

P(|X| ≥ λ) ≤ E(|X|ᵖ)/λᵖ.
Lemma 3 (Borel-Cantelli). We write Aₙ i.o. for the set of elements which appear in Aₙ infinitely often. If

Σ_{n=1}^∞ P(Aₙ) < ∞

then

P(Aₙ i.o.) = 0.
We now give the proof of the existence and uniqueness theorem.

Proof of theorem 4. (Uniqueness) Let X⃗ and X⃗′ be two solutions. Then, subtracting them,

X⃗(t) − X⃗′(t) = ∫₀ᵗ b⃗(X⃗(s), s) − b⃗(X⃗′(s), s) ds + ∫₀ᵗ σ(X⃗(s), s) − σ(X⃗′(s), s) dW⃗.

Thus

E(|X⃗(t) − X⃗′(t)|²) ≤ 2E(|∫₀ᵗ b⃗(X⃗(s), s) − b⃗(X⃗′(s), s) ds|²) + 2E(|∫₀ᵗ σ(X⃗(s), s) − σ(X⃗′(s), s) dW⃗|²).
We use Cauchy-Schwarz and the Lipschitz conditions to bound each term:

E(|∫₀ᵗ b⃗(X⃗(s), s) − b⃗(X⃗′(s), s) ds|²) ≤ T E ∫₀ᵗ |b⃗(X⃗(s), s) − b⃗(X⃗′(s), s)|² ds ≤ L²T ∫₀ᵗ E(|X⃗(s) − X⃗′(s)|²) ds.
To bound the second quantity we use the Itô integral properties ([Ev],[Du]):

E(|∫₀ᵗ σ(X⃗(s), s) − σ(X⃗′(s), s) dW⃗|²) = E ∫₀ᵗ |σ(X⃗(s), s) − σ(X⃗′(s), s)|² ds ≤ L² ∫₀ᵗ E(|X⃗(s) − X⃗′(s)|²) ds.
Considering the two previous inequalities,

E(|X⃗(t) − X⃗′(t)|²) ≤ C ∫₀ᵗ E(|X⃗(s) − X⃗′(s)|²) ds.

We use the Gronwall inequality with

φ(t) = E(|X⃗(t) − X⃗′(t)|²), C₀ = 0,

and we conclude that X⃗ and X⃗′ are the same a.e. for all time.
(Existence) We consider the approximations

X⃗ⁿ⁺¹(t) = X⃗₀ + ∫₀ᵗ b⃗(X⃗ⁿ(s), s) ds + ∫₀ᵗ σ(X⃗ⁿ(s), s) dW⃗.

We use the following result (its proof, based on mathematical induction, can be seen in [Ev]): define the 'distance'

dₙ(t) = E(|X⃗ⁿ⁺¹(t) − X⃗ⁿ(t)|²).

Then

dₙ(t) ≤ (Mt)ⁿ⁺¹/(n + 1)! ∀ n = 1, ..., 0 ≤ t ≤ T,

for some constant M = M(L, T, X⃗₀).

We have, by the previous calculations, that

max_{0≤t≤T} |X⃗ⁿ⁺¹(t) − X⃗ⁿ(t)|² ≤ 2L²T ∫₀ᵀ |X⃗ⁿ(t) − X⃗ⁿ⁻¹(t)|² dt + 2 max_{0≤t≤T} |∫₀ᵗ σ(X⃗ⁿ(s), s) − σ(X⃗ⁿ⁻¹(s), s) dW⃗|².
We use theorem 5 and the previous result and conclude

E[max_{0≤t≤T} |X⃗ⁿ⁺¹(t) − X⃗ⁿ(t)|²] ≤ 2L²T ∫₀ᵀ |X⃗ⁿ(t) − X⃗ⁿ⁻¹(t)|² dt + 8L² ∫₀ᵀ |X⃗ⁿ(t) − X⃗ⁿ⁻¹(t)|² dt ≤ C (MT)ⁿ/n!.

Using the Chebyshev inequality and the Borel-Cantelli lemma we conclude

P(max_{0≤t≤T} |X⃗ⁿ⁺¹(t) − X⃗ⁿ(t)| > 1/2ⁿ i.o.) = 0.

Then for almost all ω, X⃗ⁿ converges uniformly in [0, T] to a process X⃗. Passing to the limit in the definition of X⃗ⁿ⁺¹ and in the integrals, we conclude that the limit process is the solution of (1.11). See [Ev] and [O] for the proof that the solution belongs to L²; this proof is based on the definition of Xⁿ⁺¹(t) and the exponential series.
A stochastic differential equation is a generalization of an ordinary equation, so we expect similarities in the proof. However, as the brownian motion is nowhere differentiable, we cannot expect a solution of an SDE to be differentiable. The brownian smoothness is the maximum smoothness expected for a solution, i.e. Hölder-α with 0 < α < 1/2 in time. If the hypotheses of the theorem hold we have Hölder-β with 0 < β < 1 in space.
To introduce the idea of a stochastic flow, which is the random version of the deterministic ODE flow, we use a new parameter s, the initial time, and we write X⃗ₛᵗ(x) for the solution of

dX⃗(t) = b⃗(X⃗(t), t)dt + σ(X⃗(t), t)dW⃗, X⃗(s) = x.   (1.12)

We have the flow property

X⃗ᵤᵗ(X⃗ₛᵘ(x)) = X⃗ₛᵗ(x) a.e. ∀ 0 ≤ s ≤ u ≤ t ≤ T, ∀x ∈ Rᵈ.   (1.13)
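In a discretization the flow property is easy to observe: solving from s to u and then from u to t, driven by the same brownian increments, gives exactly the same result as solving from s to t directly. A Python sketch (Euler scheme; the coefficients and grid are illustrative choices, not from the thesis):

```python
import random

def euler_flow(x, i0, i1, dws, dt, b, sigma):
    """Euler scheme for (1.12) between grid indices i0 and i1, started at x."""
    y = x
    for j in range(i0, i1):
        y += b(y) * dt + sigma(y) * dws[j]
    return y

random.seed(5)
n, T = 100, 1.0
dt = T / n
dws = [random.gauss(0.0, dt ** 0.5) for _ in range(n)]
b = lambda x: -x             # illustrative drift
sigma = lambda x: 0.3        # illustrative (constant) diffusion
x0 = 2.0
direct = euler_flow(x0, 0, n, dws, dt, b, sigma)
composed = euler_flow(euler_flow(x0, 0, 40, dws, dt, b, sigma),
                      40, n, dws, dt, b, sigma)
# with the same noise the discrete flows compose: X_u^t(X_s^u(x)) = X_s^t(x)
```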
The proof of this claim can be seen in the course notes of M. Gubinelli on his homepage, or in [Ku].
We need an inequality in order to use the Kolmogorov theorem to study the regularity in the parameters. In [BF] we can see

E[|X⃗ₛᵗ(x) − X⃗_{s′}^{t′}(x′)|ᵖ] ≤ C[|x − x′|ᵖ + |s − s′|^{p/2} + |t − t′|^{p/2}].   (1.14)

If we consider x = x′ and we want to know the Hölder exponent in time, we use the Kolmogorov theorem (theorem 1) in the same fashion as in the previous section. We conclude that, if we see the solution as a function of s (or of t), the Hölder exponent is γ < 1/2. The space case is similar: we consider s = s′, t = t′, use the Kolmogorov theorem, and conclude that the exponent is γ < 1. Given the inequality (1.14), we prove the following statement.
Theorem 6 (Regularity). Consider a stochastic differential equation with coefficients satisfying the hypotheses of theorem 4, and let X⃗ₛᵗ(x) be its solution. Then the following statements hold:

1. s ↦ X⃗ₛᵗ(x) is Hölder-γ with γ < 1/2.

2. t ↦ X⃗ₛᵗ(x) is Hölder-γ with γ < 1/2.

3. x ↦ X⃗ₛᵗ(x) is Hölder-γ with γ < 1.
If the functions b⃗ and σ are more regular in space then we have more regularity in space. Indeed, we have the following result:

Theorem 7. Let the coefficients of an SDE be C^{k,α} functions in x. Then the solution X₀ᵗ(x) is C^{k,β} in x with β < α.

We gain nothing in time because the brownian motion is an obstacle. See [Ku] for the proof of this theorem.
Theorem 8. Consider a stochastic differential equation with coefficients satisfying the hypotheses of theorem 4. Then there exists a constant c such that two solutions with different initial values satisfy the following inequality:

E[|X⃗₁(t) − X⃗₂(t)|²] ≤ |x₁ − x₂|² e^{ct}.
Proof. We apply the Itô formula (see appendix A) to the norm function

ρ²(X⃗₁(t), X⃗₂(t)) = Σ_{i=1}^d (X₁ⁱ(t) − X₂ⁱ(t))².

Then we apply the Gronwall inequality.
There are two types of stochastic equations, based on different stochastic integrals: the Itô equations, based on the Itô integral (these are the ones we use), and the Stratonovich equations, based on the Stratonovich integral. See appendix B for more details.

Stochastic equations can be understood as generalizations of the Langevin equation for a particle suspended in a fluid and randomly hit. They are diffusions⁶.

Thinking of the solutions as diffusions, we can expect them to be Markov processes. See [O] for the proof.

We will see that a Markov process gives us a semigroup of operators. But first we need the Wiener measure.
1.3 The Wiener measure
In this section we study the Wiener measure, which is needed to define the Markov semigroups in the next section.
The Wiener measure is induced by the brownian motion if we understand the brownian motion not as a function W⃗(ω, t), but as

W⃗ : Ω → C([0, T], Rᵈ).

Then we think of the brownian motion as a random variable with values in a functional space. Indeed, let x be a point in Rᵈ; we consider the following spaces:

Cₓ([0, T], Rᵈ) = {f ∈ C([0, T], Rᵈ), f(0) = x},   (1.15)
Cₓʸ([0, T], Rᵈ) = {f ∈ C([0, T], Rᵈ), f(0) = x, f(T) = y}.   (1.16)

⁶The function b⃗ is the drift, σ is the diffusion term, and the solutions of (1.10) are Itô diffusions.
We can define a measure in the path space (1.15) if we consider a brownian motion started at x; equivalently, we can consider the process V⃗(t) = x + W⃗(t). For the second space, (1.16), the measure is called pinned because we impose the final point.

To define both in a rigorous fashion we proceed in the same way.⁷ To simplify, we consider the one dimensional case (d = 1) and x = 0.⁸ We consider the following sets (we call them cylinders).⁹ Given times t₁, ..., tₙ and Borel sets B₁, ..., Bₙ in R we define the sets

Π^{B₁,...,Bₙ}_{t₁,...,tₙ} = {f ∈ C₀([0, T], R), f(tᵢ) ∈ Bᵢ}.   (1.17)

We have to assign them a probability in a consistent fashion, and it is now that the previous calculation (1.3) becomes useful, because we assign them the probability using it.
Figure 1.5: The cylinders.

⁷An integration (in y) is the difference between them (see chapter 3).
⁸We write W₀ = W.
⁹There are several ways to define the Wiener measure. We consider the cylinders and the Kolmogorov extension theorem. See [GJ] for another proof.
W(Π^{B₁,...,Bₙ}_{t₁,...,tₙ}) = ∫_{B₁} ... ∫_{Bₙ} (1/√(2πt₁)) e^{−|x₁|²/2t₁} ⋯ (1/√(2π(tₙ − tₙ₋₁))) e^{−|xₙ − xₙ₋₁|²/2(tₙ − tₙ₋₁)} dxₙ ... dx₁.   (1.18)
We see that if for some time our Borel set is the whole space then this time does not count, i.e. if at tᵢ we have Bᵢ = R then

W(Π^{B₁,...,Bₙ}_{t₁,...,tₙ}) = W(Π^{B₁,...,B_{i−1},B_{i+1},...,Bₙ}_{t₁,...,t_{i−1},t_{i+1},...,tₙ}).
This is a consequence of the Chapman-Kolmogorov equations. If

p(t, x, y) = (1/√(2πt)) exp(−(x − y)²/2t),

then the Chapman-Kolmogorov equations can be written as

p(s + t, x, y) = ∫_R p(s, x, z)p(t, z, y) dz,   (1.19)

i.e., the probability of going from x to y in time s + t is equal to the probability of going from x to z in time s and then from z to y in time t, considering all possible z.
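The identity (1.19) can be verified numerically by approximating the integral with a quadrature rule. A Python sketch (the truncation range, grid size and evaluation points are arbitrary choices):

```python
import math

def p(t, x, y):
    """Transition density p(t, x, y) of the brownian motion, as in (1.19)."""
    return math.exp(-(x - y) ** 2 / (2 * t)) / math.sqrt(2 * math.pi * t)

def chapman_rhs(s, t, x, y, lo=-12.0, hi=12.0, nz=4000):
    """Midpoint-rule approximation of the integral in (1.19)."""
    dz = (hi - lo) / nz
    return sum(p(s, x, lo + (k + 0.5) * dz) * p(t, lo + (k + 0.5) * dz, y)
               for k in range(nz)) * dz

s, t, x, y = 0.5, 0.7, 0.3, -0.4
lhs = p(s + t, x, y)        # density of going from x to y in time s + t
rhs = chapman_rhs(s, t, x, y)
```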
In order to use the Kolmogorov extension theorem we have to check that equal sets have the same measure. We have to see that if

Π^{B₁,...,Bₙ}_{t₁,...,tₙ} = Π^{A₁,...,Aₘ}_{s₁,...,sₘ}

then

W(Π^{B₁,...,Bₙ}_{t₁,...,tₙ}) = W(Π^{A₁,...,Aₘ}_{s₁,...,sₘ}).

This problem can be reduced to the case where one set of times and Borel sets is contained in the other. Indeed, if the sets are the same we can consider the intersection of both and one of them, i.e. we have the following case:

Π^{B₁,...,Bₙ}_{t₁,...,tₙ} = Π^{B₁,...,Bₙ,A₁,...,Aₘ}_{t₁,...,tₙ,s₁,...,sₘ}.

And now we apply the property above. If the sets are equal, at the new times sⱼ on the right-hand side of the previous equation the Borel sets Aⱼ must be the whole space. If this were not the case, there would exist a continuous function which at each of the previous and following times tᵢ takes values in the correct Borel set and at time sⱼ takes values in Aⱼᶜ, a contradiction with the sets being equal. When the Borel sets are the whole space we can conclude that the measures of both sets are equal using the Chapman-Kolmogorov equations.
We can apply the Kolmogorov extension theorem because the consistency conditions hold. Moreover, the measure is countably additive on the cylinders. Indeed, we can write

W(∪_{i=1}^∞ C_i) = W(∪_{i=1}^N C_i) + W(∪_{i=N+1}^∞ C_i)
CHAPTER 1. FIRST RESULTS
and observe that additivity holds for a finite number of sets. We conclude by noting that the second term tends to zero as N grows, since the sets converge to the empty set. Recall that we only have to take care of distinct sets at the same time; if the times are different, the sets are not disjoint. Thus for a finite number of sets the additivity follows from the properties of the integral.
So we have a measure on the σ-algebra generated by the cylinders. However, it is not yet clear which σ-algebra this is.
Consider the set

A = {φ : f(s) < φ(s) < g(s)}

for certain given functions f, g. Let s_i be the rational numbers in the interval [0, T]. Then we can write the set as

A = ∪_{n=1}^∞ ∩_{s_i} {f(s_i) + 1/n < φ(s_i) < g(s_i) - 1/n}.

So we have a countable union of intersections of cylinders. Sets like the previous one are in the σ-algebra, and hence so are the Borel sets (with respect to the uniform norm). Indeed, given ψ it suffices to take f = ψ - ε and g = ψ + ε. Moreover, it can be shown that both σ-algebras coincide.
We write

W(ω, t) = ω(t).

Then this measure gives us an expectation

E_0[f] = ∫_{C_0([0,T],ℝ)} f(ω) dW.   (1.20)

We consider a Brownian motion started at the origin, hence the index zero. For Brownian motions started at x we write

E_x[f] = ∫_{C_x([0,T],ℝ)} f(ω) dW_x.   (1.21)

This measure is supported on the continuous functions which at time zero are at the point x.
If we fix some times, we can replace the functional integral by ordinary integrals in ℝ^d.¹⁰ This is a consequence of the definition of the cylinder measure.
E[f_1(W(t_1)) ⋯ f_n(W(t_n))] = ∫_{ℝ^n} ∏_{j=1}^n f_j(x_j) p(t_j - t_{j-1}, x_j, x_{j-1}) dx_1 dx_2 ⋯ dx_n
With a single time this is a well-known formula: it is the heat semigroup. Thus if

H_0 = -½∆

then

e^{-tH_0} f(x) = ∫_ℝ p(t, x, y) f(y) dy = E_x[f(W(t))] = E_0[f(x + W(t))]   (1.22)

¹⁰ The functional integral is also called a path integral.
This is the first representation formula we obtain.
We can also define the measure through these operators:

E_0[f_1(W(t_1)) ⋯ f_n(W(t_n))] = [e^{-t_1 H_0} f_1 e^{-(t_2-t_1)H_0} f_2 ⋯ e^{-(t_n-t_{n-1})H_0} f_n](0)
We defined the Wiener measure as the measure induced by the Brownian motion, but we can do the same for other processes. For example, the measure in (1.16) is not induced by the Brownian motion but by the Brownian bridge X, defined by the formula

X(t) = W(t) + (t/T)(y - W(T))

We can also define the Brownian bridge as the solution of the SDE

dX(t) = ((y - X(t))/(T - t)) dt + dW

with the same initial point as the brownian motion.
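The bridge SDE can be checked by simulation. A small Euler-Maruyama sketch (the step count, the endpoint y = 2 and the number of sample paths are arbitrary choices; since the drift blows up at t = T, we stop one step short):

```python
import math, random

def bridge_endpoint(x0, y, T=1.0, n_steps=1000, seed=0):
    """Euler-Maruyama for dX = (y - X)/(T - t) dt + dW; returns X just before time T."""
    rng = random.Random(seed)
    dt = T / n_steps
    x, t = x0, 0.0
    for _ in range(n_steps - 1):          # stop one step before T
        x += (y - x) / (T - t) * dt + rng.gauss(0.0, 1.0) * math.sqrt(dt)
        t += dt
    return x

# Endpoints should concentrate around y, whatever the Brownian noise did on the way.
ends = [bridge_endpoint(0.0, 2.0, seed=k) for k in range(300)]
mean_end = sum(ends) / len(ends)
print(mean_end)  # close to 2.0
```

The drift term forces every sampled path to land at the prescribed endpoint, which is exactly the conditioning expressed by the bridge measure.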
Thus, with certain operators, we can define the measure (with total mass p(T_2 - T_1, x, y)) induced by the Brownian bridge which at T_1 is at x and at T_2 is at y:

∫_{C_{xy}([T_1,T_2],ℝ)} f_1(X(t_1)) f_2(X(t_2)) ⋯ f_n(X(t_n)) dW^{x,y}_{[T_1,T_2]} =

= [e^{-(t_1-T_1)H_0} f_1 e^{-(t_2-t_1)H_0} f_2 ⋯ e^{-(t_n-t_{n-1})H_0} f_n e^{-(T_2-t_n)H_0}(·, y)](x)   (1.23)
The diffusions give us measures on the space of continuous functions with respect to the σ-algebra defined by the cylinders¹¹. See [F] for more details.

1.4 Markov process and semigroups of operators

Let U ⊂ ℝ^d be a domain. Given a Markov process X we define the family of operators

T_t f(x) = E_x[f(X_0^t(x))] = ∫_{ℝ^d} f(y) P(t, x, dy)   (1.24)

¹¹ Actually every stochastic process with almost everywhere continuous paths gives us a measure with respect to the cylinder σ-algebra. We can generalize this idea to well-posed (existence, uniqueness, continuous dependence) ODE problems. In that case the measure is singular: it is supported on the solution of the ODE.
Figure 1.6: Paths of a brownian bridge.
where P(t, x, Γ) is the transition function; it gives the probability that our process X, started at x, is in Γ at time t, i.e.

P(X_0^t(x) ∈ Γ) = ∫_Γ p(t, x, y) dy

and p(t, x, y) is the transition density.
If f ≥ 0 then Tt f ≥ 0. Moreover if f ∈ L∞ we have Tt f ∈ L∞ and
||Tt f ||∞ ≤ ||f ||∞ .
Because of the Markov property and the properties of the conditional expectation, T_t is a semigroup¹² (see [Ev]):

E[f(X_0^{t+s}(x)) | Σ(X(z), 0 ≤ z ≤ s)] = E[f(X_s^t(X_0^s(x)))] = T_t f(X_0^s(x)).

Taking expectations and applying the properties of the conditional expectation we conclude

T_{s+t} f(x) = T_s T_t f(x).
So T_t is a contraction semigroup in L^∞. We define the domain as

D(T_t) = {f ∈ L^∞ : ||T_t f - f||_∞ → 0 as t → 0}.

¹² There are many formulations of the Markov property. Before we used a probability-based definition, but now we use an expectation-based one (see [Du], [F]). We say X is a Markov process if

E[X(t + h) | Σ{X(s), 0 ≤ s ≤ t}] = E[X(t + h) | X(t)].
This set is a vector space (by the linearity of the expectation and the triangle inequality). Moreover, it is closed. Indeed, observe that if

f_n ∈ D(T_t), f_n → f, T_t f_n → T_t f

then

||T_t f - f||_∞ ≤ ||T_t f - T_t f_n||_∞ + ||T_t f_n - f_n||_∞ + ||f_n - f||_∞ → 0.

So f ∈ D(T_t).
We also have that

t ↦ T_t f

is a continuous map. Indeed,

||T_{t+h} f - T_t f||_∞ ≤ ||T_t (T_h f - f)||_∞ ≤ ||T_h f - f||_∞ → 0.
We have shown the following result.
Theorem 9 (Semigroup). Given a Markov process, we have that
1. T_t is a contraction semigroup in L^∞, with

D(T_t) = {f ∈ L^∞ : ||T_t f - f|| → 0 as t → 0}

as domain. Moreover, the domain is a closed vector space.
2. s ↦ T_s is a continuous map.
We give some examples in one spatial dimension.
Example 1:
We consider the ODE, written in the previous notation as

dX_t(x) = b dt.

Then there is no randomness, so we can drop the expectations. In this case the operator is

T_t f(x) = E_x[f(X_0^t(x))] = f(x + bt).

We see that it is a solution of the transport equation

u_t = b u_x,   u(0, x) = f(x).

In this case the semigroup's generator is

A = b ∂/∂x.
Example 2:
We now consider the stochastic equation

dX = dW,   X(0) = x

with solution

X_0^t(x) = x + W(t).

We saw that

T_t f(x) = E_x[f(x + W(t))]

solves the heat equation with diffusion coefficient 1/2,

u_t = ½ u_xx,   u(x, 0) = f(x).

In this case the semigroup's generator is

A = ½ ∂²/∂x².
Example 3:
We can consider the stochastic equation

dX = b dt + dW,   X(0) = x

and then the semigroup is associated to

u_t = b u_x + ½ u_xx.

The semigroup's generator is

A = b ∂/∂x + ½ ∂²/∂x².
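For this last example the semigroup can be sampled exactly, since X_0^t(x) = x + bt + W(t). A short Monte Carlo sketch (the test function f(x) = x², the point x and the sample size are arbitrary choices), checked against the exact value E[(x + bt + W_t)²] = (x + bt)² + t:

```python
import math, random

def Tt_f(f, x, t, b, n_paths=200_000, seed=1):
    """Monte Carlo estimate of T_t f(x) = E_x[f(X_t)] for dX = b dt + dW,
    using the exact solution X_t = x + b*t + W_t with W_t ~ N(0, t)."""
    rng = random.Random(seed)
    sqrt_t = math.sqrt(t)
    total = 0.0
    for _ in range(n_paths):
        total += f(x + b * t + rng.gauss(0.0, 1.0) * sqrt_t)
    return total / n_paths

x, t, b = 1.0, 0.5, 0.5
estimate = Tt_f(lambda y: y * y, x, t, b)
exact = (x + b * t) ** 2 + t    # second moment of N(x + b*t, t)
print(estimate, exact)
```

The estimate is the value at (t, x) of the solution of u_t = b u_x + ½ u_xx with datum f, up to Monte Carlo error of order n_paths^{-1/2}.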
In general we see that the semigroup's generator,

A = lim_{h→0} (T_h f(x) - f(x))/h,

is the elliptic operator that appears in (A.3)¹³

Au = ½ Σ_{i,j=1}^d a_{i,j}(x) ∂²u/∂x_i∂x_j + Σ_{i=1}^d b_i(x) ∂u/∂x_i.

¹³ If we consider smooth enough functions (see [F] or [Dy]).
To show this we take expectations in the Itô formula applied to f(x) ∈ C² with bounded derivatives. So we have the equation¹⁴

(d/dt) T_t f(x) = A T_t f(x),   T_0 f(x) = f(x).

For more details see [Du], [App].
Remark 2 The operator A has classical sense for functions f ∈ C_b² = C² ∩ L^∞. Moreover C_b² ⊆ D(A) ⊂ D(T_t) ⊂ L^∞. The domains depend on d; for example, if d = 1 then D(A) = C_b², but for d > 1 it is bigger.
We recall that we have continuity in t, but for certain functions, f ∈ C_b², we also have differentiability in t. We also want differentiability in x, and this smoothness comes from the smoothness of the coefficients of the SDE considered.
Definition 6. A semigroup is a Feller semigroup if f ∈ C(U ) ∩ L∞ (U ) =
Cb (U ) implies Tt f (x) ∈ Cb (U ).
We have the following result:
Theorem 10. If the coefficients of the SDE are as in the theorem 4 and
bounded, then the semigroup is a Feller semigroup.
Proof. Let x, y be two initial values of the SDE with coefficients as in theorem 4. Then

|T_t f(x) - T_t f(y)| ≤ E[|f(X_0^t(x)) - f(X_0^t(y))|] ≤ ε + 2||f||_∞ P(|X_0^t(x) - X_0^t(y)| > δ).

We can write this expectation as the sum of two terms, according to whether the arguments X_0^t(x), X_0^t(y) are close or not. If they are close, the continuity of f gives us that the distance between the two values |f(X_0^t(x)) - f(X_0^t(y))| is less than ε. The second term is the probability that the arguments are not close enough; in that case we can bound

|f(X_0^t(x)) - f(X_0^t(y))| ≤ 2||f||_∞.

We can apply the Chebyshev inequality with p = 2 and theorem 8 to bound the last term with powers of |x - y| and a time-dependent constant. This implies the continuity. Indeed,

P(|X_0^t(x) - X_0^t(y)| > δ) ≤ (1/δ²) E[|X_0^t(x) - X_0^t(y)|²] ≤ (e^{ct}/δ²) |x - y|².

¹⁴ For all f ∈ D(A) = C_b² ⊂ C² ∩ L^∞ ⊂ D(T_t) = C_b.
So T_t f(x) is continuous in x, and then we have some regularity for the solution of the equation

(d/dt) T_t f(x) = A T_t f(x),   T_0 f(x) = f(x)

with f ∈ C_b and the coefficients of the SDE bounded and Lipschitz.
If we suppose that the coefficients of the SDE are C^{2,α} and the function is f ∈ C_b², then T_t f(x) ∈ C²(ℝ^d). This is theorem 7.
This result can be improved. The Bismut-Elworthy-Li formula (see [Du], [Ku]) controls the differential of T_t f(x) without using derivatives of f. More precisely,

Theorem 11 (Bismut-Elworthy-Li formula). Let f ∈ C_b, let X_0^t(x) be a diffusion with C_b^{2,α} coefficients, and let v, w be two directions in ℝ^d. The derivative of T_t f(x) in the v direction satisfies

∇_v T_t f(x) ≤ (C|v|/√t) ||f||_∞.

For the second derivatives the formula is

∇_{v,w} T_t f(x) ≤ (C|v||w|/t) ||f||_∞.

So we only need f ∈ C_b and coefficients of the SDE in C_b^{2,α} for the classical meaning of the generator.¹⁵
We recall that, in the Itô diffusion case, p is the fundamental solution of the parabolic problem, as we expected from the heat equation.
Remark 3 We need a Markov process to have a semigroup, but not necessarily an Itô diffusion. We can have Lévy processes with jumps. These give us non-local equations and fractional operators.

¹⁵ We write C_b² for the space of functions with 2 bounded derivatives. We write C_b for the space C(U) ∩ L^∞(U).
Chapter 2
Representation formulas for
elliptic equations
We start with a definition, which we need to study the bounded domain case. We impose Dirichlet boundary conditions¹.
Definition 7. A stopping time with respect to a filtration F(t) is a random variable

τ : Ω → [0, ∞]

which satisfies

{ω : τ(ω) ≤ t} ∈ F(t), ∀t ≥ 0.
We have the following results:
Proposition 1. Let τ_1 and τ_2 be stopping times with respect to the same filtration. Then
1. {ω : τ_1 < t} and {ω : τ_1 = t} are in the filtration for all times t.
2. min(τ_1, τ_2) and max(τ_1, τ_2) are stopping times.
See [Ev] for the proof.
We are interested in the first hitting time of a given set.
Proposition 2. Let X(t) be the solution of (1.11) under the conditions of theorem 4. Let E be a given closed or open non-empty set in ℝ^d. Then

τ = inf{t ≥ 0 | X(t) ∈ E}

is a stopping time.
¹ Neumann boundary conditions are not studied in this text, but the idea is to define a diffusion reflected at the boundary (see [R]).
The relationship between the Itô integral and these random times is clear: stopping times will appear as integration limits. See [Ev] for the proofs of the following results.
Proposition 3. If G ∈ L²[0, T] and 0 ≤ τ(ω) ≤ T is a stopping time then the integral

∫_0^{τ(ω)} G dW = ∫_0^T G 1_{t≤τ(ω)} dW
fulfills the following properties:
1. E[∫_0^{τ(ω)} G dW] = 0
2. E[(∫_0^{τ(ω)} G dW)²] = E[∫_0^{τ(ω)} G² dt]
This result is a consequence of the corresponding result with deterministic times (see appendix B).
We also have an Itô formula with stopping times (see appendix A). See [Ev] for the proof.
These stopping times τ = τ_x are random variables with sample points in the space of continuous functions. This is because the diffusion takes values in the continuous functions, and each continuous function (random, because we do not know which function we will obtain) has its own value τ_x.
2.1 The laplacian
There are two approaches to the representation formulas. We can start knowing that a certain PDE has a classical solution and conclude that this solution is an expectation. Alternatively, we can start with a function, defined as an expectation, and show that this function is smooth enough to be a classical solution of an associated PDE. For the moment we consider only the first approach: we have a classical solution of a PDE and we obtain a probabilistic representation formula.
We saw that the generator A associated to the Brownian motion is

-H_0 = ½∆.

So we consider the family of equations in a smooth domain² U ⊂ ℝ^d given by

-½∆u(x) = f(x) if x ∈ U,   u(x) = g(x) if x ∈ ∂U   (2.1)

² The smoothness condition on the boundary of U can be relaxed. See [Du].
We consider the case f = 0 and g a continuous function. Then we are dealing with the harmonic functions.
Our problem is

-½∆u(x) = 0 if x ∈ U,   u(x) = g(x) if x ∈ ∂U   (2.2)
We apply the Itô formula, with the stopping time

τ_x(ω) = inf(t | x + W(t) ∈ ∂U)

as integration limit, to the stochastic process (where u is the solution of (2.2))

u(X_0^t(x)) = u(x + W(t)).

Recall that A = ½∆ is the generator. So,

u(X_0^{τ_x}(x)) - u(x) = ∫_0^{τ_x} Au ds + ∫_0^{τ_x} ∇u · σ dW.
If we take expectations, the stochastic integral term disappears (see appendix A):

E_x[u(X(τ_x))] - E_x[u(X(0))] = E_x[∫_0^{τ_x} Au ds].

But

E_x[u(X(τ_x))] = E_x[g(X(τ_x))],   E_x[u(X(0))] = u(x),   E_x[∫_0^{τ_x} Au ds] = 0
and we obtain the following result.
Theorem 12 (Kakutani). Let U ⊂ ℝ^d be a domain and g a continuous function defined on the boundary of U. Then the function u solving (2.2) satisfies

u(x) = E_x[g(X(τ_x))].   (2.3)
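Kakutani's formula can be tested in one dimension: on U = (0, 1) with g equal to 1 at the right endpoint and 0 at the left one, the harmonic solution is u(x) = x, so a Brownian path started at x should exit through 1 with probability x. A crude Euler sketch (the time step and sample size are arbitrary choices; the discrete walk slightly overshoots the boundary, so a small bias is expected):

```python
import math, random

def prob_exit_right(x0, dt=1e-3, n_paths=4000, seed=3):
    """Fraction of Brownian paths from x0 that leave (0,1) through the right endpoint."""
    rng = random.Random(seed)
    sqrt_dt = math.sqrt(dt)
    hits = 0
    for _ in range(n_paths):
        x = x0
        while 0.0 < x < 1.0:
            x += rng.gauss(0.0, 1.0) * sqrt_dt
        if x >= 1.0:
            hits += 1
    return hits / n_paths

p = prob_exit_right(0.25)
print(p)  # u(0.25) = 0.25 for the harmonic function u(x) = x
```

This is exactly Proposition 4 below with Γ = {1}: the solution of the Dirichlet problem is a hitting probability.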
We have to check that the Brownian motion cannot stay inside U for an infinite time. To see this, suppose that U is contained in the half-space

{x_1 < a}

for a certain a. Then we have

τ_x < τ_{1,x}^a

where the subindex indicates the component and

τ_{1,x}^a = inf(t | X_1 = x_1 + W_1(t) = a).
Thus

P(τ_{1,x}^a ≤ t) = P(τ_{1,x}^a ≤ t, x_1 + W_1(t) < a) + P(τ_{1,x}^a ≤ t, x_1 + W_1(t) ≥ a)
= 2P(τ_{1,x}^a ≤ t, x_1 + W_1(t) ≥ a)
= 2P(x_1 + W_1(t) ≥ a)

because of the definition of the stopping time and the symmetry of the Brownian motion. This probability can be calculated explicitly, and taking the limit as t → ∞ we obtain that it equals 1, since it becomes twice the integral of the standard normal distribution over the positive semiaxis. Indeed, with the change of variables x = y/√t we obtain

2P(x_1 + W_1(t) ≥ a) = √(2/π) ∫_{(a-x_1)/√t}^∞ e^{-y²/2} dy.
Using this theorem we can obtain the mean value property.
Theorem 13 (Mean value property). If u is harmonic then we have

u(x) = (1/|∂B(x, r)|) ∫_{∂B(x,r)} u(y) dy.

Proof. We use the previous theorem, the isotropy of the Brownian motion and the fact that the total measure has to be one.
Another consequence is that in the case

g(x) = 1_Γ

with Γ ⊂ ∂U, the function u(x) is the probability of hitting Γ.
Proposition 4. Let u be the solution of (2.2) with g(x) = 1_Γ. Then we have

u(x) = P(X_0^{τ_x}(x) ∈ Γ).
Now we turn to the second approach mentioned above, for the harmonic functions. We show that if we define

v(x) = E_x[g(X_0^{τ_x}(x))]

we obtain a classical solution of the PDE considered.
It is a well-known fact that the mean value property holds for harmonic functions. Moreover, if a continuous function verifies the mean value property then it is harmonic. In chapter 1 we saw that the semigroup is continuous in x, so if we show that the mean value property holds for v we are done. For this we have the previous result.
In the case x ∈ ∂U we have τ_x = 0, so the boundary condition holds.
We have proved the following result.
Theorem 14. Let u be the solution of (2.1) with f = 0. Then we have

u(x) = E_x[g(X(τ_x))].   (2.4)

Conversely, let u be the function defined by the previous equation for a certain g on the boundary of a given domain. Then u is a classical solution of (2.1).
If we consider the case with g = 0 and f a C_b^α function we have
Theorem 15. If u is the classical solution of (2.1) with g = 0 and f ∈ C_b^α(U) then

u(x) = E_x[∫_0^{τ_x} f(X_0^s(x)) ds].

Proof. The proof is similar to the previous one. We have to observe that the boundary term in the Itô formula is zero. We claim that E[τ_x] < ∞ (see the following section for the proof). This fact, together with f ∈ L^∞(U) ∩ C(U), gives us the finiteness of the integral.
We can combine the previous results to obtain a formula for the complete problem (2.1).
Theorem 16. Let g be a continuous function and f ∈ C_b^α(U). Then a classical solution u of (2.1) verifies

u(x) = E_x[∫_0^{τ_x} f(X_0^s(x)) ds] + E_x[g(X(τ_x))].

Proof. It is enough to put together the previous results.
Now we are interested in the steady Schrödinger equation (in units such that the constants are unitary, and we suppose that the function is real),

-½∆u(x) + c(x)u(x) = f if x ∈ U,   u(x) = 0 if x ∈ ∂U   (2.5)

The function c(x) (the potential) is positive and Lipschitz. The function f is Hölder continuous and bounded.
We impose a sign on c to avoid an eigenvalue problem (and hence non-uniqueness).
We have the following result:
Theorem 17 (Feynman-Kac). The solution of (2.5), with f and c satisfying the previous hypotheses, is given by

u(x) = E_x[∫_0^{τ_x} f(X_0^t(x)) e^{-∫_0^t c(X_0^s(x)) ds} dt].   (2.6)
Proof. We have E[τ_x] < ∞ (see the following section for the proof), and since f and c are bounded functions the previous integrals are bounded. Let u be the solution of (2.5) and consider

R_0^t(x) = u(X_0^t(x)) e^{-∫_0^t c(X_0^s(x)) ds}.

We also consider the processes

Z_0^t(x) = -∫_0^t c(X_0^s(x)) ds

and

Y_0^t(x) = e^{Z_0^t(x)}.
We want to apply the Itô formula.
The differentials of these processes are

dZ = -c(X_0^t(x)) dt

and

dY = -c(X_0^t(x)) Y_0^t(x) dt.

Applying the product rule (see appendix A) to R_0^t(x) we obtain

d(u(X_0^t(x)) e^{-∫_0^t c(X_0^s(x)) ds}) = (du(X_0^t(x))) Y_0^t(x) + u(X_0^t(x)) dY.
We apply the Itô formula (see appendix A) to u and we obtain

d(u(X_0^t(x)) e^{-∫_0^t c(X_0^s(x)) ds}) = (½∆u(X_0^t(x)) dt + Σ_{i=1}^d (∂u(X_0^t(x))/∂x_i) dW_i + u(X_0^t(x))(-c(X_0^t(x)) dt)) Y_0^t(x).
Now we integrate over (0, τ_x) and take expectations. Thus

E_x[u(X_0^{τ_x}(x)) e^{-∫_0^{τ_x} c(X_0^s(x)) ds}] - E_x[u(x)] =
= E_x[∫_0^{τ_x} (½∆u(X_0^t(x)) - c(X_0^t(x)) u(X_0^t(x))) Y_0^t(x) dt].

We conclude using the boundary conditions and the equation:

u(x) = E_x[∫_0^{τ_x} f(X_0^t(x)) e^{-∫_0^t c(X_0^s(x)) ds} dt].
This equation describes diffusion with killing. We consider a Brownian particle which can disappear. Let c(X_0^t(x))h be the probability of disappearing in the interval (t, t + h). Then the probability of survival until time t is approximated by

(1 - c(X_0^{t_1}(x))h)(1 - c(X_0^{t_2}(x))h) ⋯ (1 - c(X_0^{t_n}(x))h)

where t_i is a partition of the interval (0, t) with step h. As we take the limit h → 0 this probability converges to the exponential

e^{-∫_0^t c(X_0^s(x)) ds}.

And so u is the mean of f along the Brownian paths of the diffusion, weighted by survival, until hitting the boundary of U.
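For a constant potential the killing weight is explicit and (2.6) can be checked against a closed-form solution. On U = (0, 1) with f = 1 and c = 1, the ODE -u''/2 + u = 1, u(0) = u(1) = 0 has solution u(x) = 1 - cosh(√2(x - 1/2))/cosh(√2/2). A sketch of the check (step size and sample size are arbitrary choices; the Euler walk biases τ_x slightly upward):

```python
import math, random

def u_mc(x0, c=1.0, dt=1e-3, n_paths=4000, seed=5):
    """Monte Carlo for -u''/2 + c u = 1 on (0,1) with u = 0 on the boundary.
    For constant c: u(x) = E_x[ ∫_0^τ e^{-c s} ds ] = E_x[(1 - e^{-c τ}) / c]."""
    rng = random.Random(seed)
    sqrt_dt = math.sqrt(dt)
    total = 0.0
    for _ in range(n_paths):
        x, t = x0, 0.0
        while 0.0 < x < 1.0:
            x += rng.gauss(0.0, 1.0) * sqrt_dt
            t += dt
        total += (1.0 - math.exp(-c * t)) / c
    return total / n_paths

x0 = 0.5
estimate = u_mc(x0)
exact = 1.0 - math.cosh(math.sqrt(2.0) * (x0 - 0.5)) / math.cosh(math.sqrt(2.0) / 2.0)
print(estimate, exact)  # exact ≈ 0.2067
```

The survival weight e^{-cτ} is precisely the killing probability discussed above, specialized to constant c.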
2.2 Poisson equation and shape recognition
This section is based on the paper [GGSBB]; however, the arguments in that paper use random walks. In the limit a constant appears, which they do not track. This constant is ½: the idea is that in the limit we consider a Brownian motion with generator ½∆.
We consider a silhouette whose boundary is a simple closed curve.
The 'classical' way of obtaining properties is to assign, to each point x in the silhouette, a value that gives its position with respect to the boundary. The popular way to do that is to consider the distance. In this work we consider a different one: we consider Brownian particles started at each point and we measure the expected time of hitting the boundary. As mentioned before, in the paper [GGSBB] they argue with random walks and a discretized laplacian.
We consider the Brownian motion X_0^t(x) = x + W(t) for each interior point x, and the equation

½∆u = -1

with homogeneous Dirichlet boundary conditions. With the notation of the previous section we are in the case f = 1 and g = 0. We obtain a factor 2 that is not in the paper, but the authors mention that a constant appears and they take this constant to be one. Let τ_x = inf{t | X_0^t(x) ∈ ∂S}, where S is the silhouette (domain).
Theorem 18. For the classical solution of the previous equation we have

u(x) = E[τ_x].   (2.7)
Proof. We apply the Itô formula (we can, because u is regular enough) to the process u(X_0^{min(τ_x,n)}(x)), and after taking expectations we obtain

E_x[u(X_0^{min(τ_x,n)}(x))] - E_x[u(x)] = E_x[∫_0^{min(τ_x,n)} ½∆u(X_0^s(x)) ds].

We use the boundary conditions and the equation to obtain

E_x[u(X_0^{min(τ_x,n)}(x))] - u(x) = -E_x[∫_0^{min(τ_x,n)} 1 ds] = -E_x[min(τ_x, n)].

We showed before that P(τ_x < ∞) = 1, but this does not give integrability. To see that τ_x is integrable we use the properties of u: since u is bounded, lim_{n→∞} E_x[min(τ_x, n)] < ∞.
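In one dimension Theorem 18 is fully explicit: on S = (0, 1), the solution of ½u'' = -1, u(0) = u(1) = 0 is u(x) = x(1 - x), so E[τ_x] = x(1 - x). A sketch of the check (step size and sample count are arbitrary choices; the discrete walk overshoots the boundary, inflating the estimate by a few percent):

```python
import math, random

def mean_exit_time(x0, dt=2.5e-4, n_paths=3000, seed=2):
    """Average exit time of Brownian paths from (0,1), started at x0."""
    rng = random.Random(seed)
    sqrt_dt = math.sqrt(dt)
    total = 0.0
    for _ in range(n_paths):
        x, steps = x0, 0
        while 0.0 < x < 1.0:
            x += rng.gauss(0.0, 1.0) * sqrt_dt
            steps += 1
        total += steps * dt
    return total / n_paths

x0 = 0.3
estimate = mean_exit_time(x0)
exact = x0 * (1.0 - x0)   # solution of u''/2 = -1 with u(0) = u(1) = 0
print(estimate, exact)
```

This is the same quantity the shape-recognition method computes on a silhouette, with the interval replaced by the 2D domain S.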
Figure 2.1: Numerical experiment, silhouette and u.
We know that u is as regular as the boundary allows, and that it is positive. The positivity can be seen directly from the expectation.
The level sets of u give smooth approximations of the boundary. In the paper [GGSBB] the authors mention other properties of u (existence and uniqueness of solutions with other boundary conditions, the mean value property...).
We can use u to divide the silhouette into parts by thresholding. But with this method we can lose information in the peripheral parts of our silhouette.
To solve this problem (loss of information) we consider

Φ(x) = u(x) + |∇u(x)|².

The most important properties of Φ are that high values indicate concavities (the gradient is big there) and that we can divide the silhouette without loss of information using this function.
To detect concavities we can use Φ, but there is a better way. We define the function

Ψ(x) = -∇ · (∇u/|∇u|).

Remark 4 This operator is the 1-laplacian. We will study it in more depth in chapter 5.
High values of |Ψ| indicate the curves in the silhouette, so we can obtain the corners of the silhouette. The negative values of Ψ indicate concavities: the more negative the value, the more 'pointed' the concavity. Conversely, high positive values indicate convexities.
To find the 'skeleton' of our silhouette we define the function

Ψ̃ = -uΨ/|∇u|.
And we use the threshold method.
We want to discern between different silhouettes. To do that we consider 'decision trees' (see [GGSBB]).
We give an example.
We see (Figure 2.2) the high values of Φ close to the chimney of the house; this is the concavity zone. We also see that Ψ detects the concavities and the convexities (lower corners), and the 'skeleton', the high values in the center of the house.
Remark 5 All the programs we use have a time counter. Running the programs on my PC (1.73 GHz) with 10⁻⁴ as the tolerance takes 20 seconds. But this is a small image, 50 × 150 pixels.
2.3 A general elliptic equation
Figure 2.2: Results, Φ in the upper figure, Ψ in the lower figure.

We can obtain representation formulas for other elliptic operators. We are going to study

-Ãu = -½ Σ_{i,j=1}^d a_{i,j}(x) ∂²u/∂x_i∂x_j - Σ_{i=1}^d b_i(x) ∂u/∂x_i + c(x)u = f if x ∈ U,   (2.8)
u = g if x ∈ ∂U

where a_{i,j}, b_i, c are in C_b^{2,α} (for α ∈ (0, 1]), c is positive and the characteristic form Σ_{i,j=1}^d a_{i,j} λ_i λ_j is positive. We can apply the same idea to this operator.³ In physics, this operator is a hamiltonian and c is the potential. Let g be a continuous and bounded function and f ∈ C_b^α.
Moreover we suppose

(a_{i,j})(x) = σ(x)σ^t(x)

with σ(x) a C_b^{2,α} matrix. Such a σ exists if the determinant of a never vanishes.⁴
It is possible that σ is not unique, and so there is more than one diffusion; however, this is not a problem because the measures in (1.15) induced by the respective diffusions are the same. Thus the expectation is well defined (see [F] for the proof).
Given an equation as (2.8), the diffusion we consider is the solution of⁵

dX_0^t(x) = b(X_0^t(x)) dt + σ(X_0^t(x)) dW.

Let τ_x be a stopping time as defined in the previous section.

³ We need to impose conditions to have a classical solution (see appendix A).
⁴ This result is not optimal (see [F]).
⁵ Recall the notation: X_0^t(x) is the value at time t of the solution of the SDE started at x.
If the previous regularity conditions on the coefficients of (2.8) hold, then we have the following results (the proofs are similar to those of the laplacian case):
Theorem 19. Let g be a continuous function and f ∈ C_b^α(U). Suppose c = 0. Then the solution u of (2.8) is

u(x) = E_x[∫_0^{τ_x} f(X_0^s(x)) ds] + E_x[g(X(τ_x))].
Theorem 20 (Feynman-Kac). The solution of (2.8), with g ∈ C_b, f ∈ C_b^α(U) and c bounded, positive and Lipschitz, is given by

u(x) = E_x[∫_0^{τ_x} f(X_0^t(x)) e^{-∫_0^t c(X_0^s(x)) ds} dt] + E_x[g(X(τ_x)) e^{-∫_0^{τ_x} c(X_0^s(x)) ds}].   (2.9)
We can apply these methods to 'iterated' operators, for example the bilaplacian with the correct boundary conditions,

∆∆u = 0,   u = g_1,   ∆u = g_2.

Indeed, we can split the equation as

∆v = 0,   v = g_2,
∆u = v,   u = g_1

and apply the same idea as before.
2.4 Unbounded domains
Now we are interested in elliptic operators in unbounded domains. For example, we are interested in the Schrödinger equation, (2.5), in the whole space.
We cannot argue with stopping times, because there is no boundary to hit. However, we can consider the Schrödinger equation in a ball of radius R centered at the origin (we write this domain as U_R) and take the limit in R. We expect that lim U_R = ℝ^d and τ_x → ∞.
So, if we consider the 3D case for the laplacian,

-∆u = f if x ∈ ℝ³,

we know that the Green function (see [Ev2]) is

G(x, y) = 1/(4π|x - y|)

and that u is the convolution of f with G. If our idea is correct, our diffusion, with transition density p(t, x, y), has to satisfy that the time integral of p is the Green function.
Indeed, we have

∫_0^∞ p(t, x, y) dt = G(x, y).

To see this we perform the change of variables

t = |x - y|²/(2s)

in the integral.
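The identity can also be checked by numerical quadrature. The sketch below uses the heat kernel of the full laplacian ∆ in ℝ³ (variance 2t per coordinate), for which the time integral is exactly 1/(4π|x-y|); with the ½∆ normalization used elsewhere in the text an extra factor 2 appears, as noted in section 2.2. The truncation and grid sizes are arbitrary choices:

```python
import math

def heat_kernel_3d(t, r):
    """Transition density at distance r for the generator ∆ in R^3."""
    return (4.0 * math.pi * t) ** -1.5 * math.exp(-r * r / (4.0 * t))

def integral_over_time(r, lo=-10.0, hi=30.0, n=40000):
    """∫_0^∞ p(t, r) dt via the substitution t = e^s (trapezoid rule in s)."""
    h = (hi - lo) / n
    total = 0.0
    for i in range(n + 1):
        s = lo + i * h
        w = 0.5 if i in (0, n) else 1.0
        total += w * heat_kernel_3d(math.exp(s), r) * math.exp(s)  # dt = e^s ds
    return total * h

r = 0.8
lhs = integral_over_time(r)
rhs = 1.0 / (4.0 * math.pi * r)   # Green function of -∆ in R^3
print(lhs, rhs)
```

The logarithmic substitution handles both the integrable singularity at t = 0 and the slow t^{-3/2} decay of the kernel at large times.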
The same is valid for the elliptic operator (2.8).
We have obtained well-known results with different methods, which sometimes (Schrödinger or Navier-Stokes equations) give new intuition on the problem.
These methods can also be applied to obtain properties of these equations. For example, in [CR] the authors prove a Harnack inequality for (2.5).
Summarizing, we have a representation formula for the classical solution (which we know exists) of the elliptic problem (2.8) as an integral in (1.15), with U a bounded or unbounded domain. The advantages of these methods are the new intuition and the new numerical methods.
Chapter 3

Representation formulas for parabolic equations

3.1 A general parabolic equation
In the first chapter we saw that a Markov process defines a contraction semigroup on L^∞. In the case where the Markov process is an Itô diffusion, the generator of the semigroup is an elliptic operator, and so the function T_t f(x) solves a parabolic equation.¹
Remark 6 We suppose f ∈ C_b²(ℝ^d) ∩ L¹(ℝ^d). Then, using interpolation, we have f ∈ L^p for 1 ≤ p ≤ ∞.
We know that T_t f(x) is C¹ in time and that the two derivatives in space depend on the regularity of the coefficients.² We consider SDE coefficients in C_b^{2,α}, so the stochastic flow has two spatial derivatives (theorem 7).
In this chapter we consider the Cauchy problem for general linear parabolic equations; however, if we consider stopping times we can do the same for initial-boundary problems (if we suppose a smooth domain, or at least one with the interior cone property, see [Du]).
In the first chapter we saw that if f ∈ C_b² ∩ L¹(ℝ^d) and we consider the heat equation

u_t(t, x) = -H_0 u(t, x) = ½∆u(t, x),   u(0, x) = f(x)

we have, in the notation used in the text,

u(t, x) = T_t f(x) = E_x[f(X_0^t(x))] = E_0[f(x + W_0^t(0))] = e^{-H_0 t} f(x)

and these expectations are with respect to the Wiener measure. So we are integrating over functions. We see that the kernel is the integral with respect to the Brownian bridge measure on the functions,³ i.e.

p(t - s, x, y) = ∫_{C_{xy}[s,t]} dW_{xy}.

¹ We consider that the semigroup takes functions in C_b². This result is not optimal, see theorem 11.
² We can write the derivatives of f with the Bismut-Elworthy-Li formula (theorem 11).
We need another integration to obtain the measure induced by the Brownian motion:

∫_ℝ p(t, x, y) dy = ∫_{C_x[0,T]} dW_x.
Remark 7 Using theorems 11 and 7 we can conclude that if f ∈ C_b and our diffusion X_0^t(x) has C^∞(ℝ^d) coefficients, then T_t f(x) ∈ C^∞(ℝ^d). For example, this occurs with the heat equation.
Remark 8 In the third section of this chapter we will write

p(t - s, x, y) = W(s, x, t, y).

This is because we want to preserve, as far as possible, the Feynman notation.
Remark 9 If we do not have a unique diffusion for a parabolic problem (we have many), this is not a problem, because the induced measures are equivalent, and so the expectation is well-defined (see [F]).
Let⁴

Ã = A - c(x) = ½ Σ_{i,j=1}^d a_{i,j}(x) ∂²/∂x_i∂x_j + Σ_{i=1}^d b_i(x) ∂/∂x_i - c(x)

be an elliptic operator with C_b^{2,α} coefficients. Moreover, we suppose c(x) ≥ 0.⁵ We suppose that A is the generator of an Itô diffusion, that is,

(a_{i,j}(x)) = σ(x)σ^t(x)

for a C_b^{2,α} matrix σ. The stochastic flow defined by the SDE

dX(t) = b(X(t)) dt + σ(X(t)) dW,   X(0) = x

is C² in the spatial variable (see theorem 7).
Consider a parabolic equation

∂u(t, x)/∂t = Ãu(t, x),   u(0, x) = f(x).   (3.1)

We have the following results:

³ This is true for a general diffusion.
⁴ The sign of c is chosen to have a contraction semigroup on L^∞. We remark that the coefficients of the elliptic operator are time-independent.
⁵ If we take a negative c, we can obtain non-trivial stationary solutions. These solutions are related to the eigenvalues of the elliptic problem.
Theorem 21. Consider the equation (3.1) fulfilling the previous hypotheses and with c = 0. Let f ∈ C_b²(ℝ^d) and let u(t, x) be its classical solution. Then we have

u(t, x) = T_t f(x) = E_x[f(X_0^t(x))] = e^{At} f(x)

where the measure is the one induced by the diffusion. Conversely, if we define v(t, x) = T_t f(x), then v verifies the equation (3.1) in the classical sense.
Proof. Fix t. We apply the Itô formula (we can, because of the smoothness of u) to u(t - s, X_0^s(x)):

u(t - s, X_0^s(x)) - u(t, x) = ∫_0^s (∂/∂r + A) u(t - r, X_0^r(x)) dr + ∫_0^s ∇u(t - r, X_0^r(x)) · σ dW.

We observe that

∂/∂r = -∂/∂t.

Now we take s = t and take expectations, obtaining

E_x[f(X_0^t(x))] - u(t, x) = 0

and so

u(t, x) = T_t f(x).
Now let v(t, x) be a function defined as in the statement. In chapter 1 we saw that v(t, x) is C¹ in time. We deduce that v is C² spatially because of the smoothness of the SDE coefficients (see [Ku] or theorem 7) and the Bismut-Elworthy-Li formula (theorem 11). Obviously the initial condition holds. Finally we have to prove that the equation holds. We saw that the generator of the diffusion is our elliptic operator A, and so the semigroup solves the parabolic equation.
If we consider c(x) ≥ 0, then we have the Feynman-Kac formula (now in the parabolic case). Consider the parabolic problem

∂u/∂t = Ãu,   u(0, x) = f(x).   (3.2)
Then we have
Theorem 22 (Feynman-Kac (parabolic case)). Consider the equation (3.2) fulfilling the previous hypotheses and with c ≥ 0. Let f ∈ C_b²(ℝ^d) and let u(t, x) be the classical solution of this problem. Then

u(t, x) = T̃_t f(x) = E_x[f(X_0^t(x)) e^{-∫_0^t c(X_0^s(x)) ds}]

where the measure is the one induced by the diffusion.
Proof. Fix t. We consider the processes

Z_0^r(x) = −∫_0^r c(\vec X_0^s(x)) ds,   Y_0^r(x) = e^{Z_0^r(x)},

with differentials

dZ_0^r(x) = −c(\vec X_0^r(x)) dr,   dY_0^r(x) = −c(\vec X_0^r(x)) Y_0^r(x) dr.

Then the differential of the product u(t − r, \vec X_0^r(x)) Y_0^r(x) is

d(u(t − r, \vec X_0^r(x)) Y_0^r(x)) = d(u(t − r, \vec X_0^r(x))) Y_0^r(x) + u dY_0^r(x).
By the Itô formula applied to u we have

d(u(t − r, \vec X_0^r(x))) = (−∂u/∂t + Au)(t − r, \vec X_0^r(x)) dr + ∇u(t − r, \vec X_0^r(x)) · σ d\vec W.
If we introduce this into the previous equation we obtain

d(u(t − r, \vec X_0^r(x)) Y_0^r(x)) = (−∂u/∂t + Au)(t − r, \vec X_0^r(x)) Y_0^r(x) dr + ∇u(t − r, \vec X_0^r(x)) · σ d\vec W Y_0^r(x) − u(t − r, \vec X_0^r(x)) c(\vec X_0^r(x)) Y_0^r(x) dr.
We integrate up to r = t and take expectations. The result is

T̃_t f(x) − u(t, x) = E_x[∫_0^t (−∂u/∂t + Ãu)(t − r, \vec X_0^r(x)) e^{−∫_0^r c(\vec X_0^s(x)) ds} dr] = 0.
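The exponential weight e^{−∫_0^t c(\vec X_0^s(x)) ds} of the Feynman-Kac formula can be accumulated along each simulated path. The sketch below (an illustration, not taken from the thesis) assumes σ = 1, b = 0 and a constant potential c(x) = c0, so the weight factors out and the exact answer e^{−c0 t} e^{−t/2} cos(x) for f(x) = cos(x) stays checkable.

```python
# Monte Carlo sketch of the parabolic Feynman-Kac formula
#   u(t,x) = E_x[ f(X_0^t(x)) exp(-int_0^t c(X_0^s(x)) ds) ].
# Illustrative assumptions: sigma = 1, b = 0; a constant potential c keeps
# the exact solution checkable.
import math
import random

def feynman_kac(f, c, x, t, n_paths=20000, n_steps=50, seed=1):
    rng = random.Random(seed)
    dt = t / n_steps
    total = 0.0
    for _ in range(n_paths):
        y, integral_c = x, 0.0
        for _ in range(n_steps):
            integral_c += c(y) * dt          # left-point rule for int c(X_s) ds
            y += math.sqrt(dt) * rng.gauss(0.0, 1.0)
        total += f(y) * math.exp(-integral_c)
    return total / n_paths

if __name__ == "__main__":
    t, x, c0 = 1.0, 0.3, 0.7
    approx = feynman_kac(math.cos, lambda y: c0, x, t)
    exact = math.exp(-c0 * t) * math.exp(-t / 2) * math.cos(x)
    print(approx, exact)
```

For a non-constant c(x) the left-point rule introduces an O(dt) quadrature bias on top of the Monte Carlo error; for the constant potential used here the weight is exact.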
Remark 10. We point out that the PDE that appears when we apply the Itô formula to the SDE as in the first chapter is backward in time, i.e.

∂u/∂r + Au.

To obtain the 'correct' direction for the time we consider u(t − r, x). However, this is not the only way. In [Ku] we can see that a backward SDE with final datum x gives us the equation

∂u/∂t − Au.

So the idea is that we have f(\vec X_0^t(x)) particles at \vec X_0^t(x); this is our 'initial' point.^6 These particles move according to the SDE considered. Thus

u(t, x) = T_t f(x)

6 We recall that in our statement of the SDE (Chapter 1) this was our final point, but the intuition comes if we consider this point the initial point.
is the expected amount of particles at the 'final' point x at time t.
We can understand the Itô diffusions as 'characteristic curves'. And so, given x and t, we calculate u(t, x) by evaluating f at the original point \vec X_0^t(x) and taking expectations, the same as in the deterministic characteristic curves case.
Remark 11. The original problem for (3.2) with a_{i,j} = δ_{i,j}, \vec b = 0 is quantum mechanics, so commonly we see

∂u(t, x)/∂t = (1/2)Δu(t, x) − V(x)u(t, x),   u(0, x) = f(x).

We see that

||T̃_t f(x)||_∞ ≤ ||f(x)||_∞ e^{−t inf_x c(x)} ≤ ||f(x)||_∞,

so T̃_t satisfies the same properties as the semigroups T_t.
Moreover, from mass conservation,

∫_{R^d} ∫_{R^d} p(t, x, y) f(y) dy dx = ∫_{R^d} E_x[f(X_0^t(x))] dx ≤ ∫_{R^d} f(y) dy,

and from being a contraction semigroup on L∞ we conclude

||T_t f(x)||_p ≤ ||f||_1^{1/p} ||f||_∞^{1−1/p}, ∀p ∈ [1, ∞], ∀t ≥ 0.

If we have a result about the decay in the L∞ norm of the probability density, then we obtain the decay rate in L^p for all p. In [Fr] (Theorem 4.5) we have the following bound:

|p(t, x, y)| ≤ C / t^{d/2}.
Another proof of the result is to consider coordinates adapted to the advection. If we consider the deterministic characteristic curves

\vec Y(t) = ∫_0^t \vec b(\vec Y(s)) ds

and we define

v(t, x) = u(t, x − \vec Y(t)),

then

Σ_{i,j=1}^d a_{i,j}(x) u_{x_i,x_j} = Σ_{i,j=1}^d a_{i,j}(x) v_{x_i,x_j},

but

v_t = u_t − ∇u · \vec Y′(t) = u_t − ∇u · \vec b(x).
We see that v solves the 'heat' equation (with coefficients a_{i,j}(x)) and we have ||u||_∞ = ||v||_∞, so we obtain the decay (see [I] for the fundamental solution of this problem) and we conclude

||T_t f(x)||_p ≤ ||f||_1^{1/p} ||f||_∞^{1−1/p} ≤ c / (t^{d/2})^{1−1/p}, ∀p ∈ (1, ∞], ∀t ≥ 0.
Remark 12. We can generalize these methods to bounded domains with Dirichlet boundary conditions. We do it by considering a stopping time τ_x(ω) as in the previous chapter and distinguishing the cases t < τ_x and t ≥ τ_x, so we obtain a boundary term. We do not do that here because it is quite similar to the method of the previous chapter.
3.2 The Fisher equation

We consider the one-dimensional Fisher equation^7

∂u(t, x)/∂t = (1/2) ∂²u(t, x)/∂x² + u(t, x)² − u(t, x),   u(0, x) = f(x)   (3.3)
Figure 3.1: Traveling wave solution of (3.3).
To obtain our stochastic representation we consider a slightly different process: a branching brownian motion. Consider a particle following a brownian path with an exponential time^8 T. At this exponential time the particle branches into two identical particles, and each particle then follows a brownian path with branching. These particles are independent of one another. We write x_1(t), ..., x_n(t) for the positions of the particles, with

P(n = k) = e^{−t}(1 − e^{−t})^{k−1}.

7 Another name for this equation is the Kolmogorov-Petrovskii-Piskunov equation.
8 That is, P(T > t) = e^{−t}.
We want to show that, under certain assumptions on f, the solution of (3.3) is

u(t, x) = E_x[f(x + x_1(t)) ⋯ f(x + x_n(t))].

The equation is used in population dynamics, when the population considered presents movement (diffusion) and growth (births).^9 This is what the process means: birth and diffusion.
We need an assumption on f to guarantee the existence of the expectation. Precisely, we have

Theorem 23. Given f ∈ C_b^2(R^d) with 0 ≤ f ≤ 1, we have that

u(t, x) = E_x[f(x + x_1(t)) ⋯ f(x + x_n(t))]

satisfies the equation (3.3).
Proof. We consider the cases T ≤ t and T > t. We have

P(T > t) = ∫_t^∞ e^{−s} ds = e^{−t}

and then, if T > t,

u(t, x) = E_x[f(x + W_0^t(0))] = e^{−H_0 t} f(x).

Suppose that we are in the other case, T ∈ (s, s + ds), with s + ds < t. The probability of this situation is e^{−s} ds. The two new particles start their movement at x + x_1(T), which gives us two independent copies of the same process but translated in space (x + x_1(T)) and time (t − s). As we have independence, the expectation of the product is the product of expectations, and we obtain u²(t − s, x + x_1(t)). Taking the expectation over all positions x + x_1(t) = y we obtain the term

∫_{−∞}^∞ P(x + x_1(s) ∈ dy) u²(t − s, y) = e^{−sH_0} u²(t − s, x).
If we put all the previous calculations together we conclude

u(t, x) = P(T > t) e^{−H_0 t} f(x) + ∫_0^t P(T ∈ ds) ∫_{−∞}^∞ P(x + x_1(s) ∈ dy) u²(t − s, y),

or equivalently

u(t, x) = e^{−t} e^{−H_0 t} f(x) + ∫_0^t e^{−s} e^{−sH_0} u²(t − s, x) ds.

We have u(0, x) = f(x).
9 Actually the equation in population dynamics has a reaction term u − u², but from (3.3) we recover it with the change u ↦ 1 − u.
We change variables s′ = t − s, obtaining

u(t, x) = e^{−t} e^{−H_0 t} f(x) + ∫_0^t e^{s′−t} e^{(s′−t)H_0} u²(s′, x) ds′.

Now, taking derivatives in t, we have

∂u(t, x)/∂t = −e^{−t} e^{−H_0 t} f(x) + e^{−t}(−H_0) e^{−H_0 t} f(x) + u²(t, x) + ∫_0^t ∂/∂t (e^{s′−t} e^{(s′−t)H_0} u²(s′, x)) ds′.
We observe that the remaining terms are exactly those needed to complete the −u and u_xx terms. So u(t, x) satisfies the equation (3.3) with the initial datum f. The smoothness is not a problem because of the previous expression and the regularity of f and of the process considered.
Previously we had representation formulas for linear equations; this is the first non-linear equation. See [McK] for a more detailed study. The idea of a particle with movement and branching can be used in other semilinear parabolic (or elliptic) equations with a polynomial nonlinearity.
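The branching representation can be simulated directly. The sketch below (illustrative, not from the thesis) samples the branching brownian motion recursively; for the check we take a constant datum f = p ∈ [0, 1], for which u solves the ODE u′ = u² − u, i.e. u(t) = p / (p + (1 − p) e^t), consistent with E[p^n] for the geometric law of n stated above.

```python
# Monte Carlo sketch of the branching-brownian-motion representation
#   u(t,x) = E_x[ f(x + x_1(t)) ... f(x + x_n(t)) ]
# for the Fisher equation (3.3).  Each particle carries an Exp(1) clock and,
# when it rings, splits into two independent copies.  The constant datum used
# in the check is an illustrative assumption.
import math
import random

def branching_sample(f, x, t, rng):
    """Product of f over the particles alive at time t, one realization."""
    tau = rng.expovariate(1.0)               # exponential branching time
    if tau >= t:                             # no branching: one brownian path
        return f(x + rng.gauss(0.0, math.sqrt(t)))
    y = x + rng.gauss(0.0, math.sqrt(tau))   # move until the branching time
    # two independent subtrees started at the branching point
    return (branching_sample(f, y, t - tau, rng)
            * branching_sample(f, y, t - tau, rng))

def fisher_mc(f, x, t, n_samples=20000, seed=2):
    rng = random.Random(seed)
    return sum(branching_sample(f, x, t, rng) for _ in range(n_samples)) / n_samples

if __name__ == "__main__":
    p, t = 0.6, 1.0
    approx = fisher_mc(lambda y: p, 0.0, t)
    exact = p / (p + (1 - p) * math.exp(t))
    print(approx, exact)
```

Because 0 ≤ f ≤ 1, the product over particles is bounded by 1, which is exactly the integrability assumption of Theorem 23.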
3.3 Feynman and quantum mechanics

In this section we study the Feynman path integral and its relation with the objects defined in the previous chapters. We consider the one-dimensional case, but the general case is the same. We want to preserve the notation, which is different from the notation of the previous chapters. We want to preserve Feynman's calculations and ideas, so we do not worry about rigour.^10
We think that this section has a historical interest besides the academic one, so we conserve the original notations and calculations.
We have seen how to obtain a PDE solution thanks to a functional integration. With this idea we study quantum mechanics. However, this beautiful idea^11 is more widespread among physicists than among mathematicians.
From a mathematical point of view it is not a completely successful method, because of some problems with measures on function spaces (see below). Feynman, in [Fe], says:

10 Feynman, in [FH], says: 'The physicist cannot understand the mathematician's care in solving an idealized physical problem. The physicist knows the real problem is much more complicated. It has already been simplified by intuition, which discards the unimportant and often approximates the remainder.'
11 Beautiful at least in the humble author's opinion.
The formulation given here suffers from a serious drawback. The mathematical concepts needed are new. (...) One needs, in addition, an appropriate measure for the space of the argument functions x(t) of the functionals.
We follow closely [Fe] (a review of Feynman's thesis [Fe2]). We start with a 'Chapman-Kolmogorov' equation (see Chapter 1).
Let A, B, C be three measurements of the state of a certain physical system such that the system is completely known. Let P_{ab} be the probability of having B = b given A = a. In a similar way we define P_{bc}. Then, if we assume independence, we have

P_{abc} = P_{ab} P_{bc}

and we expect the relation

P_{ac} = Σ_b P_{abc}.
This is the greatest difference between classical mechanics and quantum mechanics. In the classical formulation the previous equation is true, while in the quantum formulation it is not. The reason is that the intermediate quantum state b is not always well defined. We have to measure (and so the system suffers an interference) for the previous equation to become true.
What we have in the quantum case is that there exist complex numbers φ_{ij} such that

P_{ab} = |φ_{ab}|²,   P_{bc} = |φ_{bc}|²,   P_{ac} = |φ_{ac}|²,

and we know the following relation:^12

φ_{ac} = Σ_b φ_{ab} φ_{bc}.
The physical meaning of this equation is that the probability that a spinless particle without relativistic effects goes from a to c can be calculated as the square of a complex quantity, each one associated to a possible path. We know by intuition the main result.
We recall the least action principle, which says that the classical path is a minimum of the action functional

A = ∫_{t_a}^{t_b} L(X′(t), X(t), t) dt,

where L is the lagrangian of the system. For example, for a particle with mass m moving under the influence of a potential V(x) the lagrangian is

(1/2) m X′(t)² − V(X(t)).

12 These equations are the Chapman-Kolmogorov equation seen previously.
Remark 13. Rigorously, it is not needed that our path be a minimizer; it is enough for it to be a critical point of the considered functional. In this context we use the equations to solve a variational problem; at other times the situation is the converse, and we solve a variational problem to solve a PDE problem (for example the Dirichlet problem for the Poisson equation).
If P(t_a, x_a, t_b, x_b) is the probability that our particle^13 moves from the point x_a at time t_a = 0 to the point x_b at time t_b = T, we have

P(b, a) = |K(t_a, x_a, t_b, x_b)|²

for a function K with

K(t_a, x_a, t_b, x_b) = Σ_{all paths from (t_a, x_a) to (t_b, x_b)} φ(X(t)).

Remark 14. It is necessary to remark that we do not specify the paths we consider, but the fact that the two extremal points are fixed suggests the space (1.16). Physicists are not used to specifying the space they consider.
The idea is that all paths contribute, but in a different manner. Finally,

φ(X(t)) = C e^{(i/ℏ) A(X(t))},

where C is a normalizing constant.
Physically this means that our particle, for example a photon, travels along 'all' possible paths between two points, but with different phases. This is the de Broglie wave-particle duality.
Before we continue we are going to think about how to recover classical mechanics by letting the Planck constant^14 go to zero. We obtain in a natural way the scale at which quantum mechanics is a good model. These limits are known as 'semiclassical limits', because they are not classical (ℏ ≠ 0) but the system behaves as if it were classical.
If we make a small perturbation at the classical scale, the contribution of the action is small at the classical scale, but not at the Planck constant scale, where the changes are big. Then our angle^15 oscillates in such a way that the total contribution is zero. If we consider a path X_1 which is not a critical point of the functional, there exists another path X_2, close to the former, such that the contribution of X_2 is the opposite of the contribution of X_1. So we only have to take into account the paths in a neighbourhood of X, where X is a critical point of the action. And in the classical limit (ℏ → 0) the only important path is the critical point of the functional.
13 We consider the pinned Wiener measure, so the two boundary points are fixed. This is not a problem (see Chapter 1); it is essentially the same as the Wiener measure used in the previous sections.
14 The Planck constant is associated to the quantization.
15 We have a complex exponential.
To define the path integral we consider a sequence of times^16 t_i = t_a + εi, i = 0, 1, ..., N, and the positions of the particle at these times, X_i = X(t_i). Then

K(t_a, x_a, t_b, x_b) ≈ C ∫ φ(X_1, ..., X_{N−1}) dX_1 dX_2 ⋯ dX_{N−1}.

We need to take the limit and a normalizing constant, and this is a problem. However, in the case of a particle moving under a potential V the constant is (see [FH])

C = A^{−N} = (2πiℏε/m)^{−N/2}.
So we have (in this case the limit exists; see [FH])

K(t_a, x_a, t_b, x_b) = lim_{ε→0} (1/A) ∫ e^{(i/ℏ) A(X_1,...,X_{N−1})} (dX_1/A) ⋯ (dX_{N−1}/A),   (3.4)
where A(X_1, ..., X_{N−1}) is the action over the path taking values X_i at times t_i and linear between them.^17 This definition of a path can be a problem even before taking the limit, because at the points where X′(t) is discontinuous (the positions X_i) the second derivative is not finite, that is, the acceleration is not finite. Feynman knew that this could be a problem, but he also said that he could 'solve' it by substituting X′′(t) with the finite differences (1/ε²)(X_{i+1} − 2X_i + X_{i−1}). Feynman was not worried about these problems and said:

Nevertheless, the concept of the sum over all paths, (...), is independent of a special definition and valid in spite of the failure of such definitions.

And so he wrote the path integral, understood as the limit when N → ∞ in the previous equation,^18

K(t_a, x_a, t_b, x_b) = ∫ e^{(i/ℏ) A(X(t))} DX(t).   (3.5)
In [Fe] we can see how, with formal calculations, Feynman 'shows' that K defined as above satisfies the Schrödinger equation

iℏ ∂ϕ(t, x)/∂t = −(ℏ²/2m) ∂²ϕ(t, x)/∂x² + V(x)ϕ(t, x) = Hϕ(t, x).

16 This is the way we defined the cylinders in Chapter 1.
17 Taking limits we obtain a nowhere differentiable path, exactly the same as in the brownian case.
18 All path integrals are understood as a limit process in N.
It is a well-known fact that, if f is a given initial value, the solution can be written as

ϕ(t + s, x) = ∫_R K(0, x, t + s, y) f(y) dy = ∫_R ∫_R K(0, x, s, z) K(s, z, t + s, y) f(y) dz dy = ∫_R K(0, x, s, z) ϕ(s, z) dz.

This equation gives us an N-fold iteration. If we consider t_i = t_a + iε we obtain the formula (3.4).
Moreover, we can write

K(t_a, x_a, t_b, x_b) = e^{−(i(t_b−t_a)/ℏ)H}(x_a, x_b).
Previously we wrote

p(t − s, x, y) = ∫_{C_x^y[s,t]} dW_x^y   (3.6)

for the heat kernel, where the initial point x and the final point y are fixed. This reminds us of the previous calculation (3.5).^19 We are going to show the relation with the integral with respect to the Wiener measure.
We consider the equation

∂ρ(t, x)/∂t = (1/2) ∂²ρ(t, x)/∂x² − V(x)ρ(t, x).
With V = 0 (the heat equation case) we have, if f is a given initial value,^20

ρ(t + s, x) = ∫_R W(0, x, t + s, y) f(y) dy = ∫_R ∫_R W(0, x, s, z) W(s, z, t + s, y) f(y) dz dy = ∫_R W(0, x, s, z) ρ(s, z) dz.

If we iterate N times, with times t_i = t_a + iε, we obtain the formula

ρ(t_b, x_b) = C(ε) ∫ e^{−(1/2ε) Σ_{l=0}^N (X_{l+1} − X_l)²} ρ(t_a, x_a) ∏_{l=1}^N dx_l.
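The composition rule used in this iteration is just the Chapman-Kolmogorov identity for the free heat kernel, which can be verified numerically. The sketch below is an illustration (the grid bounds and resolution are ad hoc choices, not from the thesis).

```python
# Numerical sketch of the composition rule
#   rho(t+s, x) = int W(0, x, s, z) rho(s, z) dz
# for the free heat kernel W(0,x,t,y) = exp(-(y-x)^2/(2t)) / sqrt(2 pi t).
# We check int W(0,x,s,z) W(s,z,t+s,y) dz = W(0,x,t+s,y) by trapezoidal
# quadrature; the quadrature window [-15, 15] is an ad hoc choice.
import math

def heat_kernel(t, x, y):
    return math.exp(-(y - x) ** 2 / (2.0 * t)) / math.sqrt(2.0 * math.pi * t)

def composed_kernel(s, t, x, y, z_min=-15.0, z_max=15.0, n=4000):
    """Trapezoid approximation of int W(0,x,s,z) W(s,z,t+s,y) dz."""
    h = (z_max - z_min) / n
    total = 0.0
    for i in range(n + 1):
        z = z_min + i * h
        w = 0.5 if i in (0, n) else 1.0
        total += w * heat_kernel(s, x, z) * heat_kernel(t, z, y)
    return total * h

if __name__ == "__main__":
    s, t, x, y = 0.7, 1.3, -0.5, 1.0
    print(composed_kernel(s, t, x, y), heat_kernel(s + t, x, y))
```

Since the integrand decays like a Gaussian, the trapezoid rule over a window containing the effective support is accurate to essentially machine precision here.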
If we compare the two formulas we expect that, as the calculation is valid for all N, in the limit N → ∞ we have

W(t_a, x_a, t_b, x_b) = N_1 ∫ e^{−(1/2) ∫_{t_a}^{t_b} X′²(t) dt} DX(t),   (3.7)

19 In this context the kernel is called the 'propagator'.
20 We change the notation for the kernel to conserve the Feynman notation.
where N_1 is a normalizing constant.
Recall that

W(t_a, x_a, t_b, x_b) = ∫_{C_{x_a}^{x_b}([t_a,t_b],R)} dW_{x_a}^{x_b}.

In the free particle (with mass m) case we have

K(t_a, x_a, t_b, x_b) = N_2 ∫ e^{(i/ℏ) ∫_{t_a}^{t_b} (1/2)X′²(t) dt} DX(t).   (3.8)
(3.8)
We observe the similarities between the kernels (3.7) and (3.8). However there are big differences between them. The integral in (3.7) is the
Wiener integral, and so completly rigourous. The integral in (3.8) is not
rigurous. A problem is the measure considered, who is finitely additive
(the Feynman measure or a Wiener measure with complex diffusion constant are finitely additives (see [Kl] and references therein). Other problem
is that we write X ′ (t), but the paths considered are in the space (1.16),
and so they are related to the brownian bridge, thus they are nowhere difRas X are brownian paths, we understand (see [Kl])
Rferentiables.
RHowever,
dW
′
X (s)dt = dt dt = dW . Thus this terms are Itô stochastic integrals.
If we consider a non-zero potential, the Feynman-Kac formula (see section 1 of this chapter) gives us

W(t_a, x_a, t_b, x_b) = E_{x_a}[e^{∫_{t_a}^{t_b} −V(X(t)) dt}] = ∫_{C_{x_a}^{x_b}([t_a,t_b],R)} e^{∫_{t_a}^{t_b} −V(X(t)) dt} dW_{x_a}^{x_b},   (3.9)

and in Feynman's notation this is (formally at least)

W(t_a, x_a, t_b, x_b) = N_3 ∫ e^{∫_{t_a}^{t_b} −(1/2)X′²(t) − V(X(t)) dt} DX(t).   (3.10)
Again we have similarities between them, but only in the Wiener case are the integrals rigorous.
We have seen that the path integral gives us a kernel. Actually, if

H = −(1/2)Δ + V(x),

then

W(t_a, x_a, t_b, x_b) = e^{−(t_b−t_a)H}(x_a, x_b).
We are going to see why the path integral is valid in the Wiener case. Taking limits in N we have that the integral over the path space (the limit of the product of measures is the measure in the path space) is infinite. Indeed,

∫ DX(t) = lim_{ε→0} ∫ ∏_i dX(t_i) = ∞.

So the exponential in (3.7) has to vanish; otherwise the integral would not be well defined. This happens when the path is nowhere differentiable, as in the brownian bridge (or the brownian motion) case. This is the process that induces the measure considered (see section 1 of this chapter).
There are attempts to make these integrals rigorous. For example, Itô considers a regularization term and passes to the limit to make it vanish. Indeed, he writes

lim_{ν→∞} N(ν) ∫ e^{(i/ℏ) ∫_{t_a}^{t_b} [(1/2)mX′²(t) − V(X(t))] dt} e^{−(1/2ν) ∫_{t_a}^{t_b} [X′′²(t) + X′²(t)] dt} DX(t).

The idea is that the paths are smoother than before, because now it is the second derivative (and not the first derivative) that is infinite. See [Kl] for other ways to improve the path integral.
Feynman wrote his thesis (1942) considering space-time paths. Some years later (1951) he considered phase space paths. To do that we recall the relation between the hamiltonian and the lagrangian of a system:

L(X′(t), X(t)) = p(t)q′(t) − H(p(t), q(t)).

Fixing t_a and t_b, we consider the phase space paths q(t) which, at these times, are at the fixed positions q_a and q_b. We consider the phase space paths p(t) with t_a ≤ t ≤ t_b, but now the initial and final positions are free. p, q are brownian paths. Thus, if we follow the previous ideas, we expect^21

K(t_a, q_a, t_b, q_b) = M ∫ e^{(i/ℏ) ∫_{t_a}^{t_b} p(s)q′(s) − H(p(s),q(s)) ds} Dp(t) Dq(t).   (3.11)

We recall that, as p, q are brownian paths, they are nowhere differentiable. However, as we said before, we can understand the ∫ q′(s) dt terms as stochastic integrals ∫ dW.
The great advantage of the phase space path integrals is that we can apply them to relativistic particles, so we have a way to define quantum fields. For example, if we consider a free relativistic particle, with hamiltonian (in units such that the speed of light is 1)

H(p(t), q(t)) = √(p²(t) + m²),

we have the kernel

K(t_a, q_a, t_b, q_b) = M ∫ e^{(i/ℏ) ∫_{t_a}^{t_b} p(s)q′(s) − √(p²(s)+m²) dt} Dp Dq.
Remark 15. This is a non-local operator. Recall that

p_x = −iℏ ∂/∂x,

21 As previously, these are formal calculations (see [Kl] for the rigorization).
thus

p_x² = −ℏ² ∂²/∂x².

Before we conclude we have to make some comments. Above we said that we can understand certain terms (∫ q′(s) dt) as stochastic integrals; however, we did not say in which sense, Itô or Stratonovich. We consider now the Stratonovich sense, because relativity imposes certain changes of coordinates. This is the reason why the Schrödinger equation is not a relativistic equation: we need an equation with the same order in all variables, and the Schrödinger equation does not satisfy this condition. So if we can apply the usual chain rule we have an advantage.
We can make the previous calculations rigorous if we consider the regularization (see [Kl])

lim_{ν→0} M_ν ∫ e^{(i/ℏ) ∫_{t_a}^{t_b} p(s)q′(s) − √(p²(s)+m²) dt} e^{−(1/2ν) ∫ p′²(s) + q′²(s) dt} Dp Dq.
Remark 16. |ϕ(t, x)|² is the probability of finding our particle at the point x at time t, but |ϕ(p, q)| is the probability of being in a certain state (p, q), so the interpretation is harder.
Chapter 4

Representation formulas in fluid dynamics

We consider the homogeneous, isothermal, isotropic and incompressible Navier-Stokes equations

[NS]  ∂\vec u/∂t + (\vec u · ∇)\vec u + ∇p − νΔ\vec u = 0,   ∇ · \vec u = 0.   (4.1)

The spatial domain is T^d, so we have periodic boundary conditions. The initial datum is \vec f(x) ∈ C^{k+1,α}, with k ≥ 1.
If we do not have viscosity, i.e. if ν = 0, we obtain the Euler equations

[Euler]  ∂\vec u/∂t + (\vec u · ∇)\vec u + ∇p = 0,   ∇ · \vec u = 0.   (4.2)

Figure 4.1: Navier-Stokes solution at time 10.

We will obtain a probabilistic representation for the Navier-Stokes equations. Using this representation we show the local existence (in time) of classical solutions for the Navier-Stokes equations. However, we start with the Burgers equation.
Figure 4.2: Stokes problem solution.
4.1 The 1-dimensional Burgers equation

We start with the Cauchy problem for the inviscid Burgers equation, but the representation is for the viscous one.^1 So, given an initial value f ∈ C_b^2, we consider the equations

v_t + v v_x = 0,   (4.3)

v_t + v v_x = (ν/2) v_xx.   (4.4)

The idea is to use the Hopf-Cole transformation and the representation for the heat equation.
Lemma 4 (Hopf-Cole). Let u(t, x) be a classical solution of the heat equation with viscosity ν/2. Then

v = −ν(log u)_x   (4.5)

is a classical solution of the viscous Burgers equation (4.4).
Proof. Calculating the derivatives of (4.5) we obtain

v_t = −ν (u_xt u − u_t u_x)/u²,

v_x = −ν (u_xx u − u_x²)/u² = −ν u_xx/u + v²/ν,

v_xx = −ν (u_xxx u − u_x u_xx)/u² + 2v v_x/ν,

and, using that u(t, x) is a solution of the heat equation,

(ν/2) v_xx = −ν (u_xt u − u_x u_t)/u² + v v_x = v_t + v v_x.

Thus v(t, x) solves the viscous Burgers equation.
1 Previously we said (Example 1) that if we do not have diffusion, the measure on the continuous functions is singular, in the sense that it is supported on a single function.
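The Hopf-Cole lemma can be sanity-checked numerically on an explicit heat-equation solution. The sketch below takes u(t, x) = 1 + e^{−x + νt/2} (an illustrative choice, not from the thesis; it satisfies u_t = (ν/2)u_xx) and verifies with finite differences that v = −ν(log u)_x solves v_t + v v_x = (ν/2) v_xx.

```python
# Finite-difference check of the Hopf-Cole lemma: for the explicit heat
# solution u(t,x) = 1 + exp(-x + nu*t/2), the transform v = -nu (log u)_x
# should satisfy v_t + v v_x = (nu/2) v_xx.  The chosen u is illustrative.
import math

NU = 0.8

def u(t, x):
    return 1.0 + math.exp(-x + NU * t / 2.0)

def v(t, x):
    """Hopf-Cole transform v = -nu (log u)_x, written in closed form."""
    e = math.exp(-x + NU * t / 2.0)
    return NU * e / (1.0 + e)

def burgers_residual(t, x, h=1e-4):
    """Central-difference evaluation of v_t + v v_x - (nu/2) v_xx."""
    vt = (v(t + h, x) - v(t - h, x)) / (2 * h)
    vx = (v(t, x + h) - v(t, x - h)) / (2 * h)
    vxx = (v(t, x + h) - 2 * v(t, x) + v(t, x - h)) / h ** 2
    return vt + v(t, x) * vx - 0.5 * NU * vxx

if __name__ == "__main__":
    print(abs(burgers_residual(0.5, 0.2)))  # close to zero
```

The transformed field here is a viscous traveling wave, which also illustrates the profile plotted in Figure 3.1 for the Fisher case.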
Figure 4.3: Inviscid Burgers equation at different times.
Proposition 5. Let v(t, x) be the solution of the viscous Burgers equation (4.4) with initial data f ∈ C_b^2. Then the following representation formula holds:

v(t, x) = −ν (log E_x[exp(−(1/ν) ∫_{−∞}^{√ν W_0^t(x)} f(s) ds)])_x.   (4.6)

Proof. We have that

u(t, x) = exp(−(1/ν) ∫_{−∞}^x v(t, s) ds)

solves the heat equation

u_t = (ν/2) u_xx

with initial data

u_0(x) = exp(−(1/ν) ∫_{−∞}^x f(s) ds).

We know (Chapter 3) that u(t, x) has the representation

u(t, x) = E_x[u_0(√ν W_0^t(x))] = E_x[exp(−(1/ν) ∫_{−∞}^{√ν W_0^t(x)} f(s) ds)].

We use the formula (4.5) to conclude

v(t, x) = −ν (log E_x[exp(−(1/ν) ∫_{−∞}^{√ν W_0^t(x)} f(s) ds)])_x.   (4.7)
Figure 4.4: Burgers equation with different dissipation rates.
4.2 The d-dimensional Burgers equations

We consider now the d-dimensional Burgers equations (now \vec v is a vector)

\vec v_t + (\vec v · ∇)\vec v = 0   (4.8)

and the viscous case

\vec v_t + (\vec v · ∇)\vec v = νΔ\vec v,   (4.9)

with initial data \vec f ∈ C_b^2.
First, we suppose that we have a potential solution, i.e. \vec v(t, x) = ∇H(t, x). This hypothesis, which physically means an irrotational flow, gives us the Hopf-Cole transformation. We have to suppose that

\vec f(x) = ∇H_0(x).

The Hopf-Cole transformation is

∇H(t, x) = \vec v(t, x) = −ν ∇u(t, x)/u(t, x) = −ν∇ log(u(t, x)).   (4.10)

We then have that if u(t, x) is a solution of

u_t = (ν/2) Δu

with initial data

u_0(x) = exp(−H_0(x)/ν),
then \vec v(t, x) is a solution of

\vec v_t = (ν/2)Δ\vec v − (\vec v · ∇)\vec v = (ν/2)Δ\vec v − (1/2)∇|\vec v|²,

with initial data \vec f(x) = ∇H_0(x).
Remark 17. The last equality shows that the two possible generalizations of the inertial term of the Burgers equation, (\vec v · ∇)\vec v and (1/2)∇|\vec v|², coincide for gradient fields. In other cases this is not true.
Now we use the representation of the heat equation (see Chapter 3),

u(t, x) = E_x[u_0(√ν W_0^t(x))] = E_x[exp(−(1/ν) H_0(√ν W_0^t(x)))].

We conclude, using the formula (4.10),

\vec v(t, x) = −ν∇ log E[exp(−(1/ν) H_0(√ν(B_t + x)))].   (4.11)
Now we consider the general case (a non-potential flow), so the initial value is not necessarily a gradient, \vec f(x) ≠ ∇H_0(x), but we suppose that our initial data is smoother. We consider, as the spatial domain, the set T^d. This domain is bounded, but we do not need stopping times because of the periodic boundary conditions.
The idea is to start with the inviscid equation and consider particle trajectories with white noise. Finally we consider the noise

√(2ν) d\vec W,

with generator

((√(2ν))²/2) Δ = νΔ.

Then, taking expectations, we obtain the solution of the viscous equation.
Then, taking expectations, we obtain the solution to the viscid equation.
Remark 18 We can understand the Itô diffusions as ’random characteristic curves’.
We consider the inviscid Burgers equation (4.8). We do not have pressure
nor dissipative terms and so the velocity can be transported by the flow.
~ a) is the fluid mapping,2~v (t, X(t, a)) is constant in time, and so
If X(t,
we have ~v (t, X(t, a)) = f (a). Summarizing, ’we go back to labels and we see
there the initial velocity’. This is the method of characteristics. Thus the
system
~ ′ (t, a) = ~v (t, X(t,
~ a)), ~v (t, X(t, a)) = f (a)
X
~
with initial data X(0,
a) = a is equivalent to the Burgers equation (4.8)
before the formation of shocks.
~ a) is, if we fix t = s, an homeomorphism between the occupied volumes
The flow X(t,
at time 0 and s, and, if we fix a, is the trajectory of this particle (see Figure 4.5).
2
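In one dimension the characteristics are straight lines, X(t, a) = a + t f(a), so the solution satisfies the implicit equation v = f(x − t v). The sketch below (an illustration, not from the thesis) solves this implicit equation by bisection; the datum f(a) = tanh(a) is an assumed choice with f′ ≥ 0, so no shock ever forms.

```python
# Sketch of the method of characteristics for the 1-d inviscid Burgers
# equation (4.8): along X(t,a) = a + t f(a) the velocity is constant, so
# v(t,x) solves v = f(x - t v).  f(a) = tanh(a) is an illustrative datum
# (increasing, hence shock-free) and the equation is solved by bisection.
import math

def f(a):
    return math.tanh(a)

def v(t, x, lo=-1.0, hi=1.0, iters=80):
    """Solve w = f(x - t w) for w by bisection (valid since f is increasing)."""
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if mid - f(x - t * mid) > 0.0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

def residual(t, x, h=1e-5):
    """Finite-difference check that v_t + v v_x = 0."""
    vt = (v(t + h, x) - v(t - h, x)) / (2 * h)
    vx = (v(t, x + h) - v(t, x - h)) / (2 * h)
    return vt + v(t, x) * vx

if __name__ == "__main__":
    print(abs(residual(0.5, 0.3)))  # close to zero
```

A decreasing datum would make w − f(x − tw) non-monotone once t max|f′| > 1, which is exactly the shock-formation time where this construction breaks down.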
Figure 4.5: Flow.

Considering a as points in the initial volume U_0, while x are points in the volume at time t, U_t, we write

\vec A(t, x) = \vec X(t, a)^{−1} = a, so that \vec v(t, x) = \vec f(\vec A(t, x)).

Both volumes U_0 and U_t are the torus T^d, but we need to distinguish the variables. The transformations between the volumes are \vec X(t, a) = x and \vec A(t, x) = a.
Remark 19. We write \vec X(t, a)^{−1} = \vec A(t, x) = a for the spatial inverse. The spatial inverse exists because

∂_t det ∇\vec X = ∇ · \vec v det ∇\vec X ⇒ det ∇\vec X = exp(∫_0^t ∇ · \vec v ds) > 0.
The idea is to perturb the ODE

\vec X′ = \vec v

with the previous white noise. We obtain the SDE

d\vec X = \vec v dt + √(2ν) d\vec W.

We consider ν = 1/2 without loss of generality. The main theorem is
Theorem 24 (Burgers equation). Let \vec f ∈ C^{k+1,α}, k ≥ 1, be a divergence-free vector field, and \vec W_0^t(0) a d-dimensional Wiener process. Let \vec v(t, x), \vec X(t, a) be a solution of the stochastic system

d\vec X = \vec v dt + d\vec W,
\vec A(t, x) = (\vec X(t, a))^{−1},
\vec v = E_x[\vec f(\vec A(t, x))],   (4.12)

with initial data \vec X(0, a) = a and periodic boundary conditions for \vec v(t, x) and \vec X(t, a) − I. Then \vec v(t, x) is a classical solution of (4.9) (with ν = 1/2), with \vec f as initial value.
Proof. We define

\vec v^ω = \vec v(t, x + \vec W_0^t(0))

and \vec Y^ω as the solution of

(\vec Y^ω)′(t, a) = \vec v^ω(t, \vec Y^ω),   \vec Y^ω(0, a) = a.

Let \vec B^ω(t, x) be the spatial inverse of \vec Y^ω. We define

\vec w(t, x − \vec W_0^t(0)) = \vec f(\vec B^ω(t, x − \vec W_0^t(0))) = \vec f(\vec A(t, x)),

where the last equality is a consequence of

\vec X(t, \vec B^ω(t, x − \vec W_0^t(0))) = \vec Y^ω(t, \vec B^ω(t, x − \vec W_0^t(0))) + \vec W_0^t(0) = x;

thus

\vec A(t, x) = \vec B^ω(t, x − \vec W_0^t(0)).
We apply the generalized Itô formula (Theorem A.8) to

\vec w(t, x − \vec W_0^t(0))^ω = \vec f(\vec A(t, x)),

obtaining

\vec w^ω(t, x − \vec W_0^t(0)) − \vec f(x) = ∫_0^t \vec w^ω(ds, x − \vec W_0^s(0)) − ∫_0^t ∇\vec w^ω(s, x − \vec W_0^s(0)) d\vec W + (1/2) ∫_0^t Δ\vec w^ω(s, x − \vec W_0^s(0)) ds + Σ_j [∫_0^t ∂_j \vec w^ω(ds, x − \vec W_0^s(0)), (x − \vec W_0^s(0))^j].
Taking expectations in the previous equation, the resulting left hand side is

\vec v(t, x) − E_x[\vec f(x)].

The right hand side is

E_x[∫_0^t \vec w^ω(ds, x − \vec W_0^s(0))] + E_x[(1/2) ∫_0^t Δ\vec w^ω(s, x − \vec W_0^s(0)) ds],

because the stochastic integral term vanishes after taking the expectation, and the quadratic variation term is zero because \vec w(t, x) is C^1 in time (the differential is given by the transport equation) and \vec w(t, x) ∈ C^{k+1,α} (as smooth as \vec f) in space, so the function and its derivatives up to order k are of bounded variation.
We have that

E_x[(1/2) ∫_0^t Δ\vec w^ω(s, x − \vec W_0^s(0)) ds] = (1/2) ∫_0^t Δ\vec v(s, x) ds.
We still need the convective term. But

\vec w^ω(t, x) = \vec f(\vec B^ω(t, x))

solves

\vec w_t^ω(t, x) + (\vec v^ω(t, x) · ∇)\vec w^ω(t, x) = 0

with initial data \vec f. The term we obtain is

E_x[∫_0^t \vec w^ω(ds, x − \vec W_0^s(0))] = E_x[∫_0^t \vec w_t^ω(s, x − \vec W_0^s(0)) ds]
= ∫_0^t E_x[−(\vec v^ω(s, x − \vec W_0^s(0)) · ∇)\vec w^ω(s, x − \vec W_0^s(0))] ds
= −∫_0^t (\vec v(s, x) · ∇)\vec v(s, x) ds.

We put all these calculations together and take the time derivative.
So far we supposed that we had a solution of the stochastic system, and then we obtained the representation formula. Conversely, if \vec v(t, x) is a classical solution of the equation (4.9), the stochastic system has a solution (see [CI], [Iy] and [Iy2]). The idea is that, if \vec v is a classical solution, the equation

d\vec X = \vec v dt + d\vec W

has a solution, and so there exists \vec A. Moreover,

\vec v(t, x) = E_x[\vec f(\vec A(t, x))]

because of a result (about stochastic partial differential equations) contained in [CI] and the uniqueness of classical solutions for the viscous Burgers equations. If we show that our stochastic system has a solution, then we have a local in time solution of the equation (4.9). It is a local solution because we use a fixed point method to show the existence of solutions of the system (see [Iy]).
Remark 20. The boundary conditions are the natural ones because \vec X(0, a + L\vec e_j) = a + L\vec e_j = \vec X(0, a) + L\vec e_j, and this is the periodicity condition for \vec X(t, a) − I.
Remark 21. We use the notation of [CI] to stress the similarities between this section and the following one.
4.3 The incompressible Navier-Stokes equations

In this section we show a representation formula for the classical solution of the Navier-Stokes equations (4.1) as a stochastic system and a functional integral. If we have a classical solution of (4.1), then the solution satisfies the probabilistic representation. Conversely, if we have a solution of the stochastic system, then \vec u(t, x) is a classical solution of the Navier-Stokes equations. We use this fact to show the existence (local in time) of classical solutions for the Navier-Stokes equations.
We need some preliminary results.^3
The fields \vec v ∈ (L²)^d ∩ (C^∞)^d can be written in an orthogonal way as the sum of a divergence-free field and a gradient field. So we define

Definition 8. We write P for the Leray-Hodge projector, i.e. the operator that, given a field, returns its divergence-free component:

P : (L²)^d ∩ (C^∞)^d → S,

where S denotes the set of divergence-free (solenoidal) fields.
Proposition 6 (Eulerian-Lagrangian formulation). Let k ≥ 0 and f~(x) ∈
~
C k+1,α such that ∇· f(x)
= 0. Then ~u(t, x) satisfies the incompressible Euler
equations (4.2) with an initial datum f~(x) if and only if the pair of functions
~ a) satisfy the stochastic system
~u(t, x), X(t,
~ ′ (t, a) = ~u(t, x)
X
~ x) = X
~ −1 (t, a)
A(t,
~ t (t, x)f~(A(t,
~ x))]
~u(t, x) = P[(∇A)
(4.13)
(4.14)
(4.15)
~
with initial data X(0,
a) = a.
3
We write P the Laray-Hodge projector in the divergence-free fields. Recall that P
denotes the probability.
See [C] for the proof.
Lemma 5. Let $\vec u(t,x)$ be a velocity field. The commutator $[\partial_t + (\vec u \cdot \nabla), \nabla]$ is
$$[\partial_t + (\vec u \cdot \nabla), \nabla] = -(\nabla \vec u(t,x))^t \nabla.$$
Proof.
$$[\partial_t + (\vec u \cdot \nabla), \nabla]\vec f(t,x) = (\partial_t + (\vec u \cdot \nabla))\nabla \vec f - \nabla(\partial_t + (\vec u \cdot \nabla))\vec f(t,x)$$
$$= (\vec u \cdot \nabla)\nabla \vec f(t,x) - (\vec u \cdot \nabla)\nabla \vec f(t,x) - (\nabla \vec u(t,x))^t \nabla \vec f(t,x) = -(\nabla \vec u(t,x))^t \nabla \vec f(t,x).$$
Lemma 6. Given a Lipschitz, divergence-free field $\vec u(t,x)$, and functions $\vec X(t,a)$ and $\vec A(t,x)$ defined by
$$\vec X'(t,a) = \vec u(t, \vec X(t,a)), \qquad \vec A(t,\cdot) = \vec X^{-1}(t,\cdot), \qquad \vec X(0,a) = a,$$
we define $\vec v(t,x)$ as the solution of the evolution equation
$$(\partial_t + (\vec u \cdot \nabla))\vec v(t,x) = \vec z(t,x)$$
for a certain field $\vec z$, with initial datum $\vec v_0$. Then if we define $\vec w(t,x)$ as
$$\vec w(t,x) = P[(\nabla \vec A)^t(t,x)\,\vec v(t,x)]$$
we have that $\vec w(t,x)$ is the solution of
$$(\partial_t + (\vec u(t,x) \cdot \nabla))\vec w(t,x) + (\nabla \vec u(t,x))^t \vec w(t,x) + \nabla p(t,x) = (\nabla \vec A)^t \vec z(t,x),$$
$$\nabla \cdot \vec w(t,x) = 0,$$
$$\vec w(0,x) = P\vec v_0(x).$$
See [CI] for the proof. In the proof we use the particular case $\vec v(t,x) = \vec f(\vec A(t,x))$ and $\vec z = 0$.
Theorem 25 (Navier-Stokes equations). Let $\vec f \in C^{k+1,\alpha}$, $k \ge 1$, be a given solenoidal field, and $\vec W_0^t(0)$ a $d$-dimensional Wiener process. Let the pair $\vec u(t,x)$, $\vec X(t,a)$ be a solution of the stochastic system
$$d\vec X = \vec u\, dt + d\vec W, \qquad (4.16)$$
$$\vec A = \vec X^{-1}, \qquad (4.17)$$
$$\vec u = E_x[P[(\nabla \vec A)^t \vec f(\vec A)]] \qquad (4.18)$$
with initial datum $\vec X(0,a) = a$ and such that $\vec u$ and $\vec X - I$ are periodic on the boundary. Then $\vec u$ satisfies the incompressible Navier-Stokes equations with $\vec f$ as initial data.
Proof. We define $\vec u^\omega$ as
$$\vec u^\omega(t,x) = \vec u(t, x + \vec W_0^t(0)).$$
Let $\vec Y^\omega$ be the solution of
$$\vec Y'(t,a) = \vec u^\omega(t, \vec Y^\omega(t,a)), \qquad \vec Y^\omega(0,a) = a.$$
Let $\vec B^\omega(t,x)$ be the spatial inverse of $\vec Y^\omega$. We observe that
$$\vec X(t, \vec B^\omega(t, x - \vec W_0^t(0))) = \vec Y^\omega(t, \vec B^\omega(t, x - \vec W_0^t(0))) + \vec W_0^t(0) = x,$$
so
$$\vec A(t,x) = \vec B^\omega(t, x - \vec W_0^t(0)).$$
If we write $\theta_x h(y) = h(y - x)$ for the translation, then
$$\vec A = \theta_{\vec W_0^t(0)} \vec B.$$
We define $\vec w^\omega$ as
$$\vec w^\omega(t,x) = P[(\nabla \vec B^\omega)^t \vec f(\vec B^\omega(t,x))].$$
Applying lemma 6 in the particular case $\vec z = 0$ and $\vec v(t,x) = \vec f(\vec A(t,x))$ we have
$$(\partial_t + (\vec u^\omega(t,x) \cdot \nabla))\vec w^\omega(t,x) + (\nabla \vec u^\omega(t,x))^t \vec w^\omega(t,x) + \nabla q^\omega(t,x) = 0, \qquad (4.19)$$
$$\nabla \cdot \vec w^\omega(t,x) = 0,$$
$$\vec w^\omega(0,x) = P\vec f(x).$$
Using the definition of $\vec u$,
$$\vec u = E_x[P[(\nabla \vec A)^t \vec f(\vec A(t,x))]]$$
$$= E_x[P[(\nabla \theta_{\vec W_0^t(0)} \vec B)^t \vec f(\theta_{\vec W_0^t(0)} \vec B(t,x))]]$$
$$= E_x[P[\theta_{\vec W_0^t(0)}((\nabla \vec B)^t \vec f(\vec B(t,x)))]]$$
$$= E_x[\theta_{\vec W_0^t(0)}(P[(\nabla \vec B)^t \vec f(\vec B(t,x))])]$$
$$= E_x[\theta_{\vec W_0^t(0)} \vec w^\omega].$$
The hypothesis $\vec f \in C^{k+1,\alpha}$ and the existence theorem for the stochastic system (4.18) (see the next section for the proof) give us the regularity needed to apply the generalized Itô formula (theorem A.8). Recall that $\vec w^\omega$ plays the role of $F$ in theorem A.8; we need $C^2$ in space and $C^1$ in time. By lemma 6 we have the $C^1$ in time condition, and the existence theorem for the system (4.18) gives us the regularity in space. In addition, $x - \vec W_0^t(0)$ is a martingale, playing the role of $g(t)$ in the theorem.
Applying the generalized Itô formula (theorem A.8) to $\vec w^\omega(t, x - \vec W_0^t(0))$,
$$\vec w^\omega(t, x - \vec W_0^t(0)) - \vec f(x) = \int_0^t \vec w^\omega(ds, x - \vec W_0^s(0)) - \int_0^t \nabla \vec w^\omega(s, x - \vec W_0^s(0))\, d\vec W$$
$$+ \frac12 \int_0^t \Delta \vec w^\omega(s, x - \vec W_0^s(0))\, ds + \int_0^t \left[\partial_j \vec w^\omega(ds, x - \vec W_0^s(0)),\ x - \vec W_0^s(0)^j\right].$$
Because of the regularity of $\vec w^\omega$ we have that the quadratic variation term vanishes. Moreover, after taking the expectation $E_x$ the stochastic integral term also disappears. Thus we obtain
$$\vec u(t,x) - \vec f(x) = E_x\left[\int_0^t \vec w^\omega(ds, x - \vec W_0^s(0))\right] + \frac12 \int_0^t \Delta \vec u(s,x)\, ds.$$
The term with $\vec w^\omega$ is
$$E_x\left[\int_0^t \vec w^\omega(ds, x - \vec W_0^s(0))\right] = E_x\left[\int_0^t \vec w_t^\omega(s, x - \vec W_0^s(0))\, ds\right]$$
$$= -E_x\left[\int_0^t (\vec u(s,x) \cdot \nabla)\vec w^\omega(s, x - \vec W_0^s(0)) + (\nabla \vec u(s,x))^t \vec w^\omega(s, x - \vec W_0^s(0)) + \nabla q^\omega(s, x - \vec W_0^s(0))\, ds\right]$$
$$= -\int_0^t \left[(\vec u \cdot \nabla)\vec u + \nabla p\right] ds$$
where
$$p = \frac12 |\vec u|^2 + E_x[\theta_{\vec W_0^t(0)} q^\omega].$$
Finally, the incompressibility holds because of the incompressibility of $\vec w^\omega$ and $\vec u = E_x \theta_{\vec W_0^t(0)} \vec w^\omega$.
Remark 22 In the previous proof we used the existence theorem for (4.18) (theorem 26 in the following section).
In [Iy] we can find the result for a substance transported by the fluid:

Proposition 7 (Transport). Let $\vec u \in C^1$ be the velocity field of a fluid and $\Theta(t,x)$ a classical solution of
$$\Theta_t(t,x) + (\vec u(t,x) \cdot \nabla)\Theta(t,x) - \nu\Delta\Theta(t,x) = 0, \qquad \Theta(0,x) = f(x).$$
Then
$$\Theta(t,x) = E_x[f(\vec A(t,x))]$$
where $\vec A$ is as in theorem 25.
There is a result for the vorticity.

Proposition 8 (Vorticity). Let $\vec V(t,x)$ be the vorticity of a fluid; then
$$\vec V(t,x) = E_x[((\nabla \vec X)\vec V_0)(\vec A(t,x))].$$
If $d = 2$, then
$$\vec V(t,x) = E_x[\vec V_0(\vec A(t,x))].$$
Remark 23 We obtain these formulas by taking the expectation in the corresponding Euler case formulas.
Remark 24 We can use proposition 8 to obtain a second version of theorem 25 using the Biot-Savart law.
Remark 25 We can do the same for an operator with anisotropic diffusion, i.e. a more general operator such as $\sum_{i,j=1}^d a_{i,j}(x) \frac{\partial^2}{\partial x_j \partial x_i}$.
4.4 Proof of local existence for Navier-Stokes
In this section we show the local existence for the stochastic system and hence the local existence of a classical solution of the Navier-Stokes equations (4.1).

Theorem 26 (Existence for the stochastic system). Let $\vec f \in C^{k+1,\alpha}$, $k \ge 1$, be a divergence-free field. Then there is a time $T = T(L, \|\vec f\|_{C^{k+1,\alpha}}, k, \alpha)$, independent of the viscosity $\nu$, and a pair of functions $\vec u, \lambda \in C([0,T], C^{k+1,\alpha})$ such that $\vec u$, $\vec X = I + \lambda$ satisfy the system (4.18). In addition, there exists $\Lambda$ such that $\|\vec u(t)\|_{C^{k+1,\alpha}} \le \Lambda$.
Remark 26 The norm in $C^{k,\alpha}$ is defined as
$$\|\vec u\|_{k,\alpha} = \sum_{|m| \le k} L^{|m|} \sup_{x \in \Omega} |D^m \vec u| + \sum_{|m| = k} L^{k+\alpha} \sup_{x,y \in \Omega} \frac{|D^m \vec u(x,t) - D^m \vec u(y,t)|}{|x - y|^\alpha}.$$
For the norm in $C([0,T], C^{k,\alpha})$ we take the supremum
$$\|\vec u\|_{C([0,T],C^{k,\alpha})} = \sup_{0 \le t \le T} \|\vec u\|_{k,\alpha}.$$
We need some definitions and some bounds. See [Iy] for the proofs of these bounds.

Definition 9. We define the Weber operator $\vec W : C^{k,\alpha} \times C^{k+1,\alpha} \to C^{k,\alpha}$ as
$$\vec W(\vec v, \vec l) = P[(I + (\nabla \vec l)^t)\vec v].$$
Proposition 9. If $k \ge 1$, and $\vec l_1, \vec l_2, \vec v_1, \vec v_2 \in C^{k,\alpha}$ are functions such that $\|\nabla \vec l_i\|_{k-1,\alpha} \le C$, then
$$\|\vec W(\vec v_1, \vec l_1) - \vec W(\vec v_2, \vec l_2)\|_{k,\alpha} \le c(\|\vec v_2\|_{k,\alpha}\|\nabla \vec l_1 - \nabla \vec l_2\|_{k-1,\alpha} + \|\vec v_1 - \vec v_2\|_{k,\alpha}).$$

Lemma 7. Let $k \ge 1$ and $\vec v, \vec l \in C^{k,\alpha}$. Then $\vec W(\vec v, \vec l) \in C^{k,\alpha}$ and we have
$$\|\vec W(\vec v, \vec l)\|_{k,\alpha} \le c(1 + \|\nabla \vec l\|_{k-1,\alpha})\|\vec v\|_{k,\alpha}.$$
Lemma 8. Let $\vec u \in C([0,T], C^{k+1,\alpha})$ and $\vec X(t,a)$ be a solution of the system (4.18), and consider $\lambda = \vec X - I$, $\vec l = \vec A - I$ and $\Lambda = \sup_t \|\vec u\|_{k+1,\alpha}$. Then there exists $c = c(k, \alpha, \Lambda)$ such that the following bounds hold:
$$\|\nabla \lambda\|_{k,\alpha} \le \frac{c\Lambda t}{L} \exp(c\Lambda t/L), \qquad \|\nabla \vec l\|_{k,\alpha} \le \frac{c\Lambda t}{L} \exp(c\Lambda t/L).$$
Lemma 9. Let $\vec u_1, \vec u_2 \in C([0,T], C^{k+1,\alpha})$ be such that $\sup_t \|\vec u_i\|_{k+1,\alpha} \le \Lambda$ and let $\vec X_1, \vec X_2, \vec A_1, \vec A_2$ be functions defined as in (4.18). Then there exist a time $T = T(k, \alpha, \Lambda)$ and a constant $c = c(k, \alpha, \Lambda)$ such that the following bounds hold:
$$\|\vec X_1 - \vec X_2\|_{k,\alpha} \le c \exp(c\Lambda t/L) \int_0^t \|\vec u_1 - \vec u_2\|_{k,\alpha},$$
$$\|\vec A_1 - \vec A_2\|_{k,\alpha} \le c \exp(c\Lambda t/L) \int_0^t \|\vec u_1 - \vec u_2\|_{k,\alpha}$$
for all $t \in [0,T]$.
Proof (of the theorem). Consider a time $T$ (we will take $T$ small, see below) and a number $\Lambda$ (we will take $\Lambda$ big). We consider the spaces
$$U = \left\{\vec u \in C([0,T], C^{k+1,\alpha}),\ \nabla \cdot \vec u = 0,\ \vec u(0,x) = \vec f(x),\ \|\vec u\|_{k+1,\alpha} \le \Lambda\right\}$$
and
$$L = \left\{\vec l \in C([0,T], C^{k+1,\alpha}),\ \|\nabla \vec l\|_{k,\alpha} \le \tfrac12\ \forall t \in [0,T],\ \vec l(0,x) = 0\right\}.$$
If $\vec u \in U$, the system (4.18) has a solution. Thus we can define $\vec X_u$, $\lambda_u = \vec X_u - I$ and $\vec l_u = \vec A_u - I$.
We consider the operator $W : U \to U$,
$$W(\vec u) = E_x \vec W(\vec f(\vec A_u), \vec l_u).$$
We want to show that $W$ is Lipschitz with respect to the norm
$$\|\vec u\|_U = \sup_t \|\vec u\|_{k,\alpha}$$
and then, if we take $T$ small enough, $W$ will be a contraction and by Banach's fixed point theorem it will have a fixed point.
The previous results give us that, if we take $\Lambda$ proportional to a power of $\|\vec f\|_{k+1,\alpha}$,
$$\|W(\vec u_1) - W(\vec u_2)\|_{k,\alpha} \le \frac{c\Lambda}{L} \exp(c\Lambda t/L) \int_0^t \|\vec u_1 - \vec u_2\|_{k,\alpha}$$
and so if $T = T(k, \alpha, L, \Lambda)$ is small, the operator $W$ is a contraction. Applying Banach's fixed point theorem and taking into account that $U$ is closed, we conclude that the sequence given by $\vec u_{n+1} = W(\vec u_n)$ converges to a function $\vec u$ in the $C^{k,\alpha}$ norm. $\vec u$ is a fixed point of the operator $W$ and so a solution of (4.18). $\vec u_n$ converges strongly in the $k$ norm and weakly in the $k+1$ norm, so the limits are the same. Hence $\vec u$ is a $C^{k+1,\alpha}$ function and we have the bound $\|\vec u(t)\| \le \Lambda$ because $\vec u \in U$.
As we have a solution of (4.18), applying theorem 25 we obtain a local in time classical solution of (4.1).
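The scheme used in the proof, iterating the operator until Banach's theorem produces the fixed point, can be illustrated with a toy scalar contraction (the map and tolerances below are illustrative; nothing here is specific to the operator $W$):

```python
import math

def fixed_point(T, x0, tol=1e-12, max_iter=500):
    """Banach fixed-point iteration x_{n+1} = T(x_n)."""
    x = x0
    for _ in range(max_iter):
        x_new = T(x)
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# cos is a contraction on [0, 1] (|cos'| = |sin| < 1 there), so the
# iterates converge geometrically to the unique solution of cos(x) = x.
x_star = fixed_point(math.cos, 1.0)
```

The geometric convergence rate is the Lipschitz constant of the map, exactly as in the estimate for $W$ above.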
Chapter 5
Differential games and equations
In the previous chapters we have studied how the paths of certain Markov processes are related (they are the characteristic curves) to elliptic and parabolic equations. We showed (chapter 3, section 2) that for some semilinear equations we can find a representation formula if we take more complicated Markov processes. In this chapter we study the relationship between certain differential games (we call them tug of war, and they are most of the time Markov processes) and certain equations: the 1-laplacian, the p-laplacian and the ∞-laplacian. We prove that the games' positions are the characteristic curves for these operators.
In this chapter we follow closely [PSSW], [Ob], [Ev3], [BEJ], [ACJ], [KS1] and [KS2].
5.1 The operators
Definition 10. The operator
$$\Delta_\infty u = \sum_{i,j=1}^d \frac{u_{x_i} u_{x_j}}{|\nabla u|^2} u_{x_i,x_j}$$
is the ∞-laplacian.
The operator
$$\Delta_p u = \nabla \cdot (\nabla u |\nabla u|^{p-2})$$
is the p-laplacian.
The operator
$$\Delta_1 u = \nabla \cdot \frac{\nabla u}{|\nabla u|}$$
is the 1-laplacian.
We remark that $\Delta_1$ is a diffusion orthogonal to the gradient. Indeed, if we consider the $d = 2$ case, the non-divergence form of the operator is given by the matrix
$$A = \frac{1}{|\nabla u|} \begin{pmatrix} 1 - \dfrac{u_{x_1}^2}{|\nabla u|^2} & -\dfrac{u_{x_1} u_{x_2}}{|\nabla u|^2} \\[2mm] -\dfrac{u_{x_1} u_{x_2}}{|\nabla u|^2} & 1 - \dfrac{u_{x_2}^2}{|\nabla u|^2} \end{pmatrix}.$$
At each point $x$ we consider the basis given by $\nabla u$ and $\nabla u^\perp$. If we calculate $A\nabla u$ we see that it is $0$, and so there is no diffusion in this direction.
Conversely, the ∞-laplacian is a diffusion only in the gradient direction. The matrix now is
$$A = \begin{pmatrix} \dfrac{u_{x_1}^2}{|\nabla u|^2} & \dfrac{u_{x_1} u_{x_2}}{|\nabla u|^2} \\[2mm] \dfrac{u_{x_1} u_{x_2}}{|\nabla u|^2} & \dfrac{u_{x_2}^2}{|\nabla u|^2} \end{pmatrix}.$$
We have that $A\nabla u^\perp = 0$, and so there is diffusion only in the gradient direction.
The previous calculations are only formal, because if $\nabla u = 0$ they do not make sense. However, they give us some intuition.
Remark 27 These operators are used in computer vision, to preserve contours ($\Delta_1$) or to blur ($\Delta_\infty$).
Figure 5.1: An ∞-harmonic function.
The geometrical interpretation of $u$ satisfying $\Delta_1 u = 0$ is that the level curves of $u$ have mean curvature $0$. The variational interpretation is that $u$ is the minimizer of
$$J(u) = \int_U |\nabla u|$$
with given boundary data.
For the p-laplacian the variational interpretation is that $u$ is the minimizer of
$$J(u) = \int_U |\nabla u|^p$$
with given boundary data.
The variational interpretation of $\Delta_\infty u = 0$ in $U$ with Lipschitz boundary values is that $u$ satisfies the condition
$$\mathrm{Lip}_V(u) = \mathrm{Lip}_{\partial V}(u) \quad \forall V \subset U$$
where $U$, $V$ are domains and $\mathrm{Lip}_V(u)$ denotes the Lipschitz constant of the function $u$ in the domain $V$. Such functions are called absolutely minimizing Lipschitz extensions.
Throughout this chapter we consider a bounded domain $U$ and a continuous function $g$ defined on $\partial U$, and we study the following equations
$$Lu = 0 \text{ in } U, \qquad u = g \text{ on } \partial U$$
where $L$ is one of the differential operators above.
5.2 The games
All games considered are two-player, zero-sum games, so one player pays the other. We suppose that player 2 pays player 1 the appropriate quantity (if it is negative, he earns money). In all games the players move a token in the considered domain, and when the token hits the boundary the game ends and player 2 pays. Each game is determined by the positions of the token at each turn, $x_k$. Both players have strategies, but it seems reasonable that the 'good' strategies will be Markovian. The p-laplacian game and the 1-laplacian game are studied briefly; we are mainly interested in the ∞-laplacian. We suppose the domain $U$ is as regular as we need to take the required limits.
5.2.1 'Tug of war'
We are going to study this game in full detail. Let $U \subset \mathbb R^d$, $x_0 \in U$ be as before and let $g$ be a Lipschitz function supported on the boundary of $U$. Each player chooses a vector $\vec a_k^i \in B_0(\varepsilon)$ ($k$ is the turn and $i$ is the player). Then a fair coin is tossed to decide which player moves the token. If player 1 wins the toss, the new position will be $x_{k+1} = x_k + \vec a_k^1$. We define the history $h_k = (x_0, \vec a_0, x_1, \vec a_1, \ldots)$ where $\vec a_k$ is the vector of the player who wins the $k$-th turn. Let $H_k$ be the space of all possible histories up to the $k$-th turn and let $H_\infty$ be the space of all possible histories. We observe that the space $H_k$ is a product space. We can understand the payoff function as a function
$$g : H_\infty \to \mathbb R.$$
Usually the vector $\vec a_k^i$ depends on the position $x_k$, i.e. the process will be Markovian; however, we consider a more general dependence, allowing the vector to depend on the full previous history.
We define a strategy $S_k^i$ ($i = 1, 2$ labels the players) as a function $S_k^i : H_k \to B_0(\varepsilon)$; the function gives the next movement of player $i$. We define $S^i = \{S_k^i\}_k$. Then the initial point $x_0$ and the strategies of both players define a probability on $H_\infty$ (use Kolmogorov's extension theorem to prove it).
If we write $x_\tau \in \partial U$ for the point where the game ends, then given the strategies $S_1$, $S_2$ we define the expected payoffs for both players as
$$V_{x_0,i}(S_1, S_2) = E_{x_0}^{S_1,S_2}[F(x_\tau)]$$
if the game ends almost surely. If the game does not end almost surely, $V_{x_0,1}(S_1, S_2) = -\infty$ and $V_{x_0,2}(S_1, S_2) = \infty$.¹
We define the (discrete) game's value for player 1 as
$$u_1^\varepsilon = \sup_{S_1} \inf_{S_2} V_{x_0,1}(S_1, S_2).$$
Roughly speaking, $\inf_{S_2} V_{x_0,1}(S_1, S_2)$ is the minimum quantity that player 1 wins if we suppose that player 2 plays optimally; with the supremum, player 1 maximizes this quantity.
For player 2 the definition is
$$u_2^\varepsilon = \inf_{S_2} \sup_{S_1} V_{x_0,2}(S_1, S_2).$$
Roughly speaking, $\sup_{S_1} V_{x_0,2}(S_1, S_2)$ is the maximum quantity that player 1 can force player 2 to pay, and with the infimum player 2 minimizes this quantity if player 1 plays optimally. Thus, player 1 maximizes his worst case and player 2 minimizes his worst case.
We have $u_1^\varepsilon(x) \le u_2^\varepsilon(x)$. If these quantities are the same, then the (discrete) game has a value. We write $u^\varepsilon$ for the value. We expect that, taking the limit $\varepsilon \to 0$, our value (for the discrete game) converges, in a certain sense, to the solution $u$ of
$$\Delta_\infty u = 0, \qquad u|_{\partial U} = g. \qquad (5.1)$$
¹As in chapter 2, $\tau$ is a stopping time.
This equation has a dynamic programming principle useful for numerical schemes (see appendix C and [Ob]). This result is, in a certain sense, the mean value property for infinity harmonic functions.

Lemma 10. We consider the tug of war game without running payoff ($f = 0$). Then the (discrete) game's value function, $u(x) = u_1^\varepsilon(x)$, satisfies
$$u(x) = \frac12\left(\sup_{y \in B_x(\varepsilon)} u(y) + \inf_{y \in B_x(\varepsilon)} u(y)\right) \qquad (5.2)$$
for all $x \in U$. If the game does not end, then $u(x) = -\infty$. The same holds for $v(x) = u_2^\varepsilon(x)$, with the convention that if the game does not end then $v(x) = \infty$.
Proof. To prove it we have to take into account the possible coin results.
Remark 28 We want to point out the similarities with chapter 2. However, in the second chapter the processes are pure diffusions, without strategies. In chapter 2 we had functional integrals, and now we integrate over the histories.
Remark 29 We expect that the good strategies will be Markovian, but we do not restrict ourselves to the Markovian ones.
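The mean value property (5.2) can be turned directly into a numerical scheme: freeze the boundary values and iterate $u \mapsto \frac12(\sup_{B_x(\varepsilon)} u + \inf_{B_x(\varepsilon)} u)$ until it stabilizes (this is the idea behind the schemes in [Ob]). A one-dimensional sketch, where the fixed point must be the linear interpolant of the boundary data; the grid size and tolerances are illustrative:

```python
import numpy as np

def infinity_harmonic_1d(g_left, g_right, n=21, eps_steps=1,
                         tol=1e-10, max_sweeps=100000):
    """Iterate the dynamic programming principle (5.2),
    u(x) = (sup_{B_x(eps)} u + inf_{B_x(eps)} u) / 2,
    on a uniform grid of n points with fixed boundary values."""
    u = np.zeros(n)
    u[0], u[-1] = g_left, g_right
    for _ in range(max_sweeps):
        u_old = u.copy()
        for i in range(1, n - 1):
            lo, hi = max(0, i - eps_steps), min(n - 1, i + eps_steps)
            ball = u_old[lo:hi + 1]            # the discrete ball B_x(eps)
            u[i] = 0.5 * (ball.max() + ball.min())
        if np.max(np.abs(u - u_old)) < tol:
            break
    return u

# With boundary values 0 and 1 the infinity harmonic function is linear
u = infinity_harmonic_1d(0.0, 1.0)
```

In higher dimensions the same iteration runs over discrete balls on a grid; only the neighborhood changes.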
5.2.2 Approximations by SDE to Δ∞
In [BEJ] we can read that the stochastic differential equation
$$d\vec X = \vec\eta(t)\, dt + \vec\zeta(t)\, d\vec W, \qquad X_0 = x_0$$
is related with $\Delta_\infty$. Let $U \subset \mathbb R^d$, $\varepsilon > 0$, $g$ and $x_0 \in U$ be as before. Player 1 chooses $\vec\eta$, and player 2 chooses $\vec\zeta$. The total payoff function (final and running payoff) for player 1 is
$$g(X_0^{\tau_x}(x)) - \int_0^{\tau_x} \vec\eta(s) \cdot \vec\zeta(s) + \frac{\varepsilon^2}{4} |\vec\eta(s)|^2\, ds$$
where $\tau_x$ is the 'hit the boundary' stopping time.
This stochastic game has a (discrete) value that converges to the solution $u$ of (5.1).
In [PSSW] we can see another SDE giving the infinity-laplacian. Let $u \in C^2$ be an infinity harmonic function. We define
$$\vec r(\vec Y_0^t(x)) = |\nabla u(\vec Y_0^t(x))|^{-1} \nabla u(\vec Y_0^t(x))$$
and
$$\vec s(\vec Y_0^t(x)) = |\nabla u(\vec Y_0^t(x))|^{-2} D^2 u(\vec Y_0^t(x)) \nabla u(\vec Y_0^t(x)) - \vec P$$
where $\vec P$ is its projection on $\nabla u(\vec Y_0^t(x))$ (so $\vec s$ is orthogonal to the gradient).
We define the SDE
$$d\vec X_0^t(x) = \vec r(\vec X_0^t(x))\, dW + \vec s(\vec X_0^t(x))\, dt, \qquad X_0 = x.$$
Applying the Itô formula to $u(\vec X_0^t(x))$ we obtain
$$u(\vec X_0^t(x)) - u(x) = \frac12 \int_0^t \Delta_\infty u(\vec X_0^s(x))\, ds + \int_0^t \nabla u \cdot \vec r(\vec X_0^s(x))\, dW.$$
Taking expectations we conclude
$$E_x[u(\vec X_0^t(x))] = u(x).$$
5.2.3 Existence of the game's value for the 'Tug of war'
In this section we prove the existence of a game's value for the (discrete) tug of war game without running cost.
Theorem 27. Let $U \subset \mathbb R^d$ be a domain and $1 \gg \varepsilon > 0$ a fixed number. We consider the tug of war game with $f = 0$ and $g$ a Lipschitz function on $\partial U$ bounded below (or above). Then $u_1^\varepsilon = u_2^\varepsilon$ and the game has a value.

Proof. If $g$ is bounded above and not below, we consider $-g$ and exchange the players, so we can restrict ourselves to the case of $g$ bounded below.
We have to see that $u_2^\varepsilon \le u_1^\varepsilon$. Note that player 1 is able to finish the game almost surely, because player 1 will eventually obtain as long a run of heads (or tails) as needed. Thus $u_1^\varepsilon \ge \inf_{x \in \partial U} g(x)$.
Let $x_0, x_1, \ldots$ be the game's positions at the different turns. We write $u^\varepsilon = u_1^\varepsilon$. We consider the oscillation
$$\delta(x) = \sup_{y \in B_x(\varepsilon)} |u^\varepsilon(y) - u^\varepsilon(x)|.$$
We define the set
$$X_0 = \{x \in U : \delta(x) \ge \delta(x_0)\} \cup \partial U$$
and the index $j_n = \max\{j \le n : x_j \in X_0\}$. This index gives us the last turn spent in the set $X_0$. Let $v_n = x_{j_n}$ be the last position in $X_0$. $X_0$ is the set of points whose oscillation is at least that of the initial point.
Thanks to the dynamic programming principle we have
$$2u^\varepsilon(x_n) = \sup_{y \in B_{x_n}(\varepsilon)} u^\varepsilon(y) + \inf_{y \in B_{x_n}(\varepsilon)} u^\varepsilon(y) \iff \sup_{y \in B_{x_n}(\varepsilon)} \{u^\varepsilon(x_n) - u^\varepsilon(y)\} = \delta(x_n),$$
and so if the players choose the strategy of maximizing (player 1) or minimizing (player 2) the function $u^\varepsilon$, the function $\delta$ will not decrease, because the new position has, at least, the same oscillation as the previous one. Thus, with the previous strategies, the game always stays in $X_0$.
We consider the following strategy for player 2: if $v_n \ne x_n$, i.e. we are not in $X_0$, player 2 moves towards $y$, the point of $X_0$ at minimum distance from $x_n$. When $x_n = v_n$, player 2 chooses the new position so that $u^\varepsilon$ is minimized. For player 1 we consider all the possible strategies, and let the game begin. We remark that player 2 does not play in an optimal way, because $X_0$ contains the boundary, where the game ends and player 2 has to pay; for player 2 it would be better that the game never ends, since in that case his payoff is $u_2^\varepsilon = \infty$. We mention that this strategy is Markovian.
Let $d$ be the distance measured in $\varepsilon$-steps. We define $d_n = d(x_n, v_n)$, the distance computed passing through the previous positions, and
$$m_n = u^\varepsilon(v_n) + \delta(x_0) d_n.$$
We have
$$u^\varepsilon(x_n) = u^\varepsilon(v_n) + (u^\varepsilon(x_{j_n+1}) - u^\varepsilon(v_n)) + (u^\varepsilon(x_{j_n+2}) - u^\varepsilon(x_{j_n+1})) + \cdots + (u^\varepsilon(x_n) - u^\varepsilon(x_{n-1})) \qquad (5.3)$$
$$\le u^\varepsilon(v_n) + \sum_{k=j_n+1}^{n} \delta(x_{k-1}) \qquad (5.4)$$
$$\le u^\varepsilon(v_n) + \delta(x_0) d_n \qquad (5.5)$$
$$\le m_n \qquad (5.6)$$
(because the intermediate positions are not in $X_0$).
$m_n$ is a supermartingale. Indeed, suppose first that $x_n \in X_0$ and player 1 moves. There are two possibilities: $x_{n+1}$ is or is not in $X_0$.
If $x_{n+1} \in X_0$ then
$$m_{n+1} = u^\varepsilon(v_{n+1}) - u^\varepsilon(x_n) + u^\varepsilon(x_n) \le u^\varepsilon(x_n) + \delta(x_n) = m_n + \delta(x_n).$$
If $x_{n+1} \notin X_0$ then
$$m_{n+1} = u^\varepsilon(v_{n+1}) + \delta(x_0) d_{n+1} \le u^\varepsilon(x_n) + \delta(x_n) \le m_n + \delta(x_n).$$
Supposing now that $x_n \in X_0$ and player 2 moves, we have
$$u^\varepsilon(x_{n+1}) = u^\varepsilon(x_n) - \delta(x_n) = u^\varepsilon(v_n) - \delta(x_n)$$
and in the case $x_{n+1} \in X_0$ the previous equation gives
$$m_{n+1} = m_n - \delta(x_n) \le m_n - \delta(x_0).$$
If $x_{n+1} \notin X_0$ we have a contradiction, because
$$\delta(x_n) \ge \delta(x_0) > \delta(x_{n+1})$$
and this is not possible because of the dynamic programming principle: this principle gives us that the oscillation at $x_{n+1}$ (if we choose it maximizing or minimizing $u^\varepsilon$) does not decrease.
If we suppose that $x_n \notin X_0$ and player 2 moves, we have, using the previously defined strategy, and if $v_{n+1} \ne x_{n+1}$, the following inequality:
$$m_{n+1} = u^\varepsilon(v_{n+1}) + \delta(x_0) d_{n+1} \le u^\varepsilon(v_{n+1}) + \delta(x_0) d(v_{n+1}, x_n) - \delta(x_0) d(x_{n+1}, x_n) \le m_n - \delta(x_0).$$
If $v_{n+1} = x_{n+1}$ then
$$m_{n+1} = u^\varepsilon(x_{n+1}) - u^\varepsilon(x_n) + u^\varepsilon(x_n) \le u^\varepsilon(x_n) + \delta(x_n) \le m_n + \delta(x_0) \quad \text{(using (5.6) and the definition of } X_0\text{)}.$$
We consider the last case: $x_n \notin X_0$ and player 1 plays.
If player 1 enters $X_0$ then
$$m_{n+1} = u^\varepsilon(x_{n+1}) - u^\varepsilon(x_n) + u^\varepsilon(x_n) \le u^\varepsilon(x_n) + \delta(x_n) \le m_n + \delta(x_0) \quad \text{(using (5.6) and the definition of } X_0\text{)}.$$
If player 1 does not enter $X_0$ then
$$m_{n+1} = u^\varepsilon(v_{n+1}) + \delta(x_0) d(v_{n+1}, x_{n+1}) \le u^\varepsilon(v_n) + \delta(x_0) d(v_n, x_n) + \delta(x_0) d(x_n, x_{n+1}) \le m_n + \delta(x_0).$$
Putting together all the previous calculations, we have that if player 2 moves
$$m_{n+1} \le m_n - \delta(x_0)$$
and if player 1 moves
$$m_{n+1} \le m_n + \delta(x_0).$$
Thus
$$E[m_{n+1}\,|\,m_0, m_1, \ldots, m_n] \le m_n + \frac12(\delta(x_0) - \delta(x_0)) = m_n. \qquad (5.7)$$
Using the martingale convergence theorem, if $\tau_{x_0}$ is the previous stopping time when the game starts at $x_0$, we know that the limit $\lim_{n\to\infty} m_{\min(n,\tau_{x_0})}$ exists. This, together with $m_{n+1} \le m_n - \delta(x_0)$ when player 2 moves, implies that the game ends almost surely.
Then the expected payoff with this strategy for player 2 satisfies
$$E_{x_0}[u^\varepsilon(x_{\tau_{x_0}})] = E_{x_0}[\lim_{n\to\infty} u^\varepsilon(x_{\min(\tau_{x_0},n)})] \le E_{x_0}[m_{\min(\tau_{x_0},n)}] \text{ (using (5.6) and Fatou)} \le m_0 = u^\varepsilon(x_0) \text{ (supermartingale).}$$
This is better than $u_2^\varepsilon$, and so
$$u_2^\varepsilon \le u_1^\varepsilon.$$
The oscillation can be zero. In this case the strategy for player 2 is to advance straight towards the boundary until arriving at some point $x_0'$ with non-zero oscillation (but $u^\varepsilon(x_0) = u^\varepsilon(x_0')$). Then the strategy becomes the previous one.
We need a theorem on convergence to the continuous game's value.

Theorem 28. Let $g$ be a function bounded below. Then the continuous game's value, $u$, exists and
$$\|u - u^\varepsilon\|_\infty \to 0$$
as $\varepsilon \to 0$. In addition, $u$ is continuous.

And this game's value is a solution of (5.1).

Theorem 29. Let $U \subset \mathbb R^d$ be a bounded domain. Let $g$ be a Lipschitz function supported on the boundary. Then $u$, the continuous game's value, is the unique viscosity solution of (5.1).
5.2.4 'Tug of war with noise'
We consider now the 'Tug of war with noise' game. In this case the operator is the p-laplacian. Let $U \subset \mathbb R^d$, $x_0 \in U$ and $g$ be as before ($f$ is now identically $0$). We consider a probability measure $\mu$, uniform on the sphere of radius $r = \sqrt{(d-1)q/p}$ (where $p^{-1} + q^{-1} = 1$) in the hyperplane orthogonal to $\vec e_1$. We define $\mu_{\vec v}(S) = \mu(\Psi^{-1}(S))$ where $\Psi(\vec v) = \vec e_1$.² At each turn $k$ a fair coin is tossed; this coin assigns the turn to one of the players. The player having the turn chooses $\vec v_k$, with norm less than or equal to $\varepsilon$. The new position is $x_k = x_{k-1} + \vec v_k + \vec z_k$, where $\vec z_k$ is a random vector distributed according to $\mu_{\vec v_k}$. If we are at distance less than or equal to $(1+r)\varepsilon$ from the boundary, the player having the turn has to move to a boundary point $x_k$ with $|x_k - x_{k-1}| \le (1+r)\varepsilon$, and the game ends.
We define $u_1^\varepsilon(x)$ and $u_2^\varepsilon(x)$ as the minimum expected players' payoffs if the game starts at $x_0 = x$. If they are the same, then the game has a value.
We suppose that the (discrete) game has a value; then the limit at each point, $u(x) = \lim_{\varepsilon\to 0} u_1^\varepsilon(x)$, is the function indicating the minimum expected value of the payoffs if the (continuous) game starts at $x_0 = x$.
We have the following result: the function $u(x)$ verifies
$$\Delta_p u = 0, \qquad u|_{\partial U} = g. \qquad (5.8)$$
Remark 30 Showing that the discrete game has a value, taking the limit, and identifying the operator obtained are established results; see [KS1], [KS2] and [PS] for the proofs.
²See [PS] for more details about this probability measure.
Figure 5.2: The possible positions for the 'Tug of war with noise' game.
5.2.5 Spencer game
We now describe the 'Spencer game'. In this case the operator is the 1-laplacian. We are given a domain $U$, a point $x_0 \in U$ where the game starts, a continuous function $g$ defined on the boundary of $U$, which will be our final payoff, and $f : U \to \mathbb R$, the running payoff (paid at each movement by the player who has the turn). So, if $g = 0$ and $f$ is a positive constant $c$, the payoff at turn $k$ is $f(x_k) = c$ and, when the token hits the boundary, $g(x_k) = 0$. In this case player 1 wants to maximize the number of steps before hitting the boundary; on the other hand, player 2 wants to reach the boundary as soon as possible, to pay the minimum quantity. At each turn $k$ player 2 chooses a vector $\vec v_k$ with a fixed norm $\varepsilon$; player 1 chooses a direction for this vector, $\sigma_k \in \{1, -1\}$. The new position is $x_k = x_{k-1} + \sigma_k \vec v_k$.
We define $u_1^\varepsilon(x)$ and $u_2^\varepsilon(x)$ as the minimum expected values of the players' payoffs if the game starts at $x_0 = x$.³ When they are the same quantity we say that the game has a value, $u(x_0)$. We can understand $u$ as the money the players would pay to the casino to enter the game. It is the expected payoff, so player 1 wants to maximize and player 2 wants to minimize this value.
To obtain the differential operator we have to take the limit $\varepsilon \to 0$.
³This definition is not rigorous. See the ∞-laplacian case for a rigorous one.
We suppose the (discrete) game has a value. Then the limit at each point, $u(x) = \lim_{\varepsilon\to 0} u_1^\varepsilon(x)$, is the function indicating the minimum payoff each player can expect if the (continuous) game starts at $x_0 = x$.
Suppose player 2 chose the direction of the gradient of $u$, trying to decrease $u$ as much as possible. Then player 1 would take the positive sign and in the new position player 2 would pay more. So we expect that a (good) player 2 chooses only directions orthogonal to the gradient of $u$.
We have the following result: the function $u(x)$ verifies
$$-\frac12 \Delta_1 u = f, \qquad u|_{\partial U} = g. \qquad (5.9)$$

5.2.6 Other games
In [KS1] and [KS2] the authors study games for non-linear parabolic or elliptic equations. As an example, we explain the game leading to the backward heat equation.
We consider the backward Cauchy problem
$$u_t(t,x) + u_{xx}(t,x) = 0, \qquad u(T,x) = f(x).$$
As before, there are two players, an initial point $x_0$ and a fixed $\varepsilon$. We fix $0 < t < T$. Player 1 chooses a number $\alpha$. Then, knowing player 1's $\alpha$, player 2 chooses $b = \pm 1$. Player 1 pays $\sqrt 2\, \varepsilon \alpha b$. Then the time, which started at $t$, becomes $t + \varepsilon^2$, and the token's position, originally at $x_0$, becomes $x_k = x_{k-1} + \sqrt 2\, \varepsilon b$. The game continues until the final time $T$. Then player 1 receives a payoff $f(x(T))$, where $x(T)$ is the token's final position. Player 1 wants the final payoff minus the running cost to be maximal; player 2 wants to minimize this quantity.
Player 1's value function, $u_1^\varepsilon(t,x)$, is defined as the maximum of the final payoff minus the running cost if the game has initial time $t$ and initial position $x_0 = x$. This function converges to the solution of the backward Cauchy problem for the heat equation.
These games have a dynamic programming principle (see [KS1]). The proof (see [KS1]) uses the dynamic programming principle and the Taylor formula. In these papers the authors also study the relationship between economics and these games.
Chapter 6
Numerical experiments
In the introduction we talked about the applications of these representation formulas to numerical methods. But first we have to simulate the paths of a given SDE. To do this we apply an explicit Euler method. So, suppose we have the 1D diffusion given by
$$dY = b(Y)\, dt + \sigma(Y)\, dW$$
and an initial value $Y_0$. Our method computes
$$Y(n+1) = Y(n) + b(Y(n))\,T/N + \sigma(Y(n))(W(t(n+1)) - W(t(n))).$$
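A sketch of this scheme (often called the Euler-Maruyama method) in Python; the coefficients in the example are illustrative, not taken from the text:

```python
import numpy as np

def euler_maruyama(b, sigma, y0, T, N, rng):
    """Explicit Euler for dY = b(Y) dt + sigma(Y) dW on [0, T] with N steps."""
    dt = T / N
    y = np.empty(N + 1)
    y[0] = y0
    for n in range(N):
        dW = np.sqrt(dt) * rng.standard_normal()   # W(t_{n+1}) - W(t_n) ~ N(0, dt)
        y[n + 1] = y[n] + b(y[n]) * dt + sigma(y[n]) * dW
    return y

# Sanity check: with sigma = 0 the scheme is the explicit Euler method
# for the ODE y' = y, so Y(T) should approximate y0 * exp(T).
rng = np.random.default_rng(0)
path = euler_maruyama(lambda y: y, lambda y: 0.0, 1.0, 1.0, 10000, rng)
```

The key point is that the increments $W(t_{n+1}) - W(t_n)$ are independent Gaussians of variance $\Delta t$, so the scheme only needs standard normal samples.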
Now we can apply a Monte-Carlo method to the elliptic or parabolic equations.
Let $U$ be the square centered at the origin with side 2. Our boundary value will be $g(x) = x_1$. The idea is to simulate a big number of brownian paths started at the same point $x$, look at the boundary points hit, and take the mean of $g$ evaluated at these points. This will be our $u(x)$.
In figure 6.1 we can see the result of an experiment with 100 points in the grid, a time step of $1/9$ and 100 diffusions per point. On my PC (1.73 GHz) the computation took 32 seconds.
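A sketch of this experiment, with the Brownian paths replaced by a simple random walk on a lattice (the step $h$ plays the role of the time discretization). Since $g(x) = x_1$ is itself harmonic, the exact solution is $u(x) = x_1$, which gives a check on the estimate; the starting point, step size and sample count below are illustrative:

```python
import numpy as np

def mc_dirichlet(x, g, n_paths, h, rng):
    """Estimate u(x) for Laplace's equation on the square (-1,1)^2 with
    boundary data g, averaging g over the exit points of random walks."""
    steps = np.array([[h, 0.0], [-h, 0.0], [0.0, h], [0.0, -h]])
    total = 0.0
    for _ in range(n_paths):
        p = np.array(x, dtype=float)
        while np.max(np.abs(p)) < 1.0:        # still inside the square
            p += steps[rng.integers(4)]       # one lattice step
        total += g(p)
    return total / n_paths

rng = np.random.default_rng(1)
u_approx = mc_dirichlet((0.3, -0.2), lambda p: p[0], n_paths=2000, h=0.05, rng=rng)
```

The statistical error decays like $1/\sqrt{n_{\text{paths}}}$, independently of the dimension, which is the main selling point of the method.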
For the parabolic equations (we consider the Cauchy problem) we can do the same. Fix $t$; then for a given point $x$ we consider a big number of diffusions started at $x$ and we look at the value of $f$ at the final point (at time $t$) of each diffusion. Then we take the mean and this will be our numerical approximation.
For example, if we consider the heat equation with initial value the characteristic function of a given set, we have the results (with the same number of points in the grid and the same time step) shown in figures 6.2 and 6.3.
We expect the error to decay if we take a greater number of diffusions, a greater number of points in the grid and a smaller time step; however, this is not always the case, as we will see.
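For the heat equation $u_t = \frac12 u_{xx}$ the recipe above is just the representation $u(t,x) = E[f(x + W_t)]$. A sketch, checked against the exact solution for $f(x) = \cos x$, namely $u(t,x) = e^{-t/2}\cos x$ (the sample size is illustrative):

```python
import numpy as np

def heat_mc(f, t, x, n_samples, rng):
    """Monte-Carlo solution of u_t = (1/2) u_xx, u(0,.) = f,
    via u(t, x) = E[f(x + W_t)] with W_t ~ N(0, t)."""
    w = np.sqrt(t) * rng.standard_normal(n_samples)
    return float(np.mean(f(x + w)))

rng = np.random.default_rng(2)
u_approx = heat_mc(np.cos, t=1.0, x=0.5, n_samples=200000, rng=rng)
u_exact = float(np.exp(-0.5) * np.cos(0.5))
```

Here no time stepping is needed at all, because $W_t$ can be sampled exactly; for diffusions with non-constant coefficients one would combine this with the Euler scheme above.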
Figure 6.1: The numerical solution.
In [IN] we can read how to use the Monte-Carlo method to approximate the solution of the Navier-Stokes and Burgers equations. The idea is to replace the flow $X$ by $M$ copies of it, each driven by an independent Wiener process, and to replace the expected value by the mean. This gives a second system, the numerical one, which we call the Monte-Carlo system. This method is valid for the Navier-Stokes equations in 2D (we know the existence for the Euler equations in 2D); however, it is not valid for the one-dimensional Burgers equation. The reason is that the Monte-Carlo system (the numerical one) is dissipative only for short time. When the Monte-Carlo system stops dissipating energy, the nonlinearity forces it to shock. The really remarkable result is that if we solve the Monte-Carlo system for a short time, then replace the initial data with the solution at this small time and restart the procedure, we obtain the real solution, without spurious shocks. The stochastic system (see chapter 4) is Markovian, but the Monte-Carlo system is not. If we reset often enough, the dissipation is strong enough to prevent shocks. Thus we conclude that the mean is less dissipative than the expectation.
There is current research in this field; see for example Denis Talay's work [GKMPPT].
See appendix C for the Matlab code.
Figure 6.2: Initial value.
Figure 6.3: Numerical solution at time 4.
Chapter 7
Conclusion
Throughout the text we have shown how Markov processes relate to differential equations, starting with the result of Kakutani for harmonic functions and concluding with an operator, the ∞-laplacian, for which the Markov process considered is much more complicated. The Markovian paths are like the characteristic curves for the second-order equations. Or, citing Kohn and Serfaty (see [KS1]),
'the first and second-order cases are actually quite similar.'
These probabilistic methods offer a new intuition useful in many problems (Feynman-Kac, the Kolmogorov-Petrovski-Piskounov equation, the Navier-Stokes equations...). Moreover, they give us a new way to obtain well-known results (like the mean value property, the Harnack inequality, or the existence of a classical local solution to the 3D Navier-Stokes equations). We can also generalize this technique to non-local operators (see [A], [NT]), which are the generators of more general Markov processes, the Lévy processes.¹ Other equations to which these methods can be applied are the wave equation, the beam equation... (see [DMT]).
From the point of view of numerical analysis these methods are useful in problems with a very complicated geometry, or in high dimensions; in these cases a Monte-Carlo method does not suffer the usual drawbacks. We can also use them to split the domain into subdomains in order to apply another method (like the finite elements method...).
These ideas are useful in many applications (silhouette recognition, fluid mechanics, quantum mechanics...) because, for example, they give us a third formulation of quantum mechanics, Feynman's formulation. This formulation has the serious advantage of being easily generalized to quantum fields.
These methods are well adapted to Dirichlet boundary conditions. However, Neumann boundary conditions can also be studied with these methods if we consider Itô diffusions reflected at the boundary (see [F], [R]).
1 These processes can be thought of as 'brownian motions with jumps', and the jumps are the reason why they are non-local.
Appendix A
Some useful results
A.1 A construction for the brownian motion
As we can see in the Itô formula (see below), to define an SDE we need the
brownian motion's increments. Typically increments are related to the
concept of derivative. The brownian motion is not smooth, but we want to
preserve this idea. We start with the white noise, formally $dW/dt$, and define
the brownian motion by a time integral. The white noise is in $L^2(0,1)$,
where we choose this time interval to fix ideas, but the construction works in general.
So, if the $\psi_n$ are an orthonormal basis of $L^2(0,1)$ we can write
$$dW(\omega, t) = \sum_{n=0}^{\infty} A_n(\omega)\psi_n(t).$$
Using the brownian motion properties and the fact that, formally, $dW/dt$ is the
derivative of $W$, we expect that
$$A_n \sim N(0,1)$$
and that they are independent random variables.
We are going to see this more carefully. We have
$$A_n(\omega) = \int_0^1 dW(\omega, t)\psi_n(t)\,dt.$$
If we suppose that they are normal with mean 0 and variance 1 then we
have, for $n \neq m$,
$$0 = E[A_n]E[A_m] = E[A_n A_m] = \int_0^1 \psi_n \psi_m\,dt.$$
A similar condition holds for the variance,
$$E[A_n^2] = 1.$$
We define the brownian motion as
$$W(\omega, t) = \int_0^t dW(\omega, s)\,ds = \sum_{n=0}^{\infty} A_n(\omega) \int_0^t \psi_n(s)\,ds.$$
We have not chosen a basis yet; the previous formula is valid for any basis.
Let $h_n$ be the $n$-th function of the Haar basis, i.e. $h_0 = 1_{(0,1)}$,
$$h_1 = 1_{(0,1/2)} - 1_{(1/2,1)},$$
$$h_n = 2^{k/2}\, 1_{((n-2^k)/2^k,\,(n-2^k+1/2)/2^k)} - 2^{k/2}\, 1_{((n-2^k+1/2)/2^k,\,(n-2^k+1)/2^k)},$$
where $k$ is such that $2^k \leq n < 2^{k+1}$.
In this basis everything is simpler, because
$$\int_0^t h_n(s)\,ds = s_n(t),$$
where $s_n$ is the $n$-th Schauder function (the Schauder functions also form a basis).
In this way we obtain the following result.
Theorem 30. Let $A_n \sim N(0,1)$ be independent random variables. Then
$$W(\omega, t) = \sum_{k=0}^{\infty} A_k(\omega) s_k(t)$$
converges uniformly in $t$ for almost every $\omega$. Moreover, $W$ defined in
this way is a brownian motion.
See [Ev] for the proof. A calculation with characteristic functions shows
that the increments are as we expect.
Once we have the one-dimensional brownian motion on (0,1) we extend
the definition to higher dimensions by putting together several one-dimensional
brownian motions. To extend to longer times we do the same.
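As a sanity check of Theorem 30, one can truncate the series and verify that the resulting process has the right variance. A minimal Python sketch (the code in appendix C is Matlab; Python/NumPy is used here only for convenience, and the $2^{k/2}$ normalization of the Haar functions is assumed):

```python
import numpy as np

def schauder_basis(t, n_max):
    """Schauder functions s_k(t) = int_0^t h_k(s) ds for the Haar basis h_k."""
    t = np.asarray(t, dtype=float)
    S = np.zeros((n_max, t.size))
    S[0] = t                                  # s_0: integral of h_0 = 1_(0,1)
    for n in range(1, n_max):
        k = int(np.floor(np.log2(n)))         # level: 2^k <= n < 2^(k+1)
        left = (n - 2**k) / 2**k
        mid = left + 1.0 / 2**(k + 1)
        right = left + 1.0 / 2**k
        # integrating 2^(k/2) on (left, mid) and -2^(k/2) on (mid, right)
        # gives a tent of height 2^(-k/2 - 1) peaking at mid
        up = np.clip(t - left, 0.0, mid - left)
        down = np.clip(t - mid, 0.0, right - mid)
        S[n] = 2.0 ** (k / 2) * (up - down)
    return S

rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 201)
S = schauder_basis(t, 256)
A = rng.standard_normal((2000, 256))   # A_k ~ N(0,1), independent
W = A @ S                              # 2000 truncated sample paths

var_half = W[:, 100].var()             # sample variance of W(1/2); should be ~ 1/2
```

With 256 terms the sample variance of $W(1/2)$ is already very close to the exact value $1/2$, since only the first two Schauder functions are nonzero at $t=1/2$.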
A.2 Kolmogorov's regularity theorem
Theorem 31 (Kolmogorov). Let $X$ be a stochastic process with almost surely
continuous paths such that
$$E[|X(t) - X(s)|^{\beta}] \leq C(t-s)^{1+\alpha}, \quad \forall t, s \geq 0.$$
Then for all $0 < \gamma < \frac{\alpha}{\beta}$ and $T > 0$ there exists $K(\omega)$ such that
$$|X(t) - X(s)| \leq K|t-s|^{\gamma}.$$
Proof. We take $T = 1$ without loss of generality and fix $\gamma$ in the considered
interval. We define, for $n \geq 1$,
$$A_n = \left\{ \left| X\left(\tfrac{i+1}{2^n}\right) - X\left(\tfrac{i}{2^n}\right) \right| \geq \frac{1}{2^{n\gamma}} \text{ for some integer } i < 2^n \right\},$$
that is, the sets of events that have large increments in the partition we consider.
The idea now is to bound them and apply Borel-Cantelli. By Chebyshev's inequality,
$$P(A_n) \leq \sum_{i=0}^{2^n-1} P\left( \left| X\left(\tfrac{i+1}{2^n}\right) - X\left(\tfrac{i}{2^n}\right) \right| \geq \frac{1}{2^{n\gamma}} \right) \leq \sum_{i=0}^{2^n-1} E\left[ \left| X\left(\tfrac{i+1}{2^n}\right) - X\left(\tfrac{i}{2^n}\right) \right|^{\beta} \right] 2^{n\gamma\beta} \leq C\, 2^n \left( \frac{1}{2^n} \right)^{1+\alpha} 2^{n\gamma\beta} = C\, 2^{-n(\alpha - \gamma\beta)}.$$
Since $\gamma < \alpha/\beta$ the exponent is negative, so if we sum over $n$ the series $\sum_{n=1}^{\infty} P(A_n)$ converges,
and using Borel-Cantelli's lemma we conclude that for almost every
$\omega$ there exists $m$ such that for $n \geq m$
$$\left| X\left(\tfrac{i+1}{2^n}\right) - X\left(\tfrac{i}{2^n}\right) \right| \leq \frac{1}{2^{n\gamma}}.$$
But, choosing a constant $K(\omega)$ to cover the first $m$ terms, we can have the
same bound for all $n$:
$$\left| X\left(\tfrac{i+1}{2^n}\right) - X\left(\tfrac{i}{2^n}\right) \right| \leq \frac{K}{2^{n\gamma}}, \quad \forall n \geq 0. \tag{A.1}$$
We have to see that the previous inequality gives us what we want.
We fix $\omega \in \Omega$ such that it holds. Let $t_1$ and $t_2$ be
two dyadic rational numbers1 such that $0 < t_2 - t_1 < 1$, and let $n$ be an integer
such that
$$2^{-n} \leq t_2 - t_1 \leq 2^{-n+1}.$$
Because they are dyadic we can write
$$t_1 = \frac{i}{2^n} - \frac{1}{2^{p_1}} - \frac{1}{2^{p_2}} - \dots - \frac{1}{2^{p_k}}, \quad n < p_1 < \dots < p_k,$$
$$t_2 = \frac{j}{2^n} + \frac{1}{2^{q_1}} + \frac{1}{2^{q_2}} + \dots + \frac{1}{2^{q_k}}, \quad n < q_1 < \dots < q_k,$$
for certain $i, j$ with
$$t_1 \leq \frac{i}{2^n} \leq \frac{j}{2^n} \leq t_2.$$
1 The rational numbers with a power of two as denominator.
Then, recalling which condition holds for the difference $t_2 - t_1$, we have that
$$\frac{j-i}{2^n} \leq t_2 - t_1 < \frac{1}{2^{n-1}},$$
concluding that $j = i$ or $j = i + 1$.
We can use equation (A.1) with the previously fixed $\omega$ and we obtain
$$\left| X\left(\tfrac{j}{2^n}\right) - X\left(\tfrac{i}{2^n}\right) \right| \leq K \left( \frac{j-i}{2^n} \right)^{\gamma} \leq K(t_2 - t_1)^{\gamma},$$
$$\left| X\left( \frac{i}{2^n} - \frac{1}{2^{p_1}} - \dots - \frac{1}{2^{p_r}} \right) - X\left( \frac{i}{2^n} - \frac{1}{2^{p_1}} - \dots - \frac{1}{2^{p_{r-1}}} \right) \right| \leq K \left( \frac{1}{2^{p_r}} \right)^{\gamma}.$$
Adding and subtracting these intermediate points and applying the triangle
inequality we can bound
$$\left| X(t_1) - X\left(\tfrac{i}{2^n}\right) \right| \leq K \sum_{r=1}^{k} \frac{1}{2^{p_r \gamma}}. \tag{A.2}$$
We have $p_r > n$, so
$$\frac{1}{2^{p_r}} = \frac{1}{2^n} \frac{1}{2^{p_r - n}} \leq \frac{1}{2^n} \frac{1}{2^r},$$
where we used the condition $n < p_1 < \dots < p_r$. In addition we can sum the
whole series, obtaining
$$K \sum_{r=1}^{k} \frac{1}{2^{p_r \gamma}} \leq \frac{K}{2^{n\gamma}} \sum_{r=1}^{\infty} \frac{1}{2^{r\gamma}} \leq \frac{C}{2^{n\gamma}}.$$
In the last inequality we used that the geometric series converges, since $\gamma > 0$.
Using the properties of $n$ we conclude that
$$\frac{C}{2^{n\gamma}} \leq C(t_2 - t_1)^{\gamma}.$$
In a similar way we obtain a bound for $|X(t_2) - X(j/2^n)|$. Now
$$|X(t_1) - X(t_2)| = |X(t_1) - X(i/2^n) + X(i/2^n) - X(j/2^n) + X(j/2^n) - X(t_2)| \leq C_1(\omega)|t_2 - t_1|^{\gamma}$$
for all dyadic rational numbers in $[0,1]$. We know that the process has
continuous paths, and so we conclude the same for all $t \in [0, 1]$.
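For brownian motion $E|W(t) - W(s)|^4 = 3|t-s|^2$, so the theorem applies with $\beta = 4$, $\alpha = 1$ and yields Hölder continuity for every $\gamma < 1/4$. The following Python sketch (illustrative only; the thesis code is Matlab) checks along one simulated path that the dyadic ratios appearing in the proof stay bounded:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 2**14
W = np.concatenate([[0.0], np.cumsum(rng.standard_normal(N) / np.sqrt(N))])

# E|W(t) - W(s)|^4 = 3|t - s|^2, i.e. beta = 4, alpha = 1, so the theorem
# gives Hölder continuity for every gamma < 1/4; we check that the dyadic
# increments of the proof are bounded by K * 2^(-n*gamma).
gamma = 0.2
ratios = []
for n in range(1, 11):                  # dyadic levels, as in the proof
    step = N // 2**n                    # spacing 2^-n in index units
    incr = np.abs(np.diff(W[::step]))
    ratios.append(incr.max() * 2.0 ** (n * gamma))
K = max(ratios)                         # plays the role of K(omega)
```

The constant `K` stays of order one for any reasonable seed, which is exactly what equation (A.1) predicts.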
A.3 The Itô formula
We consider the operator
$$Au = \frac{1}{2} \sum_{i,j=1}^{d} a_{i,j}(x) \frac{\partial^2 u}{\partial x_i \partial x_j} + \sum_{i=1}^{d} b_i(x) \frac{\partial u}{\partial x_i},$$
where
$$a_{i,j} = (\sigma \sigma^t)_{i,j}.$$
We have the following well-known theorem.
Theorem 32 (Itô formula). Let
$$d\vec{X} = \vec{b}(\vec{X}(t), t)dt + \sigma\, d\vec{W},$$
i.e.,
$$dX^i = b^i(\vec{X}, t)dt + \sum_j \sigma^{i,j}(\vec{X}, t)dW^j,$$
with $b^i \in L^1([0,T])$ and $\sigma^{i,j} \in L^2([0,T])$,2 $1 \leq i,j \leq d$. Let
$$u : \mathbb{R}^d \times [0,T] \to \mathbb{R}$$
be a given continuous function with two spatial derivatives and one derivative
in time. Then $Y(t) = u(X^1, \dots, X^d, t)$ has the differential
$$dY = \frac{\partial u}{\partial t}(X(t), t)dt + \sum_{i=1}^{d} \frac{\partial u}{\partial x_i}(X(t), t)dX^i + \frac{1}{2} \sum_{i,j=1}^{d} \frac{\partial^2 u}{\partial x_i \partial x_j}(X(t), t)dX^i dX^j \tag{A.3}$$
where the terms with $dX^i$ obey the following 'rules':
$$dt^2 = 0, \quad dt\,dW^j = 0, \quad dW^i dW^j = \delta_{i,j}\,dt.$$
We can write it in the integral formulation
$$u(\vec{X}(t), t) - u(\vec{X}(0), 0) = \int_0^t \left( \frac{\partial u}{\partial t} + Au \right) ds + \int_0^t \nabla u \cdot \sigma\, d\vec{W}. \tag{A.4}$$
We are going to prove the one-dimensional version.
2 These spaces contain the functions $f$ with $E\left[\int_0^T |f|^p\,dt\right] < \infty$ and $p = 1, 2$.
Theorem 33 (Itô formula (1D)). Let
$$dX = b(X(t), t)dt + \sigma(X(t), t)dW$$
with $b \in L^1([0,T])$ and $\sigma \in L^2([0,T])$, and let
$$u : \mathbb{R} \times [0,T] \to \mathbb{R}$$
be a given continuous function with
$$\frac{\partial u}{\partial t},\ \frac{\partial u}{\partial x},\ \frac{\partial^2 u}{\partial x^2}$$
continuous. Then $Y(t) = u(X(t), t)$ has the differential
$$dY = \frac{\partial u}{\partial t}(X(t), t)dt + \frac{\partial u}{\partial x}(X(t), t)dX + \frac{1}{2} \frac{\partial^2 u}{\partial x^2}(X(t), t)\sigma^2 dt$$
$$= \left( \frac{\partial u}{\partial t}(X(t), t) + \frac{\partial u}{\partial x}(X(t), t)b(X(t), t) + \frac{1}{2} \frac{\partial^2 u}{\partial x^2}(X(t), t)\sigma^2 \right) dt + \frac{\partial u}{\partial x}(X(t), t)\sigma(X(t), t)dW.$$
Proof. We start with the case $u = x^m$. We claim that
$$d(X^m)(t) = mX(t)^{m-1} dX + \frac{1}{2}m(m-1)X^{m-2}(t)\sigma^2 dt.$$
The cases $m = 0, 1$ are trivial. We suppose that the claim holds for $m - 1$
and we are going to show it for $m$. We will use the following lemma
(see [Ev] for the proof):
Lemma 11 (Product rule). Suppose that
$$dX_1(t) = b_1(X_1(t), t)dt + \sigma_1(X_1(t), t)dW,$$
$$dX_2(t) = b_2(X_2(t), t)dt + \sigma_2(X_2(t), t)dW,$$
with $b_i \in L^1([0,T])$ and $\sigma_i \in L^2(0,T)$. Then the differential of the product is
$$d(X_1 X_2)(t) = X_2(t)dX_1(t) + X_1(t)dX_2(t) + \sigma_1(X_1(t), t)\sigma_2(X_2(t), t)dt.$$
We write $d(X^{m-1}X)$ using the lemma and conclude that the Itô formula holds for this kind of
function. By linearity the Itô formula then holds
for all polynomials in $x$. Next we consider the product of a polynomial in $x$
and another in $t$, i.e. $u(t,x) = f(x)g(t)$. Then
$$d(f(X(t))g(t)) = f(X(t))g'dt + g\,df(X(t)) = f(X(t))g'dt + g\left[ f'(X(t))dX + \frac{1}{2}f''(X(t))\sigma^2 dt \right].$$
This is the expression we expect. We conclude that the formula holds for all
$$u(x,t) = \sum_{i=1}^{m} f_i(x)g_i(t). \tag{A.5}$$
We finish with a density argument. Let $u$ be a function as stated above; then
there exist $u_n$ as in (A.5) approximating $u$, and the mentioned derivatives of $u$,
uniformly on compact subsets of $\mathbb{R} \times [0,T]$. We conclude the proof taking
limits.
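The one-dimensional formula is easy to test numerically. Taking $u(x) = x^2$ and $dX = dW$ (so $b = 0$, $\sigma = 1$), the formula says $d(W^2) = 2W\,dW + dt$. A Python sketch (illustrative only, not part of the thesis' Matlab code) compares both sides along one simulated path:

```python
import numpy as np

rng = np.random.default_rng(2)
T, N = 1.0, 100_000
dW = rng.standard_normal(N) * np.sqrt(T / N)
W = np.concatenate([[0.0], np.cumsum(dW)])

# Itô formula for u(x) = x^2 and dX = dW:
# W(T)^2 = 2 * int_0^T W dW + T (T is the sigma^2 dt correction term).
ito_sum = np.sum(W[:-1] * dW)           # Itô sum evaluated at left endpoints
lhs = W[-1] ** 2
rhs = 2.0 * ito_sum + T
error = abs(lhs - rhs)
```

Without the correction term $T$ the two sides differ by roughly $T$ itself, which is exactly the failure of the classical chain rule discussed in appendix B.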
We have a relationship between the SDE (1.10) and the elliptic operator
$$Au = \frac{1}{2} \sum_{i,j=1}^{d} a_{i,j}(x) \frac{\partial^2 u}{\partial x_i \partial x_j} + \sum_{i=1}^{d} b_i(x) \frac{\partial u}{\partial x_i},$$
where
$$a_{i,j} = (\sigma \sigma^t)_{i,j}.$$
Another useful version is the following.
Theorem 34 (Itô formula (stopping times)). Let
$$d\vec{X} = \vec{b}(\vec{X}(t), t)dt + \sigma\, d\vec{W},$$
i.e.,
$$dX^i = b^i(\vec{X}, t)dt + \sum_j \sigma^{i,j}(\vec{X}, t)dW^j,$$
with $b^i \in L^1([0,T])$ and $\sigma^{i,j} \in L^2([0,T])$, $1 \leq i,j \leq d$, and let
$$u : \mathbb{R}^d \times [0,T] \to \mathbb{R}$$
be a given continuous function with two spatial derivatives and one time derivative.
Let $\tau$ be a stopping time. Then $Y(t) = u(X^1, \dots, X^d, t)$ has the differential
$$dY = \frac{\partial u}{\partial t}(X(t), t)dt + \sum_{i=1}^{d} \frac{\partial u}{\partial x_i}(X(t), t)dX^i + \frac{1}{2} \sum_{i,j=1}^{d} \frac{\partial^2 u}{\partial x_i \partial x_j}(X(t), t)dX^i dX^j \tag{A.6}$$
where the terms with $dX^i$ obey the following 'rules':
$$dt^2 = 0, \quad dt\,dW^j = 0, \quad dW^i dW^j = \delta_{i,j}\,dt.$$
We can also write it in the integral formulation
$$u(\vec{X}(\tau), \tau) - u(\vec{X}(0), 0) = \int_0^\tau \left( \frac{\partial u}{\partial t} + Au \right) ds + \int_0^\tau \nabla u \cdot \sigma\, d\vec{W}. \tag{A.7}$$
The 'rules' in the main result can be shown using the joint quadratic
variation.
Definition 11 (Quadratic variation). Let $X(t)$ be a stochastic process defined for $a < t < b$ and let $P = \{a \leq t_0 < \dots < t_n \leq b\}$ be a partition of this
interval. We define the quadratic variation related to the given partition as
$$\langle X(t) \rangle_P = \sum_{i=0}^{k-1} (X(t_{i+1}) - X(t_i))^2 + (X(t) - X(t_k))^2,$$
where $t_k$ is such that $t_k < t < t_{k+1}$. If, when we refine the partition,
there exists a limit $\langle X(t) \rangle$ (in probability), and this limit is independent
of the considered partitions, then we call this limit the quadratic variation of
$X(t)$.
If we have a continuous process of bounded variation then its quadratic
variation vanishes. In particular, the quadratic variation of a smooth function vanishes.
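The definition can be illustrated numerically: along a fine dyadic partition the quadratic variation of a simulated brownian path on $[0,1]$ approaches $t = 1$, while that of a smooth path is negligible. A Python sketch (NumPy assumed; the thesis code is Matlab):

```python
import numpy as np

rng = np.random.default_rng(3)
N = 2**16
t = np.linspace(0.0, 1.0, N + 1)
W = np.concatenate([[0.0], np.cumsum(rng.standard_normal(N) / np.sqrt(N))])
smooth = np.sin(2.0 * np.pi * t)          # a C^1 path, for comparison

qv_W = np.sum(np.diff(W) ** 2)            # quadratic variation of W on [0,1]: ~ 1
qv_smooth = np.sum(np.diff(smooth) ** 2)  # vanishes as the partition refines
```

For the smooth path the sum scales like $|P|$ and disappears under refinement, whereas for brownian motion it stabilizes at $t$, which is the content of the rule $dW^2 = dt$.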
Definition 12 (Joint quadratic variation). Let $M(t), X(t)$ be two stochastic
processes defined for $a < t < b$ and let $P = \{a \leq t_0 < \dots < t_n \leq b\}$ be a partition
of this interval. We define
$$\langle X(t), M(t) \rangle_P = \sum_{i=0}^{k-1} (X(t_{i+1}) - X(t_i))(M(t_{i+1}) - M(t_i)) + (X(t) - X(t_k))(M(t) - M(t_k)),$$
where $t_k$ is such that $t_k < t < t_{k+1}$. If, when we refine the partition,
there exists a limit $\langle X(t), M(t) \rangle$ (in probability), and this limit is independent of the considered partitions, then we call this limit the joint quadratic
variation of $X(t)$ and $M(t)$.
The joint quadratic variation (if it exists) is a continuous process of bounded
variation. In addition it is bilinear, symmetric, positive definite, and a kind
of Schwarz inequality holds:
$$|\langle X(t), M(t) \rangle - \langle X(s), M(s) \rangle| \leq \sqrt{\langle X(t) \rangle - \langle X(s) \rangle}\, \sqrt{\langle M(t) \rangle - \langle M(s) \rangle}.$$
To see where these processes are well-defined, along with other properties and the Itô
integral (or the Stratonovich integral) built on them, see [Ku].
In chapter 4 we need a generalized version of the Itô formula. See [Ku2] for
the complete proof. Before its statement we need some definitions.
Definition 13 (Local martingale). A stochastic process $X(t)$ adapted3 to a
given filtration $\mathcal{F}_t$ is a local martingale if there exist increasing stopping
times $\tau_n$ such that $X(\min\{t, \tau_n\})$ is a martingale.
3 i.e., $X(t)$ is $\mathcal{F}_t$-measurable for all $t$.
Definition 14 (Semimartingale). A stochastic process $X(t)$ is a semimartingale if it is the sum of a bounded variation process and a local martingale.
Theorem 35 (Itô formula (generalized)). Let $\vec{F}(x, t)$ be a process $C^2$ in
$x$ and a semimartingale $C^1$ in $t$. Let $\vec{g}(t)$ be a continuous semimartingale such that $x$ and $\vec{g}(t)$ take values in $D \subset \mathbb{R}^d$. Then $\vec{F}(\vec{g}(t), t)$ is a
continuous semimartingale and satisfies
$$F(\vec{g}(t), t) - F(\vec{g}(0), 0) = \int_0^t F(\vec{g}(s), ds) + \sum_{i=1}^{d} \int_0^t \frac{\partial F}{\partial x_i}(\vec{g}(s), s)\,dg_i(s) + \frac{1}{2} \sum_{i,j=1}^{d} \int_0^t \frac{\partial^2 F}{\partial x_i \partial x_j}(\vec{g}(s), s)\, \langle dg_i(s), dg_j(s) \rangle + \sum_{i=1}^{d} \left\langle \int_0^t \frac{\partial F}{\partial x_i}(\vec{g}(s), ds),\, g^i(t) \right\rangle.$$
An idea to prove it is to see that, for a given partition, we have
$$F(g_t, t) - F(g_0, 0) = \sum_k F(g_{t_{k+1}}, t_{k+1}) - F(g_{t_k}, t_k) = \sum_k F(g_{t_{k+1}}, t_{k+1}) \pm F(g_{t_k}, t_{k+1}) - F(g_{t_k}, t_k)$$
and
$$\sum_k F(g_{t_k}, t_{k+1}) - F(g_{t_k}, t_k) \approx \int_0^t F(g_s, ds).$$
For the other terms we proceed in a similar way.
A.4 Existence and uniqueness for PDE
Sometimes we use an existence and uniqueness result for classical solutions
of certain PDE.
Theorem 36. Let $\tilde{A}$ be the elliptic operator defined previously (see chapters 2, 3) with $c \geq 0$ and bounded, Hölder-$\alpha$ coefficients. We consider
a bounded domain $U$ satisfying the inner sphere property at every point of
the boundary. Let $f$ be a bounded, Hölder-$\alpha$ function, and let $g$ be a
continuous function. Then the problem
$$\tilde{A}u = f \text{ if } x \in U, \quad u|_{\partial U} = g$$
has a unique classical solution $u \in C(\bar{U}) \cap C^{2,\alpha}(U)$.
See [GT] for the proof.
Appendix B
Itô integral
We have to make sense of the expression
$$\int_0^t G\,dW$$
where $G$ is a stochastic process and $dW/dt$ is the standard white noise. As
before, we have a fixed probability space and a filtration1 adapted to the
given brownian motion.
Definition 15. Given a time interval $[0, T]$, we define a partition $P$ as a
sequence of times satisfying
$$0 = t_0 < t_1 < \dots < t_n = T.$$
We define the size of the partition as
$$|P| = \max_{0 \leq k \leq n-1} |t_{k+1} - t_k|.$$
To define the Itô integral we proceed as follows: first we consider step processes,
and then we approximate more general processes by step processes.
We consider the space
$$L^2(0, T) = \left\{ G \text{ progressively measurable such that } E\left[ \int_0^T G^2 dt \right] < \infty \right\}.$$
1 A sequence of $\sigma$-algebras satisfying the following conditions:
$$\mathcal{F}(t) \subset \mathcal{F}(s) \text{ if } t < s,$$
$$\sigma(W(t)) \subset \mathcal{F}(t) \quad \forall t \geq 0,$$
$$\mathcal{F}(t) \text{ independent of } \sigma(W(s) - W(t), \forall s \geq t),$$
where $\sigma(W(s) - W(t), \forall s \geq t)$ is the future of the brownian motion.
Definition 16. We define a step process as a process $G \in L^2(0,T)$ for which
there exist time intervals $(t_k, t_{k+1})$ such that $G(t) = \sum_{k=0}^{n-1} G_k 1_{(t_k, t_{k+1})}$, with
the $G_k$ random variables that are $\mathcal{F}(t_k)$-measurable. For these processes we define the
Itô stochastic integral as
$$\int_0^T G\,dW = \sum_{k=0}^{n-1} G_k(W(t_{k+1}) - W(t_k)). \tag{B.1}$$
We recall that it is a random variable.
From the definition we obtain that this operator is linear.
Using the linearity, the brownian motion properties, and the independence
hypothesis in the definition of the filtration, we have
$$E\left[ \int_0^T G\,dW \right] = E\left[ \sum_{k=0}^{n-1} G_k(W(t_{k+1}) - W(t_k)) \right] = 0. \tag{B.2}$$
We know that the brownian motion has bounded quadratic variation, and so
$$E\left[ \left( \int_0^T G\,dW \right)^2 \right] = E\left[ \int_0^T G^2 dt \right]. \tag{B.3}$$
Indeed,
$$E\left[ \left( \int_0^T G\,dW \right)^2 \right] = \sum_{k,j=0}^{n-1} E[G_k(W(t_{k+1}) - W(t_k))G_j(W(t_{j+1}) - W(t_j))].$$
If we now suppose that $j \neq k$ then, applying independence, those terms vanish, so only the terms with $j = k$ remain. We now use that
$$E[(W(t_{k+1}) - W(t_k))^2] = t_{k+1} - t_k.$$
We have then the following result.
Theorem 37. The following properties hold for the Itô stochastic integral
of a step process:
1. It is linear.
2. We have that
$$E\left[ \int_0^T G\,dW \right] = 0, \quad E\left[ \left( \int_0^T G\,dW \right)^2 \right] = E\left[ \int_0^T G^2 dt \right].$$
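Both properties can be checked by Monte-Carlo for the adapted integrand $G(t) = W(t)$, for which $E[\int_0^1 W\,dW] = 0$ and, by the isometry, $E[(\int_0^1 W\,dW)^2] = E[\int_0^1 W^2\,dt] = \int_0^1 t\,dt = 1/2$. A Python sketch (illustrative only; NumPy assumed):

```python
import numpy as np

rng = np.random.default_rng(4)
M, N = 20_000, 500                      # M paths, N time steps on [0,1]
dW = rng.standard_normal((M, N)) / np.sqrt(N)
W = np.hstack([np.zeros((M, 1)), np.cumsum(dW, axis=1)])

# Itô sums sum_k W(t_k)(W(t_{k+1}) - W(t_k)), one per path
I = np.sum(W[:, :-1] * dW, axis=1)

mean_I = I.mean()                       # (B.2): should be ~ 0
second_moment = (I ** 2).mean()         # (B.3): E[int_0^1 W^2 dt] = 1/2
```

The empirical mean is close to zero and the empirical second moment close to $1/2$, matching (B.2) and (B.3).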
Now, given a general stochastic process in $L^2(0, T)$, we approximate it
by a sequence of step processes $G_n$ converging to $G$ in $L^2(0,T)$. The corresponding
integrals form a Cauchy sequence in $L^2(\Omega)$, because
$$E\left[ \left( \int_0^T (G_m - G_n)\,dW \right)^2 \right] = E\left[ \int_0^T (G_m - G_n)^2 dt \right] \to 0,$$
using the second property in the previous theorem. We can then define the integral of the limit as the limit of the integrals in the $L^2(\Omega)$ sense. See [Du],
[Ev] for more details.
The stochastic integral of an $L^2(0, T)$ process satisfies the same properties
as for step processes.
For the indefinite integral we have the following result, which is
needed in the first chapter.
Theorem 38. Let
$$I(t) = \int_0^t G\,dW.$$
Then $I(t)$ is a martingale. Moreover, it has continuous paths almost surely.
See [Ev] for a proof of the second part.
There is another tricky detail. We evaluated the Riemann sums at the
left-hand endpoint of each interval, $t_k$. If we evaluate the Riemann sums at
another point our result will change. The use of the left-hand endpoint is the Itô approach.
If we use the midpoint of the interval, we obtain the Stratonovich
integral. Their properties are very different: for example, in the Itô approach
we cannot apply the usual chain rule, while in the Stratonovich sense we
can. However, it is the Itô formula that gives us the results in this text. Another
advantage of the Itô approach is that the indefinite integral is a martingale.
There are useful conversion formulas to change between them (see
[Ev]).
We are going to study the following example:
$$\int_0^T W\,dW = \frac{1}{2}W^2(T) + \left( \lambda - \frac{1}{2} \right)T,$$
where $\lambda$ determines the Riemann sums
$$\sum_{k=0}^{m_n - 1} W(\tau_k)(W(t^n_{k+1}) - W(t^n_k))$$
with $\tau_k = (1 - \lambda)t_k + \lambda t_{k+1}$. Thus $\lambda = 0$
gives the Itô integral, and $\lambda = 1/2$
gives the Stratonovich integral. We observe the difference between the results.
We want to recall that the integral is the limit in the $L^2(\Omega)$ sense, that is,
$$E\left[ \left( \sum_{k=0}^{m_n - 1} W(\tau_k)(W(t_{k+1}) - W(t_k)) - \frac{1}{2}W^2(T) - \left( \lambda - \frac{1}{2} \right)T \right)^2 \right] \to 0$$
as $n \to \infty$.
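This $\lambda$-dependence is easy to observe numerically. The Python sketch below (NumPy; for $\lambda = 1/2$ the value $W(\tau_k)$ is approximated by the average of the endpoint values, which has the same $L^2$ limit) evaluates the Riemann sums for $\lambda = 0$ and $\lambda = 1/2$ and compares them with the limit $\frac{1}{2}W^2(T) + (\lambda - \frac{1}{2})T$:

```python
import numpy as np

rng = np.random.default_rng(5)
T, N = 1.0, 200_000
dW = rng.standard_normal(N) * np.sqrt(T / N)
W = np.concatenate([[0.0], np.cumsum(dW)])

def riemann_sum(lam):
    # W(tau_k) approximated by linear interpolation between the grid values;
    # lam = 0 is the left endpoint (Itô), lam = 1/2 the midpoint (Stratonovich)
    W_tau = (1.0 - lam) * W[:-1] + lam * W[1:]
    return np.sum(W_tau * np.diff(W))

def limit(lam):
    return 0.5 * W[-1] ** 2 + (lam - 0.5) * T

err_ito = abs(riemann_sum(0.0) - limit(0.0))
err_strat = abs(riemann_sum(0.5) - limit(0.5))
```

The two sums differ by roughly $T/2$, although both are built from the same path; only the evaluation point changes.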
Appendix C
Matlab code
C.1 Brownian motion paths
function [B,t]=browniano(N,M,T)
%This function approximates M trajectories of the brownian motion
%on the interval [0,T] with time step T/N
t=0:T/N:T;
B=zeros(N+1,M);
for j=1:M
for i=1:N
B(i+1,j)=B(i,j)+normrnd(0,sqrt(T/N)); %increments are N(0,sqrt(T/N))
end
end
C.2 Brownian bridge paths
function [B,P,t1]=puentebrowniano(N,M)
%This function approximates M trajectories of the brownian bridge
%on the interval [0,1] with time step 1/N
t1=0:1/N:1;
B=zeros(N+1,M);
P=B;
for j=1:M
for i=1:N
B(i+1,j)=B(i,j)+normrnd(0,sqrt(1/N)); %the brownian motion
%increments are normal with zero mean and standard
%deviation (1/N)^(1/2)
P(i,j)=B(i,j)-t1(i)*B(end,j);%we know P(t)=B(t)-t*B(T)
end
end
C.3 Euler method for a SDE
C.3.1 1D case
function [t,Y]=sde1(a,b,Y0,T,N)
%this function approximates the solution
%of the SDE
%dY=a(Y,t)dt+b(Y,t)dW
%using the Euler method, i.e.
%Y(n+1)=Y(n)+a(Y(n),t(n))*T/N+b(Y(n),t(n))*(W(t(n+1))-W(t(n)))
%a, b are the functions. Y0 is the initial value
%T is the final time and N is the number of time steps
t=0:T/N:T;
Y=zeros(1,N+1);
Y(1)=Y0;
for i=1:N
Y(i+1)=Y(i)+feval(a,Y(i),t(i))*T/N+feval(b,Y(i),t(i))...
*normrnd(0,sqrt(T/N)); %brownian increment over a step of size T/N
end
C.3.2 2D case
function [t,Y]=sde2(a,b,Y0,T,N)
%this function approximates the solution
%of the SDE
%dY=a(Y,t)dt+b(Y,t)dW
%using the Euler method, i.e.
%Y(n+1)=Y(n)+a(Y(n),t(n))*T/N+b(Y(n),t(n))*(W(t(n+1))-W(t(n)))
%a, b are the functions. Y0 is the initial value
%T is the final time and N is the number of time steps
t=0:T/N:T;
Y=zeros(2,N+1);%two dimensional diffusion
Y(:,1)=Y0;
for i=1:N
Y(:,i+1)=Y(:,i)+feval(a,Y(:,i),t(i)).*T/N+feval(b,Y(:,i),t(i))...
.*[normrnd(0,sqrt(T/N)),normrnd(0,sqrt(T/N))]';
end
C.4 Monte-Carlo method for the laplacian
function u=lapbrow(g,n,N,B)
%this function solves the 2d laplacian
%with a given boundary value (g)
%in [-1,1]^2.
%n^2 is the number of grid points.
%for each point (xi,yj) we calculate B
%brownian paths, and their g values and we take the mean.
%1/N is the time step
x=-1:1/(n-1):1;
y=x;
[X,Y]=meshgrid(x,y);
u=zeros(2*n-1,2*n-1);%the rows are y and the columns are x
for a=1:2*n-1
u(1,a)=feval(g,[x(a),1]);%y=1
end
for b=1:2*n-1
u(b,1)=feval(g,[-1,y(b)]);% x=-1
end
for c=1:2*n-1
u(2*n-1,c)=feval(g,[x(c),-1]);% y=-1
end
for d=1:2*n-1
u(d,2*n-1)=feval(g,[1,y(d)]);% x=1
end
M=zeros(B,2);
for i=2:2*n-2
for j=2:2*n-2
tau=[x(i),y(j)];%the brownian starts in a grid point
tau1=[0 0];
for k=1:B
while prod(abs(tau1)<[1,1])%this holds if
%we are in the square
tau1=tau+[normrnd(0,sqrt(1/N)),normrnd(0,sqrt(1/N))];
%the new point, possibly outside the square
if prod(abs(tau1)<[1,1])
tau=tau1;%this saves the interior point
end
end
M(k,:)=tau1;
tau1=[0 0];
tau=[x(i),y(j)];
end
for l=1:B
G(l)=feval(g,M(l,:));%we evaluate g in the l-th row
end
G;
u(j,i)=mean(G);
clear G;
end
end
mesh(X,Y,u);title(’Solution’)
C.5 Silhouette recognition
This program uses a SOR method to solve the Poisson equation. The domain
(the silhouette) is given as a .png image; the colors are inverted and the
interior points are selected with an 'if' condition.
function [img,img2,u,t,cnt]=imagessor(tol,itmax,image)
%This program uses a SOR
%method to solve the Poisson equation
%tol is the tolerance
%itmax is the maximum number of iterations
%image is a .png image
tic
img=imread(image);
figure;imagesc(img);
input(’Press any key’)
img=double(img);
[H,W]=size(img)
w= 2 / ( 1 + sin(pi/(H+1)) );%our SOR parameter
for i=1:H
for j=1:W
img2(i,j)=abs(img(i,j)-255); %change the colors between them
end
end
img2;
figure; imagesc(img2);
input(’Press any key’)
clear i j;
%We start the algorithm
u=img2;
v=u;
err=1;
cnt=0;
while((err>tol)&(cnt<=itmax))
for i=2:H-1
for j=2:W-1
if (img2(i,j)==0)
else
v(i,j)=u(i,j)+w*(v(i-1,j) + u(i+1,j) + v(i,j-1)...
... + u(i,j+1) +1 - 4*u(i,j))/4;
E(i,j)=v(i,j)-u(i,j);
end
end
end
err=norm(E,inf);
cnt=cnt+1;
u=v;
end
u=flipud(u);
figure;imagesc(u);
mesh(u)
t=toc;
The programs to calculate norms, gradients, Φ and Ψ are the following.
function [Gux,Guy,NGu,t]=gradient(u)
%this program calculates the gradient and its norm
%Gux is the first component,
%Guy is the second component
%NGu is the gradient norm
tic
[H,W]=size(u);
for i=2:H
for j=2:W
Gux(i,j)=u(i,j)-u(i-1,j);
Guy(i,j)=u(i,j)-u(i,j-1);
NGu(i,j)=(Gux(i,j)^2+Guy(i,j)^2)^0.5;
end
end
t=toc;
function [Phi,t]=phi(u,NGu)
%This program calculates phi=u+NGu^2
%NGu is the gradient of u norm
tic
[H,W]=size(NGu);
for i=1:H
for j=1:W
Phi(i,j)=u(i,j)+NGu(i,j)^2;
end
end
t=toc;
function [Psi,t]=psiimages(u,Gux,Guy,NGu)
%This program calculates psi=-div(gradient(u)/norm(gradient(u)))
%NGu is the gradient of u norm
%Gux is the first component,
%Guy is the second component
tic
[H,W]=size(NGu);
for i=2:H
for j=2:W
Psix(i,j)=((Gux(i,j)-Gux(i-1,j))*NGu(i,j)-Gux(i,j)...
...*(NGu(i,j)-NGu(i-1,j)))/NGu(i,j)^2;
Psiy(i,j)=((Guy(i,j)-Guy(i,j-1))*NGu(i,j)-Guy(i,j)...
...*(NGu(i,j)-NGu(i,j-1)))/NGu(i,j)^2;
Psi(i,j)=-Psix(i,j)-Psiy(i,j);
end
end
t=toc;
C.6 Monte-Carlo method for parabolic equations
function [t,u]=parabolic(T,M,N,a,b,dx,u0)
%Code to simulate M diffusions, with N time-grid nodes,
%started at each grid point. T is the final time (integer),
%a is the transport term, a function.
%b is the diffusion.
%u=u(T,x).
%dx is the spatial step
%u0 is the initial datum function
%%%% Time
tic
%%%% Domain
x=0:dx:10;
y=x;
Nx=length(x);
Ny=length(y);
%%%% Initial datum
for i=1:Nx
for j=1:Ny
uo(i,j)=feval(u0,[x(i),y(j)]);
end
end
clear i j;
figure(1)
mesh(uo);title(’Initial value’)
%%%%Time code
for l=1:T
%%%% Code to simulate the diffusions
for i=1:Nx %x step
for j=1:Ny %y step
%%%Simulate M diffusions started at [x(i),y(j)].
% We evaluate u0 at these M endpoints
% and save the values in a vector.
for k=1:M
[t,Y]=sde2(a,b,[x(i),y(j)]’,T,N);
uu(k)=feval(u0,Y(:,end));
end
%%%% Expectation code
uoo(i,j)=mean(uu); %u(l,i,j)=u(l,x(i),y(j))
u(l,i,j)=uoo(i,j);
clear uu;
end
end
figure
mesh(uoo);title(’Evolution’)
end
t=toc;
C.7 Code to approximate the ∞−laplacian
function [u,cnt,t]=infinitylaplacian(B,f,tol,itmax,N)
%This program approximates the infinity laplacian
%with f as right hand side term
%and boundary values B on the boundary of the square
%[-1/2,1/2]^2
%tol is the tolerance
%itmax is the maximum number of iterations
%(N+1)^2 is the number of grid points
%B and f are matrices
tic;
u=B;
u1=u;
umax=u;
umin=u;
v=u;
%The discretization of -inflap(u)=f is
%2u_ij-sup_{(i,j) neighbors} u - inf u = f_ij
%so u_ij=(f_ij+sup u + inf u)/2
err=1;
cnt=0;
while((err>tol)&(cnt<=itmax))
for j=2:N
for i=2:N
%we calculate the supremums and the infimums
umax(i,j)=max([v(i-1,j) v(i,j-1) u(i+1,j) u(i,j+1)]);
umin(i,j)=min([v(i-1,j) v(i,j-1) u(i+1,j) u(i,j+1)]);
u1(i,j)=umax(i,j)+umin(i,j);% so the inf-laplacian is 2u-u1
v(i,j)=(f(i,j)+u1(i,j))/2;
E(i,j)=v(i,j)-u(i,j);
end
end
err=norm(E,inf);
cnt=cnt+1;
u=v;
end
u=flipud(u’);
t=toc;
Bibliography
[A] P. Amore, 'Alternative representation for non-local operators and path integrals', arXiv:hep-th/0701032v3.
[ACJ] G. Aronsson, M. Crandall, P. Juutinen, 'A tour of the theory of absolutely minimizing functions', Bull. Amer. Math. Soc., 41 (2004), 439-505.
[App] D. Applebaum, Lévy Processes and Stochastic Calculus, Cambridge Studies in Advanced Mathematics, 2004.
[BEJ] E. Barron, L. Evans, R. Jensen, 'Infinity laplacian, Aronsson's equation and their generalizations', Trans. Amer. Math. Soc. 360 (2008), 77-101.
[BF] Y.N. Blagoveshcenskii and M.I. Freidlin, 'Some properties of diffusion processes depending on a parameter', Dokl. Akad. Nauk. SSSR, 138 (1961), 508-511.
[C] P. Constantin, 'An Eulerian-Lagrangian approach for incompressible fluids: local theory', J. Amer. Math. Soc. 14 (2001), no. 2, 263-278.
[CR] K.L. Chung and A.M. Rao, 'Feynman-Kac functional and the Schrödinger equation', Seminar on Stochastic Processes, Birkhäuser, 1981.
[ChGR] F. Charro, J. García and J.D. Rossi, 'A mixed problem for the infinity laplacian via tug-of-war games', Calc. Var. Partial Differential Equations 34 (2009), 307-320.
[CI] P. Constantin and G. Iyer, 'A stochastic lagrangian representation of the 3-dimensional incompressible Navier-Stokes equations', to appear in Communications on Pure and Applied Mathematics.
[DMT] R. Dalang, C. Mueller, R. Tribe, 'A Feynman-Kac-type formula for the deterministic and stochastic wave equations and other p.d.e.'s', arXiv:0710.2861v1 [math.PR].
[Du] R. Durrett, Stochastic Calculus: A Practical Introduction, CRC Press, 1996.
[Dy] E.B. Dynkin, Markov Processes, vol. I, Springer, 1965.
[E] A. Einstein, Investigations on the Theory of the Brownian Movement, edited by R. Fürth, Dover, 1956.
[Ev] L.C. Evans, Stochastic Differential Equations, http://math.berkeley.edu/evans.
[Ev2] L.C. Evans, Partial Differential Equations, AMS, 2008.
[Ev3] L.C. Evans, 'The 1-laplacian, the ∞−laplacian and differential games', Contemp. Math., 446 (2007), 245-254.
[Fe] R.P. Feynman, 'Space-time approach to non-relativistic quantum mechanics', Rev. of Mod. Phys., 20 (1948), 367-387.
[Fe2] R.P. Feynman, The Principle of Least Action in Quantum Mechanics, edited by Laurie M. Brown, World Scientific, 2005.
[FH] R.P. Feynman, A.R. Hibbs, Quantum Mechanics and Path Integrals, McGraw-Hill, 1965.
[F] M. Freidlin, Functional Integration and Partial Differential Equations, Princeton University Press, 1985.
[Fr] A. Friedman, Stochastic Differential Equations and Applications, vol. 1, Academic Press, 1975.
[GJ] J. Glimm, A. Jaffe, Quantum Physics, a Functional Integral Point of View, second edition, Springer-Verlag, 1987.
[GKMPPT] C. Graham, T. Kurtz, S. Meleard, P. Protter, M. Pulvirenti, D. Talay, Probabilistic Models for Nonlinear Partial Differential Equations, Lecture Notes in Mathematics 1627, Springer-Verlag, 1996.
[GGSBB] L. Gorelick, M. Galun, E. Sharon, R. Basri, A. Brandt, 'Shape Representation and Classification Using the Poisson Equation', IEEE Transactions on Pattern Analysis and Machine Intelligence, 28 (2006), no. 12, 1991-2004.
[GT] D. Gilbarg, N.S. Trudinger, Elliptic Partial Differential Equations of Second Order, Springer-Verlag, 1970.
[I] S. Itô, Diffusion Equations, American Mathematical Society, 1992.
[Iy] G. Iyer, 'A stochastic Lagrangian formulation of the incompressible Navier-Stokes and related transport equations', Ph.D. Thesis, University of Chicago, 2006.
[Iy2] G. Iyer, 'A stochastic lagrangian proof of global existence of the Navier-Stokes equations for flows with small Reynolds number', Ann. Inst. H. Poincaré Anal. Non Linéaire 26 (2009), 181-189.
[IN] G. Iyer, A. Novikov, 'The regularizing effects of resetting in a particle system for the Burgers equation', (Preprint).
[K] M. Kac, 'Wiener and integration in function spaces', Bull. Amer. Math. Soc. 72 (1966), 53-68.
[K2] M. Kac, 'On some connections between probability theory and differential and integral equations', Proceedings of the Second Berkeley Symposium on Mathematical Statistics and Probability (1950), 189-215.
[Kl] J.R. Klauder, 'The Feynman path integral: an historical slice', arXiv:quant-ph/0303034v1.
[Kle] H. Kleinert, Path Integrals in Quantum Mechanics, Statistics, Polymer Physics and Financial Markets, fourth edition, World Scientific, 2006.
[KS1] R. Kohn, S. Serfaty, 'A deterministic-control-based approach to motion by curvature', Comm. Pure Appl. Math., 59 (2006), 344-407.
[KS2] R. Kohn, S. Serfaty, 'Second order PDE's and deterministic games', (Preprint).
[Ku] H. Kunita, Stochastic Differential Equations and Stochastic Flows of Diffeomorphisms, Lecture Notes in Math., vol. 1097, Springer, 1984.
[Ku2] H. Kunita, Stochastic Flows and Stochastic Differential Equations, Cambridge Studies in Advanced Mathematics, 1997.
[LSU] O. Ladyzenskaya, V. Solonnikov, N. Uralceva, Linear and Quasilinear Equations of Parabolic Type, American Mathematical Society, 1968.
[McK] H.P. McKean, 'Application of brownian motion to the equation of Kolmogorov-Petrovskii-Piskunov', Comm. Pure Appl. Math., 28 (1975), 323-331.
[MP] P. Mörters and Y. Peres, Brownian Motion, available online at http://people.bath.ac.uk/maspm/book.pdf.
[NT] M. Nagasawa and H. Tanaka, 'The principle of variation for relativistic quantum particles', Séminaire de Probabilités, 35, 1-27.
[Ob] A. Oberman, 'A convergent difference scheme for the infinity laplacian: construction of absolutely minimizing Lipschitz extensions', Mathematics of Computation, 74 (2004), 1217-1230.
[O] B. Oksendal, Stochastic Differential Equations: An Introduction with Applications, fifth edition, Springer-Verlag, 2000.
[PS] Y. Peres, S. Sheffield, 'Tug-of-war with noise: a game-theoretic view of the p-laplacian', Duke Mathematical Journal, 145 (2008), 91-120.
[PSSW] Y. Peres, O. Schramm, S. Sheffield and D. Wilson, 'Tug-of-war and the infinity laplacian', to appear in Jour. Amer. Math. Soc.
[R] S. Ramasubramanian, 'Reflecting brownian motion in a Lipschitz domain and a conditional gauge theorem', The Indian Journal of Statistics, 63 (2001), 178-193.
[S] B. Simon, Functional Integration and Quantum Physics, Academic Press, 1979.
[Z] J. Zinn-Justin, Path Integrals in Quantum Mechanics, Oxford Graduate Texts, 2005.