Lecture 12
1 Brownian motion: the Markov property
Let C := C([0, ∞), R) be the space of continuous functions mapping from [0, ∞) to R, in
which a Brownian motion (Bt )t≥0 almost surely takes its value. Let F be the Borel σ-algebra
on C generated by the topology of uniform convergence on compact sets. On the measurable
space (C, F), we can impose a one-parameter family of probability measures (Px )x∈R (with
expectations (Ex )x∈R ), which are the laws of a Brownian motion starting with B0 = x almost
surely.
A natural filtration on C is to let
Fto := σ(ωs : 0 ≤ s ≤ t),
where ωs : C → R is the coordinate map at time s. However, it turns out to be more convenient
to define a right-continuous filtration

Ft := ∩s>t Fso,    (1.1)
which allows an infinitesimal peek into the future. Note that Ft is right-continuous since
Ft = ∩s>t Fs . One important advantage of the right-continuity is that a time of the form
τ = inf{t ≥ 0 : Bt ∈ O} for an open set O ⊂ R is a stopping time w.r.t. Ft , but not w.r.t. Fto .
We shall see that for the Wiener measure (i.e., the measure for a standard Brownian motion),
Ft and Fto differ only by sets of measure zero. Instead of using the filtration Ft , quite often
people also use the so-called augmented filtration for Brownian motion, which is obtained by
completing Fto with sets of measure 0 under the Wiener measure.
We now show that Brownian motion satisfies the Markov property, which intuitively means
that conditional on (Bt )0≤t≤s , the law of (Bs+t )t≥0 is the same as that of a Brownian motion
starting at Bs at time 0. A slight complication arises because with respect to the filtration
Ft , the above statement should really be modified to say that we condition on (Bt )0≤t≤s+ .
However, the theorem below will imply that this makes no difference.
Theorem 1.1 [Markov property] If s ≥ 0 and f : C → R is bounded and measurable, then
for all x ∈ R,
Ex [f((Bs+t)t≥0) | Fs] = EBs [f((B̃t)t≥0)]    a.s.,    (1.2)
where B is a Brownian motion starting at x with Ex denoting its expectation, and given Bs ,
B̃ is an independent Brownian motion starting at Bs at time 0.
Proof. We sketch the proof. To verify (1.2), it suffices to show that
Ex [1A f((Bs+t)t≥0)] = Ex [1A EBs [f((B̃t)t≥0)]]    ∀ A ∈ Fs.    (1.3)
For ω ∈ C, we first fix f to be of the form
f((ωt)t≥0) = ∏_{i=1}^{n} fi(ωti),    (1.4)
where 0 < t1 < · · · < tn , and fi : R → R are bounded and measurable. Fix h ∈ (0, t1 ). Then
for A ∈ Fs+ho of the form
A = {ω ∈ C : ωsi ∈ Ai , 1 ≤ i ≤ m},
where 0 < s1 < · · · < sm ≤ s + h and Ai are Borel sets, we can easily verify that
Ex [1A f((Bs+t)t≥0)] = Ex [1A ∏_{i=1}^{n} fi(Bs+ti)] = Ex [1A EBs+h [∏_{i=1}^{n} fi(B̃ti−h)]].    (1.5)
By the π-λ theorem, (1.5) holds for all A ∈ Fs+ho ⊃ Fs. Given any A ∈ Fs, we can let h ↓ 0
in (1.5). By writing EBs [·] and EBs+h [·] explicitly as integration w.r.t. Gaussian densities, and
using the fact that a.s. limh↓0 Bs+h = Bs , we can then apply dominated convergence theorem
to deduce that
lim_{h↓0} EBs+h [∏_{i=1}^{n} fi(B̃ti−h)] = EBs [∏_{i=1}^{n} fi(B̃ti)].
Therefore (1.2) holds for all f of the form (1.4). We can then use the monotone class theorem
to deduce (1.2) for all bounded measurable f.
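The identity (1.2) can also be sanity-checked by simulation. Below is a minimal Monte Carlo sketch (pure Python; the helper `mc_markov` and all parameters are illustrative choices, not from the notes) comparing both sides of (1.2), averaged over A = C, for the test function f(x) = x²:

```python
import random, math

random.seed(0)

def mc_markov(s=1.0, t=0.5, n=200_000):
    """Monte Carlo comparison of E_0[f(B_{s+t})] with E_0[E_{B_s}[f(~B_t)]]
    for f(x) = x^2.  Left side: simulate B_{s+t} directly.  Right side:
    simulate B_s, then run a fresh Brownian motion of length t from B_s."""
    lhs = rhs = 0.0
    for _ in range(n):
        b_st = random.gauss(0.0, math.sqrt(s + t))       # B_{s+t} under P_0
        lhs += b_st ** 2
        b_s = random.gauss(0.0, math.sqrt(s))            # B_s under P_0
        b_tilde = b_s + random.gauss(0.0, math.sqrt(t))  # ~B_t started at B_s
        rhs += b_tilde ** 2
    return lhs / n, rhs / n

lhs, rhs = mc_markov()
```

With s = 1 and t = 0.5 both estimates should be close to E_0[B_{s+t}²] = s + t = 1.5, up to Monte Carlo error.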
One consequence of the Markov property of (Bt)t≥0 w.r.t. the filtration Ft is the following.
Theorem 1.2 For any bounded measurable function f : C → R and for all s ≥ 0 and x ∈ R,
Ex [f ((Bt )t≥0 ) | Fs ] = Ex [f ((Bt )t≥0 ) | Fso ].
(1.6)
Proof. By the monotone class theorem, it suffices to consider f of the form in (1.4), and we can
separate the indices ti into ti ≤ s and ti > s and write f ((Bt )t≥0 ) = g1 ((Bt )0≤t≤s )g2 ((Bt )t>s ).
Then
Ex [f |Fs] = g1((Bt)0≤t≤s) Ex [g2 |Fs].
By the Markov property (1.2), Ex [g2 |Fs ] = EBs [g2 ((B̃t )t≥0 )] ∈ Fso . Since Fso ⊂ Fs , this implies
that Ex [g2 |Fs ] = Ex [g2 |Fso ], and hence Ex [f |Fs ] = Ex [f |Fso ].
Setting f = 1A for A ∈ Fs in Theorem 1.2 shows that Fs and Fso are equivalent up to
sets of measure zero. Furthermore, if we set s = 0 and let f = 1A for A ∈ F0, then we obtain
1A = Px(A) a.s., so that Px(A) ∈ {0, 1} for all A ∈ F0. This is known as Blumenthal’s 0-1 law.
Theorem 1.3 [Blumenthal’s 0-1 law] For any x ∈ R, Px (A) ∈ {0, 1} for all A ∈ F0 .
The σ-field F0 is called the germ field, which is trivial by Blumenthal’s 0-1 law. We remark
that Blumenthal’s 0-1 law is valid for more general Markov processes under suitable continuity
assumptions. Here is an interesting application for Brownian motion.
Theorem 1.4 Let τ = inf{t > 0 : Bt > 0}. Then P0 (τ = 0) = 1.
Proof. Note that {τ = 0} ∈ F0 . Therefore by Blumenthal’s 0-1 law, P0 (τ = 0) ∈ {0, 1}. On
the other hand,
P0(τ = 0) = lim_{t↓0} P0(τ ≤ t) ≥ lim_{t↓0} P0(Bt > 0) = 1/2.
Therefore we must have P0 (τ = 0) = 1.
Theorem 1.4 shows that almost surely, there is an infinite sequence of times tn ↓ 0 with
Btn > 0, and by symmetry, there also exists an infinite sequence of times sn ↓ 0 with Bsn < 0.
Thus by the a.s. continuity of Brownian sample paths, (Bt )t≥0 crosses level 0 infinitely often
in any neighborhood of t = 0. This is consistent with what we know from the law of the
iterated logarithm for Brownian motion near t = 0.
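The sign changes near t = 0 can be observed numerically: on a fine grid, almost every simulated path takes both signs in (0, ε]. A rough sketch (the grid sizes and the helper `both_signs_near_zero` are my own illustrative choices):

```python
import random, math

random.seed(1)

def both_signs_near_zero(eps=1e-3, steps=1000, paths=2000):
    """Fraction of discretized Brownian paths started at 0 that take both a
    strictly positive and a strictly negative value in (0, eps].  By Theorem
    1.4 and symmetry, this should approach 1 as the grid is refined."""
    dt = eps / steps
    sd = math.sqrt(dt)
    hit_both = 0
    for _ in range(paths):
        b, pos, neg = 0.0, False, False
        for _ in range(steps):
            b += random.gauss(0.0, sd)
            pos = pos or b > 0
            neg = neg or b < 0
            if pos and neg:
                break
        hit_both += pos and neg
    return hit_both / paths

frac = both_signs_near_zero()
```

The fraction is below 1 only because the discrete grid can miss sign changes; refining the grid pushes it toward 1.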
2 Brownian motion: the strong Markov property
The strong Markov property for Brownian motion basically says that conditioned on (Bs )0≤s≤τ
for any stopping time τ , (Bτ +t )t≥0 is distributed as a Brownian motion starting from Bτ at
t = 0. First let us define stopping times.
Definition 2.1 [Stopping times] A random variable τ : C([0, ∞), R) → [0, ∞] is called a
stopping time if for all t ≥ 0, {ω ∈ C : τ (ω) ≤ t} ∈ Ft . The corresponding stopped σ-field Fτ
is defined as {A ∈ F : A ∩ {τ ≤ t} ∈ Ft ∀ t ≥ 0}.
Note that because of the right continuity of (Ft )t≥0 , {τ ≤ t} ∈ Ft for all t ≥ 0 if and only if
{τ < t} ∈ Ft for all t ≥ 0.
Example 2.2 [Examples of stopping times]
1 If O ⊂ R is an open set, then τO = inf{t ≥ 0 : Bt ∈ O} is a stopping time.
2 If K ⊂ R is a closed set, then τK = inf{t ≥ 0 : Bt ∈ K} is a stopping time.
3 If Tn is a sequence of stopping times and Tn ↑ T (resp. Tn ↓ T ), then T is a stopping
time.
4 If S and T are stopping times, then S ∨ T and S ∧ T are stopping times.
We leave the verification of these assertions as an exercise.
We now state formally the strong Markov property for Brownian motion.
Theorem 2.3 [Strong Markov property] Let fs (ω) : [0, ∞)×C → R be jointly measurable
in s ∈ [0, ∞) and ω ∈ C. If τ is a stopping time, then for all x ∈ R,
Ex [fτ ((Bτ+t)t≥0) | Fτ] = EBτ [fτ ((B̃t)t≥0)]    a.s.,    (2.7)
where B is a Brownian motion starting at x, and given Bτ , B̃ is an independent Brownian
motion starting at Bτ .
The strong Markov property can be proved by first proving the statement for stopping times
which take values in a countable discrete set, which follows from the Markov property. General
stopping times can then be approximated by discrete stopping times. The only thing that
we need to carry out this approximation is the continuity of the Brownian motion transition
kernel in both space and time, which is known as the Feller property for Markov processes.
See [1, Sec. 7.3] for a detailed proof.
We now apply the strong Markov property to deduce some interesting properties for Brownian motion.
Theorem 2.4 [Zeroes of Brownian motion] Let (Bt )t≥0 be a standard Brownian motion
with B0 = x ∈ R. Let Z = {t ≥ 0 : Bt = 0} be the zero set of B. Then almost surely, Z is
a perfect set, i.e., Z is closed and every point in Z is an accumulation point. Furthermore,
almost surely Z has Lebesgue measure 0.
Proof. Since Bt is almost surely continuous in t, Z is a closed set. If with positive probability,
Z contains an isolated point tω, so that Bs ≠ 0 for all s ∈ (tω − εω, tω + εω) for some εω > 0
depending on B, then we can find a, δ ∈ Q with a > δ such that with positive probability, B
has a unique zero tω ∈ (a − δ, a + δ). Let τ := inf{t ≥ a − δ : Bt = 0}, which is a stopping
time. Since Px (Ba−δ = 0) = 0, we have Px (τ = a − δ) = 0. Therefore by assumption, with
positive probability τ = tω , and it is an isolated zero of B in (a − δ, a + δ). However, this
is not possible by the strong Markov property, which implies that conditional on (Bs )0≤s≤τ ,
(Bτ +s )s≥0 is distributed as a Brownian motion B̃ starting at 0, and by Blumenthal’s 0-1 law,
t = 0 is a.s. an accumulation point of the zeros of (B̃t )t≥0 .
The fact that Z almost surely has Lebesgue measure zero follows from Fubini’s theorem,
since
E[∫_0^T 1{Bt=0} dt] = ∫_0^T P(Bt = 0) dt = 0    for any T > 0.
Theorem 2.5 [Reflection principle] Let (Bt )t≥0 be a standard Brownian motion with B0 =
0. Let Mt := sup0≤s≤t Bs be the running maximum of Bt . Then for any a > 0 and t > 0,
P(Mt ≥ a) = 2P(Bt ≥ a) = P(|Bt | ≥ a).
(2.8)
Note that with B0 = 0 and a > 0, {Mt ≥ a} = {τa ≤ t}, where τa := inf{t ≥ 0 : Bt = a}.
Proof. Since τa is a stopping time, so is τa ∧ t, and hence by the strong Markov property,
E[1{Bt≥a} | Ft∧τa] = 1{τa≤t} Pa(B̃t−τa ≥ a) = (1/2) 1{τa≤t},
where B̃ is an independent Brownian motion starting from a. Taking expectation on both
sides then yields (2.8).
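The reflection principle (2.8) lends itself to a direct Monte Carlo check. The sketch below (illustrative parameters, pure Python) compares the two sides for t = 1 and a = 1; the discretized maximum slightly undershoots the true running maximum, so exact agreement is not expected:

```python
import random, math

random.seed(2)

def reflection_check(t=1.0, a=1.0, steps=400, paths=10_000):
    """Monte Carlo comparison of P(M_t >= a) with 2 P(B_t >= a) for a
    discretized standard Brownian motion started at 0."""
    dt = t / steps
    sd = math.sqrt(dt)
    hit_max = hit_end = 0
    for _ in range(paths):
        b = m = 0.0
        for _ in range(steps):
            b += random.gauss(0.0, sd)
            if b > m:
                m = b
        hit_max += m >= a
        hit_end += b >= a
    return hit_max / paths, 2 * hit_end / paths

p_max, two_p_end = reflection_check()
```

Both quantities approximate P(|B₁| ≥ 1) ≈ 0.317, with the first biased slightly downward by the discretization of the maximum.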
3 Brownian motion as a martingale
Most of the results for discrete time martingales, such as Doob’s inequality, the upcrossing
inequality, the martingale convergence theorems, and the optional stopping theorem, have
analogues for continuous time martingales, provided we assume that the continuous time
martingale has sample paths which are right continuous with left limits, known as càdlàg
paths after the French abbreviation. In particular, the optional stopping theorem states the
following.
Theorem 3.1 [Optional stopping theorem] Let (Xt)t≥0 be a continuous time martingale
with càdlàg sample paths adapted to a right-continuous filtration (Ft)t≥0, i.e., for any 0 ≤
s ≤ t < ∞, E[Xt |Fs] = Xs a.s. Then for any two stopping times 0 ≤ σ ≤ τ, if (Xt∧τ)t≥0 is
uniformly integrable, then

E[Xτ | Fσ] = Xσ    a.s.    (3.9)

In particular, if τ is a bounded stopping time, then (Xt∧τ)t≥0 is uniformly integrable.
We collect here some of the most common functionals of Brownian motion which are martingales.
Theorem 3.2 [Martingales for Brownian motion] The following functionals of (Bt )t≥0
are martingales w.r.t. (Ft)t≥0: Bt, Bt² − t, Bt³ − 3tBt, e^{θBt − θ²t/2}.
Remark. Theorem 3.2 can be easily verified using the independent Gaussian increments of
(Bt)t≥0. In fact, for any function f(t, x) with ∂f/∂t + (1/2) ∂²f/∂x² ≡ 0, f(t ∧ τL, Bt∧τL)
is a martingale for any L > 0, where τL := inf{s ≥ 0 : |Bs| ≥ L}. This includes the case where
f(t, x) = f(x) is a harmonic function. The statement can be proved by Itô’s formula.
Remark. Applying the optional stopping theorem to the martingale Bt , we can derive the
exit probabilities of Bt from the end points of an interval [a, b]. Using Bt2 − t, we can compute
the expected exit time of Bt from [a, b].
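These two standard optional-stopping computations give Px(hit b before a) = (x − a)/(b − a) and Ex[exit time of [a, b]] = (x − a)(b − x) for x ∈ [a, b]. A simulation sketch of both (the helper name and parameters are mine, not from the notes):

```python
import random, math

random.seed(3)

def exit_stats(a=-1.0, b=1.0, x=0.0, dt=1e-3, paths=3000):
    """Monte Carlo check of the optional-stopping computations: the martingale
    B_t gives P_x(hit b before a) = (x - a)/(b - a), and B_t^2 - t gives
    E_x[exit time of [a, b]] = (x - a)(b - x)."""
    sd = math.sqrt(dt)
    hit_b = total_time = 0.0
    for _ in range(paths):
        pos, t = x, 0.0
        while a < pos < b:
            pos += random.gauss(0.0, sd)
            t += dt
        hit_b += pos >= b
        total_time += t
    return hit_b / paths, total_time / paths

p_hit_b, mean_exit = exit_stats()
```

For a = −1, b = 1, x = 0 the exact values are P = 1/2 and E[exit time] = 1; the time estimate carries a small upward discretization bias from boundary overshoot.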
4 Infinitesimal generator of a Brownian motion
For a discrete time Markov chain X with state space S and transition matrix Π, the operator
Π − I plays an important role. For any bounded measurable f : S → R, f(Xn) − f(X0) −
∑_{i=0}^{n−1} (Π − I)f(Xi) is in fact a martingale. Furthermore, if (Π − I)f = 0, in which
case we call f a harmonic function for the Markov chain with transition matrix Π, then f(Xn)
is itself a martingale.
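This discrete-time martingale identity is easy to test numerically. Below is an illustrative sketch with an arbitrary 3-state chain (the matrix `Pi` and the function `f` are made up for the example):

```python
import random

random.seed(4)

# An arbitrary 3-state Markov chain and a test function f, for illustration.
Pi = [[0.5, 0.3, 0.2],
      [0.1, 0.6, 0.3],
      [0.4, 0.4, 0.2]]
f = [1.0, -2.0, 0.5]

def step(x):
    """Sample one transition of the chain from state x."""
    u, acc = random.random(), 0.0
    for y, p in enumerate(Pi[x]):
        acc += p
        if u < acc:
            return y
    return len(Pi) - 1

def M_n(x0=0, n=5):
    """One sample of M_n = f(X_n) - f(X_0) - sum_{i=0}^{n-1} ((Pi - I) f)(X_i)."""
    x, comp = x0, 0.0
    for _ in range(n):
        comp += sum(Pi[x][y] * f[y] for y in range(3)) - f[x]  # ((Pi - I) f)(x)
        x = step(x)
    return f[x] - f[x0] - comp

mean_M = sum(M_n() for _ in range(50_000)) / 50_000
```

Since M is a martingale with M₀ = 0, the sample mean of M₅ should be close to 0.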
We now introduce the continuous time analogue of Π − I for Brownian motion, called the
infinitesimal generator of Brownian motion.
If (Xt )t≥0 is a continuous time Markov process with state space S which is a complete
separable metric space, then the semigroup associated with X is defined as the family of
operators (St )t≥0 acting on the class of bounded continuous functions f ∈ Cb (S, R) with
(St f )(x) = Ex [f (Xt )] for all x ∈ S. By the Markov property, it is easy to check that St Ss f =
Ss St f = St+s f , which together with the lack of inverse accounts for the name semigroup. The
infinitesimal generator is then defined via
Lf = lim_{h↓0} (Sh f − f)/h.    (4.10)
The class of f ∈ Cb (S, R) for which the limit Lf ∈ Cb (S, R) exists and the convergence takes
place in Cb (S, R) with sup norm is called the domain of the generator L. The generator L
together with its domain uniquely determine the Markov process. Although one could also
consider the adjoint semigroup (St∗ )t≥0 which acts on probability measures on S, functions
are easier to handle because measures can in general be singular.
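The semigroup identity St Ss f = St+s f can be checked numerically for Brownian motion, whose semigroup is (St f)(x) = E[f(x + Bt)]. A small quadrature-based sketch (the function `S`, the test function, and the grid parameters are illustrative choices):

```python
import math

def S(t, f, half=8.0, n=801):
    """Heat semigroup of Brownian motion: (S_t f)(x) = E[f(x + B_t)], computed
    by numerically integrating f(x + sqrt(t) z) against the N(0,1) density of z
    on a truncated grid [-half, half]."""
    if t == 0:
        return f
    step = 2 * half / (n - 1)
    sd = math.sqrt(t)
    zs = [-half + i * step for i in range(n)]
    ws = [math.exp(-z * z / 2) / math.sqrt(2 * math.pi) * step for z in zs]
    def Stf(x):
        return sum(w * f(x + sd * z) for z, w in zip(zs, ws))
    return Stf

f = lambda x: 1.0 / (1.0 + x * x)      # a bounded continuous test function
two_step = S(0.5, S(1.0, f))(0.3)      # (S_{0.5} S_1 f)(0.3)
one_step = S(1.5, f)(0.3)              # (S_{1.5} f)(0.3)
```

Up to quadrature error, the two evaluations agree, illustrating S_{0.5} S_1 = S_{1.5}.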
Let us check that ½∆, where ∆ denotes the Laplacian, is the infinitesimal generator for
Brownian motion, and that the class of twice continuously differentiable functions with compact
support, Cc², belongs to the domain of ½∆. Let f ∈ Cc². Then by Taylor expansion,
(Sh f)(x) − f(x) = E[f(x + Bh) − f(x)] = E[ f′(x)Bh + ½ f″(x)Bh² + ∫_x^{x+Bh} ∫_x^y (f″(z) − f″(x)) dz dy ].
In particular,

|(Sh f)(x) − f(x) − ½ f″(x)h| ≤ E[ ∫_x^{x+Bh} ∫_x^y |f″(z) − f″(x)| dz dy ] ≤ E[ φ(|Bh|) Bh² ],
where φ(r) = supx∈R,|y−x|≤r |f 00 (y)−f 00 (x)| is bounded with φ(r) ↓ 0 as r ↓ 0 by the assumption
that f ∈ Cc2 . Since
lim_{h↓0} E[φ(|Bh|)Bh²] / h = lim_{h↓0} E[φ(√h |B1|) B1²] = 0,
it follows that h⁻¹(Sh f − f) converges in sup-norm to ½ f″ as h ↓ 0. The same argument
applies to higher-dimensional Brownian motions. Itô’s formula is also based on such Taylor
expansions.
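The convergence h⁻¹(Sh f − f) → ½ f″ can be observed numerically. The sketch below uses f(x) = e^{−x²} (smooth and bounded with rapidly decaying derivatives, standing in for a Cc² function, an assumption of the example) and a quadrature approximation of Sh:

```python
import math

def S_h(f, h, x, half=8.0, n=1601):
    """(S_h f)(x) = E[f(x + B_h)], computed by integrating f(x + sqrt(h) z)
    against the N(0,1) density of z on a truncated grid."""
    step = 2 * half / (n - 1)
    sd = math.sqrt(h)
    total = 0.0
    for i in range(n):
        z = -half + i * step
        w = math.exp(-z * z / 2) / math.sqrt(2 * math.pi) * step
        total += w * f(x + sd * z)
    return total

f = lambda x: math.exp(-x * x)                      # smooth bounded test function
f2 = lambda x: (4 * x * x - 2) * math.exp(-x * x)   # its exact second derivative

x, h = 0.5, 1e-3
approx = (S_h(f, h, x) - f(x)) / h   # generator quotient at scale h
exact = 0.5 * f2(x)                  # (1/2) f''(x), the generator applied to f
```

The difference between `approx` and `exact` is of order h, matching the Taylor-expansion argument above.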
Similar to the discrete time setting, if f is bounded and ∆f = 0, then f (Bt ) is a martingale.
5 Transience and recurrence of a Brownian motion in Rd
One-dimensional Brownian motion is clearly recurrent in the sense that it visits every point
in R infinitely often, as can be seen from the law of the iterated logarithm. A d-dimensional
Brownian motion is simply an Rd-valued process Bt := (Bt(1), . . . , Bt(d)), where the coordinates
are independent one-dimensional Brownian motions. It turns out that in d ≥ 3, Bt is transient
in the sense that |Bt | → ∞ a.s.; and in d = 2, Bt a.s. returns to each open set infinitely often
as t ↑ ∞, but does not return to any fixed deterministic point. The proof rests on finding a
suitable martingale.
Recall that the fundamental solution for the Laplacian ∆ is

φ(x) = log |x| for d = 2,    and    φ(x) = 1/|x|^{d−2} for d ≥ 3.
In particular, ∆φ(x) = 0 for all x ≠ 0. Then for any 0 < r < R < ∞, φ(Bt∧τr∧τR) is actually
a bounded positive martingale, where τa = inf{s ≥ 0 : |Bs | = a}. It is not difficult to show
that if B0 = x with |x| ∈ (r, R), then τr ∧ τR < ∞ almost surely. Therefore by the optional
stopping theorem,
φ(x) = Px (τr < τR )φ(r) + Px (τR < τr )φ(R),
which implies that

Px(τr < τR) = (φ(R) − φ(x))/(φ(R) − φ(r)) = (log R − log |x|)/(log R − log r) for d = 2, and
Px(τr < τR) = (R^{2−d} − |x|^{2−d})/(R^{2−d} − r^{2−d}) for d ≥ 3.    (5.11)
For d = 2, as we let R ↑ ∞, we find that Px (τr < ∞) = 1. Therefore by the strong Markov
property, Bt must visit the ball {z ∈ R2 : |z| ≤ r} infinitely often a.s. as t ↑ ∞. Since r > 0
can be arbitrary, Bt must visit each open set in R2 infinitely often a.s. If we fix R and let
r ↓ 0, then we find that Px (τR < τ0 ) = 1. Since R can be arbitrary, this implies that Bt
almost surely never visits the origin, or any other deterministic point.
For d ≥ 3, fixing r and letting R ↑ ∞ gives Px(τr < ∞) = (r/|x|)^{d−2} < 1. Therefore Bt is
transient. It is easy to see that there is zero probability that Bt stays confined in a finite ball
for all time. Therefore for R > |B0|, τR < ∞ almost surely. By the strong Markov property,
PBτR (τr < ∞) ≤ (r/R)^{d−2}, which tends to 0 as R ↑ ∞. Therefore, almost surely, |Bt| > r for all t
sufficiently large. Since r > 0 is arbitrary, this implies that |Bt| → ∞ as t → ∞.
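The hitting probability in (5.11) can be checked by simulation in d = 3. In the sketch below (illustrative parameters), a Brownian motion started at |x| = 1.5 hits the sphere of radius r = 1 before radius R = 3 with probability (R⁻¹ − |x|⁻¹)/(R⁻¹ − r⁻¹) = 1/2:

```python
import random, math

random.seed(5)

def p_hit_inner(x0=(1.5, 0.0, 0.0), r=1.0, R=3.0, dt=1e-3, paths=2000):
    """Monte Carlo estimate of P_x(tau_r < tau_R) for 3-dimensional Brownian
    motion started at x, with r < |x| < R."""
    sd = math.sqrt(dt)
    hits = 0
    for _ in range(paths):
        x, y, z = x0
        while True:
            x += random.gauss(0.0, sd)
            y += random.gauss(0.0, sd)
            z += random.gauss(0.0, sd)
            rho = math.sqrt(x * x + y * y + z * z)
            if rho <= r:       # hit the inner sphere first
                hits += 1
                break
            if rho >= R:       # escaped to the outer sphere first
                break
    return hits / paths

p = p_hit_inner()
# Exact value from (5.11) with d = 3: (1/3 - 1/1.5)/(1/3 - 1/1) = 1/2.
```

The estimate carries a small discretization bias at the spherical boundaries in addition to Monte Carlo error.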
In fact, |Bt| for a d-dimensional Brownian motion is a process on [0, ∞) called the d-dimensional
Bessel process, which can be constructed from a 1-dimensional Brownian motion by adding a
location-dependent drift term (d − 1)/(2x) dt, which pushes the process away from the origin.
References
[1] R. Durrett, Probability: Theory and Examples, 2nd edition, Duxbury Press, Belmont,
California, 1996.