Homework 7
MATH 8651 Fall 2015
Q1. Let $\mu_1, \mu_2, \mu_3$ be three probability measures on $(\Omega, \mathcal{F})$ such that $\mu_1 \ll \mu_2 \ll \mu_3$. Show that
$$\frac{d\mu_1}{d\mu_3} = \frac{d\mu_1}{d\mu_2} \times \frac{d\mu_2}{d\mu_3} \quad \text{a.s.}$$
Solution: Note that $\frac{d\mu_1}{d\mu_2} \times \frac{d\mu_2}{d\mu_3}$ is $\mathcal{F}$-measurable. Since $\mu_2 \ll \mu_3$,
$$\mu_2(A) = \int_A \frac{d\mu_2}{d\mu_3}\, d\mu_3$$
for any $A \in \mathcal{F}$.
Consequently, by Durrett exercise 1.6.8 (see Homework 2, Q9), for any measurable function $g \geq 0$,
$$\int g\, d\mu_2 = \int g\, \frac{d\mu_2}{d\mu_3}\, d\mu_3. \tag{1}$$
Now since $\mu_1 \ll \mu_2$, for $A \in \mathcal{F}$,
$$\mu_1(A) = \int_A \frac{d\mu_1}{d\mu_2}\, d\mu_2 = \int \mathbf{1}_A\, \frac{d\mu_1}{d\mu_2} \times \frac{d\mu_2}{d\mu_3}\, d\mu_3,$$
where we take $g = \mathbf{1}_A \frac{d\mu_1}{d\mu_2}$ in (1) to obtain the last equality above. The claim now follows from the (almost sure) uniqueness of Radon-Nikodym derivatives.
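Remark: on a finite probability space the Radon-Nikodym derivative $d\mu_i/d\mu_j$ is just the ratio of point masses, so the chain rule can be checked mechanically. Below is a minimal Python sketch (not part of the original solution); the three measures are arbitrary choices satisfying $\mu_1 \ll \mu_2 \ll \mu_3$.

```python
# Sanity check of the chain rule dmu1/dmu3 = (dmu1/dmu2) * (dmu2/dmu3)
# on a finite space, where Radon-Nikodym derivatives are ratios of
# point masses. The three measures below are arbitrary choices.
from fractions import Fraction as F

omega = range(4)
mu3 = {0: F(1, 4), 1: F(1, 4), 2: F(1, 4), 3: F(1, 4)}   # uniform
mu2 = {0: F(1, 2), 1: F(1, 4), 2: F(1, 8), 3: F(1, 8)}   # mu2 << mu3
mu1 = {0: F(1, 8), 1: F(3, 8), 2: F(3, 8), 3: F(1, 8)}   # mu1 << mu2

for x in omega:
    d12 = mu1[x] / mu2[x]   # dmu1/dmu2 at x
    d23 = mu2[x] / mu3[x]   # dmu2/dmu3 at x
    d13 = mu1[x] / mu3[x]   # dmu1/dmu3 at x
    assert d13 == d12 * d23
print("chain rule verified pointwise")
```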
Q2. Let $X$ and $Y$ be independent, identically distributed integrable random variables. Show that
$$E[X \mid X+Y] = \frac{X+Y}{2}.$$
Solution: We claim that $E[X \mid X+Y] = E[Y \mid X+Y]$ almost surely. To prove the claim, we need to show that
$$\int_{X+Y \in B} E[X \mid X+Y]\, dP = \int_{X+Y \in B} X\, dP = \int_{X+Y \in B} Y\, dP = \int_{X+Y \in B} E[Y \mid X+Y]\, dP$$
for any $B \in \mathcal{B}_{\mathbb{R}}$. (The outer two equalities hold by the definition of conditional expectation, since $\{X+Y \in B\} \in \sigma(X+Y)$; the middle one is what we must verify.)
Note that since $X$ and $Y$ are i.i.d., $(X, Y) \stackrel{d}{=} (Y, X)$, implying that $Ef(X, Y) = Ef(Y, X)$ for any measurable function $f : \mathbb{R}^2 \to \mathbb{R}$ such that $E|f(X, Y)| < \infty$.
Applying the above identity to the function $f(u, v) = u\mathbf{1}_{u+v \in B}$ and noting that $f(X, Y)$ is integrable since $|f(X, Y)| \leq |X|$, we obtain that
$$\int_{X+Y \in B} X\, dP = Ef(X, Y) = Ef(Y, X) = \int_{X+Y \in B} Y\, dP.$$
Hence, the claim follows. Now, almost surely,
$$E[X \mid X+Y] = E[Y \mid X+Y] = \tfrac{1}{2}\big(E[X \mid X+Y] + E[Y \mid X+Y]\big) = \tfrac{1}{2}E[X+Y \mid X+Y] = \tfrac{1}{2}(X+Y).$$
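Remark: the identity can be sanity-checked exactly on a finite space by computing $E[X \mid X+Y = s]$ directly from the joint pmf. A minimal Python sketch follows; the common pmf is an arbitrary choice.

```python
# Exact finite check that E[X | X+Y] = (X+Y)/2 for i.i.d. X, Y.
# The pmf below is an arbitrary choice; any common distribution works.
from fractions import Fraction as F
from collections import defaultdict

pmf = {1: F(1, 2), 2: F(1, 3), 3: F(1, 6)}   # common law of X and Y

num = defaultdict(F)   # accumulates E[X ; X+Y = s]
den = defaultdict(F)   # accumulates P(X+Y = s)
for x, px in pmf.items():
    for y, py in pmf.items():
        num[x + y] += x * px * py
        den[x + y] += px * py

for s in num:
    assert num[s] / den[s] == F(s, 2)   # E[X | X+Y = s] = s/2
print("E[X | X+Y = s] = s/2 for every attainable s")
```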
Q3. Let $X$ and $Y$ be integrable random variables and suppose that
$$E[X \mid Y] = Y \quad \text{and} \quad E[Y \mid X] = X \quad \text{a.s.}$$
Show that $X = Y$ a.s.
[Comment: When $X, Y \in L^2$, the above result is an easy consequence of Durrett exercise 5.1.11 by taking $\mathcal{G} = \sigma(X)$. But in the above problem, we only assume $X, Y \in L^1$. Hint: Work with quantities like $E[(X - Y)\mathbf{1}_{(X > a, Y \leq a)}] + E[(X - Y)\mathbf{1}_{(X \leq a, Y \leq a)}]$ for $a \in \mathbb{R}$.]
Solution: Since $\{Y \leq a\} \in \sigma(Y)$ and $E[X \mid Y] = Y$ a.s.,
$$E[Y \mathbf{1}_{Y \leq a}] = E[E[X \mid Y]\mathbf{1}_{Y \leq a}] = E[X \mathbf{1}_{Y \leq a}],$$
implying that $E[(Y - X)\mathbf{1}_{Y \leq a}] = 0$. Similarly, $E[(X - Y)\mathbf{1}_{X \leq a}] = 0$. Subtracting one from the other, we obtain
$$0 = E[(X - Y)(\mathbf{1}_{X \leq a} - \mathbf{1}_{Y \leq a})] = E[(X - Y)(\mathbf{1}_{X \leq a, Y > a} - \mathbf{1}_{X > a, Y \leq a})],$$
which we can write as
$$E[(X - Y)\mathbf{1}_{X > a, Y \leq a}] + E[(Y - X)\mathbf{1}_{X \leq a, Y > a}] = 0.$$
Note that both $(X - Y)\mathbf{1}_{X > a, Y \leq a} \geq 0$ and $(Y - X)\mathbf{1}_{X \leq a, Y > a} \geq 0$. So,
$$(X - Y)\mathbf{1}_{X > a, Y \leq a} = 0 \quad \text{and} \quad (Y - X)\mathbf{1}_{X \leq a, Y > a} = 0 \quad \text{a.s.}$$
Since $X - Y > 0$ strictly on $\{X > a, Y \leq a\}$ and $Y - X > 0$ strictly on $\{X \leq a, Y > a\}$, this implies that
$$P(X > a, Y \leq a) = 0 \quad \text{and} \quad P(X \leq a, Y > a) = 0.$$
Finally,
$$P(X > Y) = P(X > q, Y \leq q \text{ for some } q \in \mathbb{Q}) \leq \sum_{q \in \mathbb{Q}} P(X > q, Y \leq q) = 0.$$
Similarly, $P(Y > X) = 0$. Combining, we get $P(X \neq Y) = 0$.
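Remark: one of the two hypotheses alone does not suffice. The Python sketch below exhibits an arbitrarily chosen joint law with $E[X \mid Y] = Y$ but $E[Y \mid X] \neq X$ (and $X \neq Y$ with positive probability), so both conditions are genuinely needed.

```python
# Illustration for Q3: E[X|Y] = Y alone does not force X = Y.
# The joint law below is an arbitrary choice: Y is uniform on {-1, 1},
# and given Y, X equals 2Y or 0 with probability 1/2 each.
from fractions import Fraction as F
from collections import defaultdict

joint = {(2, 1): F(1, 4), (0, 1): F(1, 4),     # P(X = x, Y = y)
         (-2, -1): F(1, 4), (0, -1): F(1, 4)}

def cond_exp(target, given):
    """E[target | given = g] for each value g, by brute force."""
    num, den = defaultdict(F), defaultdict(F)
    for (x, y), p in joint.items():
        t = x if target == "X" else y
        g = x if given == "X" else y
        num[g] += t * p
        den[g] += p
    return {g: num[g] / den[g] for g in den}

cxy = cond_exp("X", "Y")
assert all(cxy[y] == y for y in cxy)   # E[X|Y] = Y holds
cyx = cond_exp("Y", "X")
assert any(cyx[x] != x for x in cyx)   # but E[Y|X] != X (e.g. E[Y|X=2] = 1)
print("E[X|Y] = Y, yet E[Y|X] != X and P(X != Y) > 0")
```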
Q4. Let $X$ be integrable and let $\mathcal{G}, \mathcal{H}$ be two sub-$\sigma$-algebras of $\mathcal{F}$. If $\sigma(X, \mathcal{G})$ is independent of $\mathcal{H}$, prove that
$$E[X \mid \sigma(\mathcal{G}, \mathcal{H})] = E[X \mid \mathcal{G}] \quad \text{a.s.}$$
[The notation $\sigma(X, \mathcal{G})$ denotes the $\sigma$-field generated by $\sigma(X) \cup \mathcal{G}$.]
Solution: We need to show that
$$\int_C E[X \mid \mathcal{G}]\, dP = \int_C X\, dP$$
for any $C \in \sigma(\mathcal{G}, \mathcal{H})$. For any $A \in \mathcal{G}$ and $B \in \mathcal{H}$,
$$\int_{A \cap B} E[X \mid \mathcal{G}]\, dP = \int E[X \mid \mathcal{G}]\mathbf{1}_A \mathbf{1}_B\, dP = \int E[X \mid \mathcal{G}]\mathbf{1}_A\, dP \times \int \mathbf{1}_B\, dP = \int_A X\, dP \times \int \mathbf{1}_B\, dP = \int X \mathbf{1}_A \mathbf{1}_B\, dP = \int_{A \cap B} X\, dP,$$
where the second equality follows from the fact that $E[X \mid \mathcal{G}]\mathbf{1}_A$ is $\mathcal{G}$-measurable and $\mathbf{1}_B$ is $\mathcal{H}$-measurable, and hence they are independent; the third equality is implied by the definition of $E[X \mid \mathcal{G}]$; and the fourth equality is a consequence of the fact that $X\mathbf{1}_A$ and $\mathbf{1}_B$ are independent, since $X\mathbf{1}_A$ is $\sigma(X, \mathcal{G})$-measurable while $\mathbf{1}_B$ is $\mathcal{H}$-measurable.
Define
$$\mathcal{C} = \left\{ C \in \sigma(\mathcal{G}, \mathcal{H}) : \int_C E[X \mid \mathcal{G}]\, dP = \int_C X\, dP \right\}.$$
We have shown that $\mathcal{C}$ contains the $\pi$-system $\{A \cap B : A \in \mathcal{G}, B \in \mathcal{H}\}$. It is straightforward to see that $\mathcal{C}$ is also a $\lambda$-system (one can, for example, apply the DCT to show that if $C_i \in \mathcal{C}$ and $C_i \uparrow$, then $\cup_i C_i \in \mathcal{C}$). So, by Dynkin's $\pi$-$\lambda$ theorem, $\mathcal{C}$ contains the $\sigma$-algebra generated by the $\pi$-system $\{A \cap B : A \in \mathcal{G}, B \in \mathcal{H}\} \supseteq \mathcal{G} \cup \mathcal{H}$. Hence, $\mathcal{C} = \sigma(\mathcal{G}, \mathcal{H})$.
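Remark: on a finite product space the result can be checked by brute force, representing each $\sigma$-algebra by its generating partition and conditioning blockwise. A minimal Python sketch follows; the two marginal laws, the function defining $X$, and the partition generating $\mathcal{G}$ are all arbitrary choices.

```python
# Finite sanity check for Q4. Coordinates w1, w2 are independent; X and
# G depend only on w1, H only on w2, so sigma(X, G) is independent of H.
from fractions import Fraction as F
from itertools import product

p1 = {0: F(1, 2), 1: F(1, 3), 2: F(1, 6)}   # law of w1
p2 = {0: F(1, 4), 1: F(3, 4)}               # law of w2, independent of w1
prob = {(a, b): p1[a] * p2[b] for a, b in product(p1, p2)}

X = {w: F(w[0] ** 2) for w in prob}         # X = f(w1), arbitrary f

# Represent a sigma-algebra by the partition of Omega generating it.
G = [[w for w in prob if w[0] in S] for S in ([0], [1, 2])]   # info on w1
H = [[w for w in prob if w[1] == b] for b in p2]              # info on w2
GH = [[w for w in g if w in set(h)] for g in G for h in H]    # refinement

def cond_exp(partition):
    """E[X | partition]: on each block, the prob-weighted average of X."""
    out = {}
    for block in partition:
        pb = sum(prob[w] for w in block)
        avg = sum(X[w] * prob[w] for w in block) / pb
        out.update({w: avg for w in block})
    return out

assert cond_exp(GH) == cond_exp(G)
print("E[X | sigma(G, H)] = E[X | G] pointwise")
```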
Q5. Durrett 5.1.6.
Solution: We consider the probability space $(\Omega := \{a, b, c\}, 2^\Omega, P)$ where $2^\Omega$ is the set of all subsets of $\Omega$ and $P$ is the uniform measure on $\{a, b, c\}$. Take $\mathcal{F}_1 = \sigma(\{a\})$ and $\mathcal{F}_2 = \sigma(\{b\})$ and the random variable $X = \mathbf{1}_{\{a\}}$. Note that $E[\cdot \mid \mathcal{F}_1]$ averages over $\{b, c\}$ and $E[\cdot \mid \mathcal{F}_2]$ averages over $\{a, c\}$. We compute
$$E[E[X \mid \mathcal{F}_1] \mid \mathcal{F}_2] = E[\mathbf{1}_{\{a\}} \mid \mathcal{F}_2] = \tfrac{1}{2}\mathbf{1}_{\{a\}} + \tfrac{1}{2}\mathbf{1}_{\{c\}}$$
and
$$E[E[X \mid \mathcal{F}_2] \mid \mathcal{F}_1] = E[\tfrac{1}{2}\mathbf{1}_{\{a\}} + \tfrac{1}{2}\mathbf{1}_{\{c\}} \mid \mathcal{F}_1] = \tfrac{1}{2}\mathbf{1}_{\{a\}} + \tfrac{1}{4}\mathbf{1}_{\{b\}} + \tfrac{1}{4}\mathbf{1}_{\{c\}}.$$
So, clearly $E[E[X \mid \mathcal{F}_1] \mid \mathcal{F}_2] \neq E[E[X \mid \mathcal{F}_2] \mid \mathcal{F}_1]$.
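Remark: since the counterexample lives on a three-point space, both iterated conditional expectations can be verified mechanically. The Python sketch below reproduces the two computations above exactly.

```python
# Exact verification of the Q5 counterexample on Omega = {a, b, c} with
# the uniform measure, F1 = sigma({a}), F2 = sigma({b}), X = 1_{a}.
from fractions import Fraction as F

prob = {w: F(1, 3) for w in ["a", "b", "c"]}
X = {"a": F(1), "b": F(0), "c": F(0)}

def cond_exp(f, partition):
    """E[f | partition] on a finite space: average f over each block."""
    out = {}
    for block in partition:
        avg = sum(f[w] * prob[w] for w in block) / sum(prob[w] for w in block)
        out.update({w: avg for w in block})
    return out

F1 = [["a"], ["b", "c"]]   # partition generating sigma({a})
F2 = [["b"], ["a", "c"]]   # partition generating sigma({b})

lhs = cond_exp(cond_exp(X, F1), F2)   # E[E[X|F1] | F2]
rhs = cond_exp(cond_exp(X, F2), F1)   # E[E[X|F2] | F1]
assert lhs == {"a": F(1, 2), "b": F(0), "c": F(1, 2)}
assert rhs == {"a": F(1, 2), "b": F(1, 4), "c": F(1, 4)}
assert lhs != rhs
print("the two iterated conditional expectations differ")
```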
Q6. Durrett 5.1.9.
Solution: Note that
$$E(X - E[X \mid \mathcal{F}])^2 = E[X^2] - 2E[X E[X \mid \mathcal{F}]] + E(E[X \mid \mathcal{F}]^2) = E[X^2] - E(E[X \mid \mathcal{F}]^2) = E\,\mathrm{Var}(X \mid \mathcal{F}),$$
where in the second equality we have used the fact that $E[X E[X \mid \mathcal{F}]] = E\,E(X E[X \mid \mathcal{F}] \mid \mathcal{F}) = E(E[X \mid \mathcal{F}]^2)$, and the third equality holds since $\mathrm{Var}(X \mid \mathcal{F}) = E[X^2 \mid \mathcal{F}] - E[X \mid \mathcal{F}]^2$.
Now
$$\begin{aligned}
\mathrm{Var}(X) = E(X - EX)^2 &= E(X - E(X \mid \mathcal{F}) + E(X \mid \mathcal{F}) - EX)^2 \\
&= E(X - E(X \mid \mathcal{F}))^2 + E(E(X \mid \mathcal{F}) - EX)^2 \\
&= E(X - E(X \mid \mathcal{F}))^2 + E(E(X \mid \mathcal{F}) - E\,E(X \mid \mathcal{F}))^2 \\
&= E\,\mathrm{Var}(X \mid \mathcal{F}) + \mathrm{Var}(E(X \mid \mathcal{F})),
\end{aligned}$$
where the cross term disappears because
$$\begin{aligned}
E[(X - E(X \mid \mathcal{F}))(E(X \mid \mathcal{F}) - EX)] &= E\,E[(X - E(X \mid \mathcal{F}))(E(X \mid \mathcal{F}) - EX) \mid \mathcal{F}] \\
&= E\big[(E(X \mid \mathcal{F}) - EX)\, E[(X - E(X \mid \mathcal{F})) \mid \mathcal{F}]\big] \\
&= E\big[(E(X \mid \mathcal{F}) - EX)(E(X \mid \mathcal{F}) - E(X \mid \mathcal{F}))\big] = 0.
\end{aligned}$$
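Remark: the decomposition $\mathrm{Var}(X) = E\,\mathrm{Var}(X \mid \mathcal{F}) + \mathrm{Var}(E(X \mid \mathcal{F}))$ can be verified exactly on a finite space. A minimal Python sketch; the distribution, the values of $X$, and the partition generating $\mathcal{F}$ are arbitrary choices.

```python
# Exact check of Var(X) = E[Var(X|F)] + Var(E[X|F]) on a finite space.
from fractions import Fraction as F

prob = {0: F(1, 6), 1: F(1, 3), 2: F(1, 4), 3: F(1, 4)}
X = {0: F(5), 1: F(-1), 2: F(2), 3: F(7)}
partition = [[0, 1], [2, 3]]   # partition generating the sub-sigma-algebra F

def E(f):
    """Expectation of a function f on the finite space."""
    return sum(f[w] * prob[w] for w in prob)

def var(f):
    """Variance of f: E[f^2] - (E[f])^2."""
    return E({w: f[w] ** 2 for w in prob}) - E(f) ** 2

# E[X|F] and Var(X|F) are constant on each block of the partition.
cond_mean, cond_var = {}, {}
for block in partition:
    pb = sum(prob[w] for w in block)
    m = sum(X[w] * prob[w] for w in block) / pb
    v = sum((X[w] - m) ** 2 * prob[w] for w in block) / pb
    for w in block:
        cond_mean[w], cond_var[w] = m, v

assert var(X) == E(cond_var) + var(cond_mean)
print("Var(X) = E[Var(X|F)] + Var(E[X|F]) verified exactly")
```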
Q7. Durrett 5.1.11.
Solution: Here $X = E[Y \mid \mathcal{G}]$ and $E[X^2] = E[Y^2] < \infty$. From Q6, we get
$$E[(Y - X)^2] = E(Y - E[Y \mid \mathcal{G}])^2 = E[Y^2] - E(E[Y \mid \mathcal{G}]^2) = E[Y^2] - E[X^2] = 0,$$
which implies that $(Y - X)^2 = 0$ a.s. or, equivalently, $X = Y$ a.s.