MATH 7550-01
INTRODUCTION TO PROBABILITY
FALL 2011
Problems.
1 Can you produce an example of a probability space and a sequence of events A_n in it, such that lim_{n→∞} P(A_n) = 0, ∑_{n=1}^{∞} P(A_n) = ∞, P{infinitely many of the A_n occur} = 1?
2 Can you produce an example of a probability space and a sequence of events A_n in it, such that lim_{n→∞} P(A_n) = 0, ∑_{n=1}^{∞} P(A_n) = ∞, P{infinitely many of the A_n occur} ∈ (0, 1)?
3 Can you produce an example of a probability space and a sequence of events A_n in it, such that lim_{n→∞} P(A_n) = 0, ∑_{n=1}^{∞} P(A_n) = ∞, P{infinitely many of the A_n occur} = 0?
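Problems 1-3 all start from the same numerical pattern: individual probabilities going to 0 while their sum diverges. As a concrete sandbox (my own choice of example, not part of the problem statements), take Ω = [0, 1] with Lebesgue measure and A_n = {ω : ω < 1/n}; a short sketch:

```python
from fractions import Fraction

# Sample space [0, 1] with Lebesgue measure; events A_n = {omega : omega < 1/n}.
# Then P(A_n) = 1/n -> 0, while the partial sums of P(A_n) grow without bound
# (harmonic series).  {infinitely many A_n occur} is the decreasing
# intersection of the A_n, and the intersection of A_1, ..., A_N is [0, 1/N).
def p(n):
    return Fraction(1, n)

s = Fraction(0)
partial_sums = []
for n in range(1, 101):
    s += p(n)
    partial_sums.append(s)

print(float(p(100)), float(partial_sums[-1]))
```

Running this shows P(A_100) already small while the partial sum keeps climbing; which of the three problems this particular family answers is left to you.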
4 Let ℛ² be the class of all rectangles {(x, y) : a < x ≤ b, c < y ≤ d}, finite or infinite, in the plane ℝ² (if, say, b = ∞, the inequality x ≤ ∞ means the same as x < ∞: no real number x is equal to ∞); let 𝒪 be the class of all open sets in ℝ².
Prove that σ(ℛ²) = σ(𝒪).
5 Let X be a Borel set in ℝ^d. Prove that the class of all Borel subsets of X is a σ-algebra in X.
Prove that this σ-algebra is the same as that generated by all subsets of X that are open in X.
6 Prove that if ξ and η are two random variables (on the same sample space, and taking values in ℝ¹), then ξ + η is also a random variable.
The deadline for Problems 1-6 is September 16.
7 Produce an example of measures μ and ν on a space (X, 𝒳) such that
μ(C) = ∫_C f_1(x) ν(dx) = ∫_C f_2(x) ν(dx),
but ν{x : f_1(x) ≠ f_2(x)} ≠ 0. Show at which point the "proof" of Theorem 7.1 fails.
The deadline for Problem 7 is September 19.
8 Prove or disprove: Let F(t), −∞ < t < ∞, be a right-continuous nondecreasing function, F(−∞) = 0, F(+∞) = 1. If F has a finite number of jumps at the points x_1 < x_2 < ... < x_m, and is constant on the intervals (−∞, x_1), (x_1, x_2), ..., (x_{m−1}, x_m), (x_m, ∞), then this function is a distribution function of a discrete random variable.
9 Prove or disprove the statement of the previous problem with the finite sequence of jump points replaced by a countable one that is infinite on one side: x_1 < x_2 < ..., or ... < x_{−1} < x_0; or on both: ... < x_{−1} < x_0 < x_1 < x_2 < ..., assuming that lim_{n→−∞} x_n = −∞ if the sequence is infinite to the left, and that lim_{n→∞} x_n = ∞ if it is infinite to the right.
10 Prove or disprove: Let F(x), −∞ < x < ∞, be a right-continuous nondecreasing function, F(−∞) = 0, F(+∞) = 1. If
∑_x [F(x) − F(x−)] = 1,
then the function F is a distribution function of a discrete random variable.
(The sum has, in appearance, an uncountable number of summands, but in fact all
of them except a countable number are equal to 0, because a nondecreasing function can
have only countably many discontinuities.)
11 Prove or disprove: Let F(t), −∞ < t < ∞, be a right-continuous nondecreasing function, F(−∞) = 0, F(+∞) = 1. If F has jumps at points that form a dense set in ℝ¹, then the function F is a distribution function of a discrete random variable.
12 Prove or disprove: Let F(x), −∞ < x < ∞, be a nondecreasing function, F(−∞) = 0, F(+∞) = 1. If F is continuous on ℝ¹, it is a distribution function corresponding to a density.
13 Prove or disprove: Let F(x), −∞ < x < ∞, be a nondecreasing function, F(−∞) = 0, F(+∞) = 1. If F is continuous on ℝ¹ and piecewise continuously differentiable (that is, ℝ¹ is divided into pieces by points (... <) x_k < x_{k+1} < ... < x_m (< ...) so that F is continuously differentiable on the open intervals between these points, to the left of the smallest x_k if it exists, and to the right of the greatest of them), it is a distribution function corresponding to a density.
14 Prove or disprove: Let F(x), −∞ < x < ∞, be a right-continuous nondecreasing function, F(−∞) = 0, F(+∞) = 1. If
∫_{−∞}^{∞} F′(x) dx = 1,
then the function F is a distribution function corresponding to a density.
15 Suppose we take a random permutation of the numbers 1, 2, ..., n; that is, the sample space Ω consists of all n! sequences ω = (x_1, x_2, ..., x_n) such that {x_1, x_2, ..., x_n} = {1, 2, ..., n} (of course, ℱ = 𝒫(Ω)); and the probabilities are taken so that all different orders of the natural numbers from 1 to n are equally probable:
P{(x_1, x_2, ..., x_n)} = 1/n!.
The random variable ξ is equal to the number of numbers i, 1 ≤ i ≤ n, standing in their own place: for ω = (x_1, x_2, ..., x_n),
ξ(ω) = ξ(x_1, x_2, ..., x_n) = #{i : x_i = i}.
It was proved in the lecture that E ξ = 1, E ξ² = 2.
Is E ξ³ = 3?
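For Problem 15, the first moments can be checked exactly by brute-force enumeration for a small n (a sketch of mine; n = 6 is an arbitrary choice large enough not to distort the third moment):

```python
import math
from itertools import permutations
from fractions import Fraction

def fixed_point_moment(n, k):
    # Exact E[xi^k], where xi = number of fixed points of a uniformly random
    # permutation of {1, ..., n}, by enumerating all n! permutations.
    total = sum(sum(1 for i, x in enumerate(p, start=1) if x == i) ** k
                for p in permutations(range(1, n + 1)))
    return Fraction(total, math.factorial(n))

# Matches the lecture: E xi = 1 and E xi^2 = 2 (here for n = 6);
# the third moment is printed so you can compare it with your proof.
print(fixed_point_moment(6, 1), fixed_point_moment(6, 2), fixed_point_moment(6, 3))
```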
The deadline for Problems 8-15 is Sep. 30.
16 Let ξ_1, ξ_2 be two random variables taking values in measurable spaces (X_i, 𝒳_i), i = 1, 2. Prove that ξ = (ξ_1, ξ_2) is a random vector with values in the product space (X_1 × X_2, 𝒳_1 × 𝒳_2) (that is, that the function ξ : ω ↦ (ξ_1(ω), ξ_2(ω)) is an (ℱ, 𝒳_1 × 𝒳_2)-measurable function from Ω to X_1 × X_2).
17 Prove that ℬ¹ × ℬ¹ = ℬ².
18 Prove or disprove that for every sequence A_1, A_2, ..., A_n, ... of independent events
P(∩_{n=1}^{∞} A_n) = ∏_{n=1}^{∞} P(A_n).
19 Let f(t, ω) be a (ℬ_{[0,1]} × ℱ)-measurable function on [0, 1] × Ω, taking values 0, 1. Is the following true for all such functions:
E ∫_{[0,1]} f(t, ·) #(dt) = ∫_{[0,1]} E f(t, ·) #(dt),
i.e., E ∑_{t∈[0,1]} f(t, ·) = ∑_{t∈[0,1]} E f(t, ·)? (The counting measure # on [0, 1] is not σ-finite, so the positive answer does not follow from Fubini's Theorem; but still it could be true.)
20 Prove that if real-valued random variables ξ_1, ξ_2 have an (absolutely) continuous joint distribution with density f_{ξ_1, ξ_2}(x_1, x_2), then each of them separately has a continuous one-dimensional distribution, with densities
f_{ξ_1}(x_1) = ∫_{−∞}^{∞} f_{ξ_1, ξ_2}(x_1, x_2) dx_2,   f_{ξ_2}(x_2) = ∫_{−∞}^{∞} f_{ξ_1, ξ_2}(x_1, x_2) dx_1.
21 Let ξ_1, ξ_2, ..., ξ_n, ... be an infinite sequence of real-valued random variables. Show that the event
{lim sup_{n→∞} ξ_n ≤ x}
belongs to the tail σ-algebra ℱ_{≥∞} for every real x (lim sup denotes the upper limit; we know that an upper limit, finite or infinite, always exists).
22 Using the fact that a finite lim_{n→∞} x_n exists if and only if lim_{m→∞, n→∞} (x_n − x_m) = 0, prove that the event
{there exists a finite limit lim_{n→∞} ξ_n} = {ω : there exists a finite limit lim_{n→∞} ξ_n(ω)}
belongs to the tail σ-algebra.
23 Prove or disprove: the event
{lim_{n→∞} ξ_n = −∞} ∈ ℱ_{≥∞}.
24 Prove that the random variable lim sup_{n→∞} ξ_n (taking values in the extended real line [−∞, ∞]) is measurable with respect to the tail σ-algebra ℱ_{≥∞}.
25 Prove or disprove: the event
{lim_{n→∞} (ξ_1 + ... + ξ_n)/n = 0}
belongs to the tail σ-algebra.
26 Prove: the event
{the series ∑_{n=1}^{∞} ξ_n converges}
belongs to ℱ_{≥∞}.
27 Prove or disprove, for nonnegative ξ_n: the random variable ∑_{n=1}^{∞} ξ_n is measurable with respect to the tail σ-algebra.
The deadline for Problems 16-27 is October 7.
28 Suppose ξ_t, 0 ≤ t < ∞, are independent random variables with the same probability density p(x). Prove that for no t_0 ∈ [0, ∞) does ξ_t converge in probability to ξ_{t_0} as t → t_0.
29 Let ξ and η be two random variables taking nonnegative integer values. Prove or disprove: If their generating functions φ_ξ(s), φ_η(s) coincide for s ∈ [−1, 1], then their distributions μ_ξ and μ_η coincide.
30 Give an example of a random variable whose moment generating function is equal to ∞ at all points except 0.
31 Let a random variable ξ have the normal distribution with parameters (0, σ). Prove that all odd-numbered moments m_k = E ξ^k about zero are equal to 0, and for k = 2l we have:
m_{2l} = E ξ^{2l} = (2l − 1)!! · σ^l,
where (2l − 1)!! denotes the product of all odd numbers from 1 to 2l − 1.
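For Problem 31, a rough numerical cross-check (my own sketch; it treats σ as the variance, which is how the parameters (0, σ) are used here) compares a quadrature estimate of E ξ^{2l} with (2l − 1)!! · σ^l:

```python
import math

def double_factorial_odd(l):
    # (2l - 1)!! = 1 * 3 * ... * (2l - 1)
    result = 1
    for j in range(1, 2 * l, 2):
        result *= j
    return result

def normal_even_moment(l, sigma=1.0, half_width=12.0, steps=400_000):
    # Midpoint-rule approximation of E xi^{2l} for xi normal with mean 0 and
    # variance sigma (matching the problem's parameters (0, sigma)).
    h = 2.0 * half_width / steps
    s = 0.0
    for i in range(steps):
        x = -half_width + (i + 0.5) * h
        s += x ** (2 * l) * math.exp(-x * x / (2.0 * sigma))
    return s * h / math.sqrt(2.0 * math.pi * sigma)

for l in (1, 2, 3):
    print(l, normal_even_moment(l), double_factorial_odd(l))
```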
The deadline for Problems 28-31 is October 17.
32 Prove that the normal distribution with parameters (a, B), where B is a nonsingular matrix, has the density
f(x) = const · exp{−(1/2) ∑_{i,j} c_{ij} (x_i − a_i)(x_j − a_j)},
where the matrix C = (c_{ij}) = B^{−1}.
What is the constant in this formula equal to?
33 Let the two-dimensional random vector ξ have the normal distribution with zero expectation (E ξ = 0) and the covariance matrix
B_1 = ( 1  1
        1  1 ).
Prove that the distribution of this random vector is concentrated on some line in the plane (i.e., that almost surely c_1 ξ_1 + c_2 ξ_2 = const for some (c_1, c_2) ≠ (0, 0)).
Deduce from this that the normal distribution with parameters (0, B_1) has no density with respect to the two-dimensional Lebesgue measure λ².
34 Is the result of the previous problem true if we replace the matrix B_1 with
B_2 = ( 1  2
        2  4 )?
35* (* means a non-obligatory problem.) Prove or disprove: if the characteristic function f_ξ(t) is differentiable at 0, then E|ξ| < ∞.
HINT: A random variable having the standard Cauchy distribution with density p(x) = π^{−1}/(1 + x²) has no expectation; its characteristic function f(t) = e^{−|t|} is not differentiable at 0, but this is, so to speak, touch and go: a little more, and it would be: at least the one-sided derivatives are not infinite. Couldn't we try to consider a density p̃(x) that goes to 0 at ±∞ a little slower than p(x), but still so that the expectation does not exist? It could be that for this density the characteristic function has zero derivative at t = 0.
Better take your density (or densities) symmetric with respect to 0 (even functions): the corresponding characteristic functions will be real-valued.
36 Let a four-dimensional random vector (ξ_1, ..., ξ_4) have the normal distribution with parameters (0, B = (b_{ij})). Find the fourth mixed moment m_{1111} = E(ξ_1 ξ_2 ξ_3 ξ_4).
The deadline for Problems 32-36 is October 21.
37 Let μ_1, μ_2, ..., μ_n, ..., μ be distributions concentrated on the nonnegative integers: for all natural n
∑_{k=0}^{∞} μ_n{k} = ∑_{k=0}^{∞} μ{k} = 1.
Prove (or disprove) that if
lim_{n→∞} μ_n{k} = μ{k}
for every k = 0, 1, 2, 3, ..., then μ_n →_w μ (n → ∞): convergence of the values of the probability mass function implies weak convergence of distributions.
38 Let μ_n be the normal distribution with parameters (a_n, σ_n), σ_n > 0; suppose a_n → a, σ_n → 0 as n → ∞.
Prove that μ_n →_w δ_a (n → ∞), where δ_a is the distribution concentrated at the point a: δ_a(C) = 1 if C ∋ a, and = 0 if C ∌ a.
39 Prove or disprove that if ξ_n →_P ξ, then μ_{ξ_n} →_w μ_ξ.
40 Prove or disprove: if μ_{ξ_n} →_w μ_ξ, then ξ_n →_P ξ.
41 For every n, let the random variables ξ_n, η_n be independent; and let ξ_n →_{a.s.} ξ, η_n →_{a.s.} η. Prove that then ξ and η are independent.
The deadline for Problems 37-41 is October 28.
42 Check that
f(t) = 1 − |t| for |t| ≤ 1,   f(t) = 0 for |t| > 1,
is a characteristic function of a continuous distribution on the real line. Find its density.
43 For a random variable ξ having the density obtained in the previous problem, prove that E|ξ| = ∞.
44 Let ξ_1, ξ_2, ..., ξ_n, ... be a sequence of independent random variables having the distribution with the density found in Problem 42 (such a sequence exists by Theorem 13.5). Let η_n = (ξ_1 + ... + ξ_n)/n. Does the sequence η_n converge in probability to a constant as n → ∞? If yes, what is this constant equal to?
45 For the sequence of random variables of the previous problem, does the weak limit of the distribution of η_n,
(w) lim_{n→∞} μ_{η_n},
exist? If yes, is the limiting distribution discrete or continuous, and what is its probability mass function or its density?
We call a Markov chain time-homogeneous (or just homogeneous) if its transition matrix is the same at all steps: P_1 = P_2 = ... = P_n = ... = P. For homogeneous Markov chains the transition matrix P^{mn} = P^{n−m} (the (n−m)-th power of the one-step transition matrix P) depends only on the difference n − m = k, and its entries p^{mn}_{xy} also depend on this difference only: p^{mn}_{xy} = p^{(k)}_{xy}, where the p^{(k)}_{xy} are the entries of the power P^k (p^{(k)}_{xy} are not the powers of p_{xy}, the (x, y)-th entry of the matrix P; this is why we use (k) with parentheses).
A substantial part of the theory of Markov chains deals with the behavior of the k-step transition probabilities p^{(k)}_{xy} = P{ξ_{n+k} = y | ξ_n = x} as the number of steps k → ∞. It turns out sometimes (under some conditions) that these conditional probabilities have a limit as k → ∞ that does not depend on the number x of the matrix row. This can be interpreted as follows: under these conditions, the events {ξ_{n+k} = y} and {ξ_n = x} (or: the random variables ξ_{n+k} and ξ_n) become independent in the limit as k → ∞. Or: under these conditions the events or random variables separated by a growing number of steps (by a growing time interval) become independent in the limit.
I want you to solve several problems about this phenomenon (in more intuitive words, the disappearance of dependence with the lapse of time; in more prosaic and more precise words, the existence, for each pair x, y ∈ X, of a limit lim_{k→∞} p^{(k)}_{xy} not depending on x, or the existence of the limit lim_{k→∞} P^k, a matrix all of whose rows are identical to one another).
46 Let
P = (  0    1
      1/2  1/2 ).
Find the limit lim_{n→∞} P^n.
HINT: Represent the matrix P in the form P = A · D · A^{−1}, where A is a nonsingular matrix and D a diagonal one.
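A quick numerical experiment for Problem 46 (a sketch of mine, useful for checking the answer you obtain by diagonalization):

```python
def mat_mul(a, b):
    # product of two square matrices given as lists of rows
    n = len(a)
    return [[sum(a[i][k] * b[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def mat_pow(p, m):
    # m-th power by repeated multiplication, starting from the identity
    n = len(p)
    result = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
    for _ in range(m):
        result = mat_mul(result, p)
    return result

P = [[0.0, 1.0], [0.5, 0.5]]
# After many steps the rows of P^m should be (nearly) identical
print(mat_pow(P, 50))
```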
47 Let
P = (  0    0    1    0
       0    0    0    1
      1/2   0   1/2   0
       0   1/2   0   1/2 ).
Does the limit lim_{n→∞} P^n exist? Are the rows of this limiting matrix identical?
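The same experiment for Problem 47, taking the transition matrix as printed in the statement (the entries here are my reading of it); successive high powers are compared to probe whether a limit exists, and two rows are compared to each other:

```python
def mat_mul(a, b):
    # product of two square matrices given as lists of rows
    n = len(a)
    return [[sum(a[i][k] * b[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def mat_pow(p, m):
    n = len(p)
    r = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
    for _ in range(m):
        r = mat_mul(r, p)
    return r

P = [[0.0, 0.0, 1.0, 0.0],
     [0.0, 0.0, 0.0, 1.0],
     [0.5, 0.0, 0.5, 0.0],
     [0.0, 0.5, 0.0, 0.5]]

P60 = mat_pow(P, 60)
P61 = mat_pow(P, 61)
# closeness of successive powers (does the limit exist?), and the first two rows
print(max(abs(P60[i][j] - P61[i][j]) for i in range(4) for j in range(4)))
print(P60[0], P60[1])
```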
48 The same question for the matrix
P = (  0   2/3   0   1/3
      1/3   0   2/3   0
       0   1/3   0   2/3
      2/3   0   1/3   0 )
(which is the same as that in the example with formula (25.5), only with 5 replaced with 4, which makes all the difference).
49* The same question for the matrix P given by formula (25.5).
The deadline for Problems 42-48 is November 4.
50* For a Markov chain with the transition matrix of Problem 46, prove that α(ℱ_{≤m}, ℱ_{≥n}) → 0 as n − m → ∞.
At what rate does it go to 0 (that is, e.g.: as some power of n, or of n − m; or exponentially fast; or faster than any exponential function)?
51 Let ξ_0, ξ_1, ξ_2, ..., ξ_n, ... be independent random variables having the normal distribution with parameters (0, 1) (the standard normal distribution). Let η_1 = (ξ_1 − ξ_0)², η_2 = (ξ_2 − ξ_1)², ..., η_n = (ξ_n − ξ_{n−1})², ... . Prove that there exists the limit in probability
(P) lim_{n→∞} (η_1 + η_2 + ... + η_n)/n.
What is this limit?
We cannot use the laws of large numbers that we had for independent random variables: the η_n are definitely dependent. And we cannot use Theorem 26.4, even if we check that α(ℱ_{η_i, i≤m}, ℱ_{η_i, i≥n}) ≤ β(n − m), β(k) → 0 (k → ∞), because the random variables η_n are unbounded (just as normal random variables are).
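For Problem 51, a seeded Monte Carlo sketch (my own illustration, not a proof) estimates the limit of the averages along one long trajectory:

```python
import random

# One trajectory of i.i.d. standard normals xi_0, xi_1, ...; we average
# eta_n = (xi_n - xi_{n-1})^2 over n steps.
random.seed(0)

n = 20_000
xi_prev = random.gauss(0.0, 1.0)
total = 0.0
for _ in range(n):
    xi = random.gauss(0.0, 1.0)
    total += (xi - xi_prev) ** 2
    xi_prev = xi

print(total / n)  # an estimate of the limit asked about in Problem 51
```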
52* For the random variables of the previous problem prove that the strong law of large numbers takes place: the sequence (η_1 + η_2 + ... + η_n)/n converges almost surely.
HINT: Try to copy the proof of Theorems 16.3, 16.4: in the sum
∑_{i,j,k,l=1}^{n} E[(η_i − E η_i)(η_j − E η_j)(η_k − E η_k)(η_l − E η_l)]
not all summands with i, j, k, l different are equal to 0 as for independent random variables, but the number of nonzero summands can be counted and estimated.
53 Let ξ be a continuous random variable with probability density
p_ξ(x) = (4 − x)/8 for x ∈ [0, 4],   p_ξ(x) = 0 for x ∉ [0, 4];
η = (ξ − 1)².
Find the conditional expectation E{ξ | η = y}.
54 Let random variables ξ, η be independent and have normal distributions with parameters, respectively, (0, 1) and (0, 2); and ζ = ξ + η. Find the conditional distribution of ξ under the condition ζ = z. Find the conditional expectation E(ξ‖ζ) and the conditional variance E((ξ − E(ξ‖ζ))² ‖ ζ).
55 Let ξ_1, ξ_2, ..., ξ_n, ... be independent random variables, each having the same normal distribution with parameters (0, 1).
Find (a version of) E(ξ_1² ‖ ℱ_{≥∞}).
56 Let ξ and η be independent random variables with densities p_ξ(x), p_η(y); ζ = ξ + η. Find the conditional density p_{ζ|ξ=x}(z).
The deadline for Problems 51-56 is Nov. 11.
57 Prove that if the random variable η is 𝒢-measurable, E|ξ| < ∞, and E(|η| · E(|ξ| ‖ 𝒢)) < ∞, then E|ξη| < ∞.
58 Let ξ_1, ξ_2, ..., ξ_n, ... be independent random variables with E ξ_i = 0, E ξ_i² = σ_i²; ζ_n = ξ_1 + ... + ξ_n.
Find E(ζ_{n+1}² ‖ ξ_1, ..., ξ_n).
59 Let X = ℤ_+ = {0, 1, 2, 3, ...};
p(t, x, s, y) = [λ(s − t)]^{y−x} e^{−λ(s−t)} / (y − x)!
if x, y ∈ ℤ_+, y ≥ x, and 0 for y < x, where λ ≥ 0 (you see that this is, in fact, up to a shift, the Poisson distribution with parameter λ · (s − t)). Check that the conditions ①, ③, ④ are satisfied.
The Markov process with transition probabilities (30.5) and with right-continuous trajectories ξ_•(ω) is called the Poisson process with parameter (the rate) λ.
60 Prove: If ξ_t, t ≥ 0, is a Markov process on X = ℤ_+ with transition probabilities (30.5) (supposing that such a process exists), then for s ≥ t the random variable ξ_s − ξ_t has the Poisson distribution with parameter λ · (s − t).
61 For the same process prove that for 0 ≤ t_0 ≤ t_1 ≤ ... ≤ t_n the random variables ξ_{t_1} − ξ_{t_0}, ξ_{t_2} − ξ_{t_1}, ..., ξ_{t_n} − ξ_{t_{n−1}} are independent.
62 Let T = [0, ∞), X = ℝ¹ (and, of course, 𝒳 = ℬ¹). Check that the density
p(t, x, s, y) = (1/√(2π(s − t))) · e^{−(y−x)²/(2(s−t))}
(as a function of its density argument y this is the normal density with parameters (x, s − t)) satisfies the Chapman-Kolmogorov equation (30.7).
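The Chapman-Kolmogorov equation of Problem 62 can likewise be checked numerically by quadrature (a sketch with arbitrarily chosen time points; not a substitute for the exact computation):

```python
import math

def p(t, x, s, y):
    # the Gaussian transition density of Problem 62 (variance s - t)
    v = s - t
    return math.exp(-(y - x) ** 2 / (2.0 * v)) / math.sqrt(2.0 * math.pi * v)

# Chapman-Kolmogorov: integral over z of p(t,x,u,z) p(u,z,s,y) dz = p(t,x,s,y),
# approximated here by the midpoint rule on a wide interval.
t, x, u, s, y = 0.0, 0.0, 1.0, 2.0, 0.7
a, b, n = -15.0, 15.0, 60_000
h = (b - a) / n
integral = sum(p(t, x, u, a + (i + 0.5) * h) * p(u, a + (i + 0.5) * h, s, y)
               for i in range(n)) * h
print(integral, p(t, x, s, y))
```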
63 For a Markov process ξ_t, t ∈ [0, ∞), with the transition function given by (30.6) with the density (30.8), and p(t, x, t, C) = δ_x(C) (assuming such a process exists), prove that the difference ξ_s − ξ_t for s ≥ t has the normal distribution with parameters (0, s − t), and that for 0 ≤ t_0 ≤ t_1 ≤ ... ≤ t_n the differences ξ_{t_1} − ξ_{t_0}, ξ_{t_2} − ξ_{t_1}, ..., ξ_{t_n} − ξ_{t_{n−1}} are independent.
64 Let the initial distribution μ at time t_0 = 0 be concentrated at the point x_0: μ(C) = δ_{x_0}(C), and the finite-dimensional distributions be given by formula (31.1) with p(t, x, s, C) of the previous problem. Check that the joint distribution of the ξ_{t_i} (if the stochastic process ξ_t, t ≥ 0, exists) is a normal one. With what parameters?
The deadline for Problems 57-64 is Nov. 18.
65 Find λ_x for the Poisson process (see Problem 59).
66 Find the matrix A = (a_{xy}) for the Poisson process. Find the distribution of the position of the process after the first jump if it starts from the point x = 5 at time 0.
67 Let λ_x > 0. Prove that the time τ_1 of the first jump and the position ξ_{τ_1} of the process at this time are independent with respect to the probability measure P_x.
HINT: Prove that P_x{τ_1 ∈ C, ξ_{τ_1} = y} = ∫_C λ_x e^{−λ_x t} dt · (a_{xy}/λ_x) for every Borel set C ⊆ [0, ∞) and y ≠ x. To do this, start with C = [a, b), 0 ≤ a ≤ b < ∞: such sets form a semi-algebra, and if two finite measures P_x{τ_1 ∈ C, ξ_{τ_1} = y} and ∫_C e^{−λ_x t} dt · a_{xy} coincide on a semi-algebra, they coincide on the σ-algebra ℬ_{[0,∞)} generated by it. Then use the equality P_x{τ_1 ∈ [a, b), ξ_{τ_1} = y} = lim_{h→0+} P_x{τ_1^h ∈ [a, b), ξ_{τ_1^h} = y}, where the probability depending on h is expressed as a sum of some products: a sum of a finite geometric progression.
68 Let a function f(t), t ≥ 0, be continuous, and let it have, for every t ∈ [0, ∞), a right-hand derivative h(t) = (d⁺/dt) f(t), which is also continuous on [0, ∞). Prove that then the two-sided derivative (d/dt) f(t) exists for all t > 0.
HINT: Take f̃(t) = f(t) − ∫_0^t h(s) ds. The function f̃(t) is continuous and has a zero right-hand derivative at every point; we have to prove that it is a constant. Now take any proof of the corresponding fact for the two-sided derivative, and try to adapt it to the one-sided one.
69 Check that both kinds of Kolmogorov equations are satisfied for the transition probabilities of the Poisson process (this is a way in which these probabilities can be found starting from the matrix A).
70 Let X = {0, 1},
A = ( −1   1
       2  −2 ).
Find the solution P_t = (p(t, x, y))_{x,y∈X} of the matrix equation dP_t/dt = A P_t, or dP_t/dt = P_t A, with the initial condition P_0 = I. Find the limits lim_{t→∞} p(t, x, y).
71 Let X = {0, 1, 2},
A = ( −2   2   0
       1  −3   2
       0   1  −1 ).
Do the limits lim_{t→∞} p(t, x, y) exist? If they do, find them by solving the system π · A = 0, π · 1 = 1 (π and 0 are row vectors, 1 the column vector with all components equal to 1).
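For Problem 71, one way to cross-check a hand computation numerically is uniformization (my own numeric device, not part of the problem): for λ ≥ max |a_xx| the matrix P = I + A/λ is stochastic and has the same stationary row vector as A, and its high powers approximate the limits lim_{t→∞} p(t, x, y):

```python
A = [[-2.0, 2.0, 0.0],
     [1.0, -3.0, 2.0],
     [0.0, 1.0, -1.0]]

lam = 3.0  # >= max |a_xx|; the uniformization constant is my choice
# P = I + A / lam is a stochastic matrix whose stationary vector solves pi A = 0
P = [[(1.0 if i == j else 0.0) + A[i][j] / lam for j in range(3)]
     for i in range(3)]

def mat_mul(a, b):
    n = len(a)
    return [[sum(a[i][k] * b[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

Pn = [[1.0 if i == j else 0.0 for j in range(3)] for i in range(3)]
for _ in range(200):
    Pn = mat_mul(Pn, P)

print(Pn[0])  # all three rows of Pn should be (nearly) identical here
```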
72 Let X = {0, 1, 2, ..., n, ...}, a_xx = −λ, a_{x,x+1} = λ, where λ is a positive constant, all other a_xy = 0. Find all solutions of the equation πA = 0. Does the discrete distribution on X with the probability mass function p(t, x, ·) converge as t → ∞ to some probability distribution on X?
73 Let ξ, η be independent random variables with exponential distributions with parameters λ and μ. Find the distribution of the random variable τ(ω) = min(ξ(ω), η(ω)).
74 Let X = {0, 1, 2, ..., n, ...}, a_00 = −λ, a_01 = λ, a_{n,n−1} = μ, a_nn = −λ − μ, a_{n,n+1} = λ for n > 0, all other a_xy = 0, where λ and μ are positive constants (negative numbers on the main diagonal of the infinite matrix A, positive numbers on the diagonals one row above and one row below the main one, and zeros elsewhere; the Markov process describes a system with one server and an unrestricted queue).
Find all solutions of the equation π · A = 0. For λ = μ = 2, does the limiting distribution for p(t, x, ·) as t → ∞ exist; and if it does, what is this limiting distribution?
The deadline for Problems 65-74 is Nov. 30.
75 Do the tail σ-algebras associated with the Markov chains of Problems 47, 48 consist only of events of probability 0 and 1?
76 Assuming that η_t = ξ_t/√t, 0 < t < ∞, where ξ_t is the Markov process of Problem 62, is a Markov process, find its transition density p_η(t, x, s, y) (it seems obvious that it ought to have one).
HINT: p_η(t, x, s, y) should be the conditional density of the random variable η_s = ξ_s/√s under the condition η_t = x, that is, ξ_t = x · √t. We know the conditional distribution of ξ_s under the condition ξ_t = x · √t: it's a normal distribution with such and such parameters; we should be able to find the conditional distribution of η_s, being a linear function of ξ_s, too.
77 Check that p_η(t, x, s, y) satisfies the Chapman-Kolmogorov equation (all other necessary properties are quite obvious).
78 Check that η_t, 0 < t < ∞, is a Markov process with the transition density found in Problem 76.
79 Let p(x) be a probability density on the real line. Does there exist a family of random variables ξ_t, t ≥ 0, that are independent and have the distribution with the density p(x)?
80 Let the joint distribution of random variables ξ, η, ζ be normal with parameters (0, B), where
B = (  2  −1  −1
      −1   2  −1
      −1  −1   2 ).
Is this distribution continuous? That is: does this distribution have a density p_{ξηζ} (it is not important whether this density is a continuous function or not)?
81 For the random variables of Problem 80, find the conditional distribution of the random variable η under the condition that ξ is fixed (with respect to ξ). That is, answer the following questions:
Is the conditional distribution a discrete one, i.e.,
P{η ∈ C ‖ ξ} = ∑_{i: x_i∈C} p_i(ω) ?
If yes, find p_i(ω) = P{η = x_i ‖ ξ}.
Is the conditional distribution a continuous one, i.e.,
P{η ∈ C ‖ ξ} = ∫_C p_{η‖ξ}(x) dx ?
If yes, find the conditional density p_{η‖ξ}(x) (depending on ξ = ξ(ω)).
Is the conditional distribution neither discrete nor continuous? If so, find the conditional distribution function F_{η‖ξ}(t) = P{η ≤ t ‖ ξ}.
Note that, in all these cases, we can try to find the things (conditional probabilities, or conditional densities) not under the condition that ξ is known (fixed) (which is denoted with ‖ ξ), but rather under the condition that ξ = y, for an arbitrary real y.
82 Let (ξ, η) have the uniform distribution on the square [0, 1]². Does the random variable ζ = ξ² + η² have a continuous distribution?
83 Let the random variables ξ and η be independent, ξ having a discrete distribution with P{ξ = 0} = P{ξ = 2} = P{ξ = 4} = 1/3, and η having the uniform distribution on the interval [0, 3].
Find the distribution of the random variable ζ = ξ + η.
Is this distribution discrete? continuous? neither discrete nor continuous? Depending on which of these cases takes place, find the probability mass function of ζ, or its density, or its distribution function F_ζ(t).
The deadline for Problems 75-83 is Dec. 9.
84 Let Π_λ for every λ > 0 be a Poisson random variable with parameter λ. Prove that Π_λ converges in probability to +∞ as λ → ∞ (notation: Π_λ →_P ∞); that is, that for every C we have lim_{λ→∞} P{Π_λ > C} = 1.
85 Let ξ_1, ξ_2, ..., ξ_n, ... be independent random variables, ξ_n having the Poisson distribution with parameter 1/(n(n + 1)).
Does the event {ξ_1 + ξ_2 + ... + ξ_n + ... < ∞} have probability equal to 1? equal to 0? strictly between 0 and 1?
In the first case, what is the distribution of the random variable S = ξ_1 + ξ_2 + ... + ξ_n + ...? To be concrete: find the probability P{S = 1} with accuracy up to 0.01.
In the second and in the third case: does the sum ξ_1 + ξ_2 + ... + ξ_n →_P +∞ (convergence in probability to infinity, see the previous problem)?
86 Let ξ have a uniform distribution on the interval (0, 2); η = ln ξ. Find E η. Is E|η|^k finite for all natural k?
87 Let ξ_1, ξ_2, ..., ξ_n, ... be independent random variables with uniform distribution on the interval (0, 2).
Find E ∏_{i=1}^{n} ξ_i.
Which of the following events have probability 1, and which 0:
{ lim_{n→∞} ∏_{i=1}^{n} ξ_i = 0 };   { lim_{n→∞} ∏_{i=1}^{n} ξ_i = +∞ };
{ there exists a finite positive limit lim_{n→∞} ∏_{i=1}^{n} ξ_i };
{ lim_{n→∞} ∏_{i=1}^{n} ξ_i does not exist, lim sup_{n→∞} ∏_{i=1}^{n} ξ_i = e^{3/2} };
{ lim inf_{n→∞} ∏_{i=1}^{n} ξ_i = 0, lim sup_{n→∞} ∏_{i=1}^{n} ξ_i = +∞ }?
HINT: Take ln ∏_{i=1}^{n} ξ_i; use the previous problem and the Strong Law of Large Numbers (a concrete theorem of this group).
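Following the hint of Problem 87, a seeded simulation (my sketch) looks at the logarithm of the product, whose average is governed by the Strong Law of Large Numbers:

```python
import math
import random

# Products of i.i.d. uniforms on (0, 2): track the log of the product, since
# the product itself quickly under- or overflows floating point.
random.seed(1)

n = 200_000
log_prod = 0.0
for _ in range(n):
    log_prod += math.log(random.uniform(0.0, 2.0))

# E ln xi = (1/2) * integral from 0 to 2 of ln x dx = ln 2 - 1
print(log_prod / n, math.log(2.0) - 1.0)
```

The sign of this drift is exactly what decides the fates of the products in Problem 87.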
88 Prove or disprove: If a random variable ξ has a finite fifth moment E ξ⁵, it also has a finite second moment E ξ².
89 Suppose the random variable ξ has a finite expectation; η_1, η_2 are two arbitrary random variables. We know that E(E(ξ‖η_1, η_2) ‖ η_1) = E(ξ‖η_1) (almost surely, of course; or: one of the versions of one of these conditional expectations is the other conditional expectation).
Is the iterated conditional expectation
E(E(ξ‖η_1) ‖ η_1, η_2)
necessarily equal to E(ξ‖η_1) (a.s.)? Is it necessarily equal to E(ξ‖η_1, η_2)? Or may it be equal (a.s.) to neither of these two conditional expectations, but to some quite different random variable?
90 Let ξ and η be independent and have Poisson distributions with parameters, correspondingly, λ and μ. Find the conditional distribution of the random variable ξ if ξ + η is known; i.e., find
P{ξ = k ‖ ξ + η}.
91 Prove or disprove: Let ξ_n, ξ be random variables with finite expectations. If the distribution of ξ_n converges weakly to that of ξ as n → ∞, then the expectations converge: E ξ_n → E ξ.
The deadline for Problems 84-91 is Dec. 16, noon.
92* Let ξ_t, t ≥ 0, be a Wiener process starting from a non-random point x_0 ≥ 0 at time 0: ξ_0 ≡ x_0. Let ξ̂_t = |ξ_t|.
For t > 0, find the probability density of the random variable ξ̂_t.
93* Find the finite-dimensional densities of the stochastic process ξ̂_t.
94* Is ξ̂_t a Markov process?