
Chapter 2
Probability Theory
Chapter 2 outlines the probability theory necessary to understand this text. It is
meant as a refresher for students who need review and as a reference for concepts
and theorems applied throughout the text. Key examples in this chapter anticipate
biological applications developed later in the text. Section 2.1 provides all the
background necessary for Chapter 3 on population genetics. We recommend reading
this section before going on, and then returning to the later parts of Chapter 2 when
and if necessary.
2.1 Elementary probability; random sampling
This section is a review of basic probability in the setting of probability spaces, with
applications to random sampling.
2.1.1 Probability spaces
Probability theory offers two frameworks for modeling random phenomena, probability spaces and random variables. In practice, random variables provide a more
convenient language for modeling and are more commonly used. But probability
spaces are more basic and are the right setting to introduce the fundamental principles of probability.
A probability space consists of:
(i) a set Ω called the outcome space;
(ii) a class of subsets of Ω called events; and,
(iii) a rule P that assigns a probability P(U ) to each event U .
This structure is a template for modeling an experiment with random outcome;
the set Ω is a list of all the possible outcomes of the experiment; a subset U of Ω
represents the ‘event’ that the outcome of the experiment falls in U ; and P(U ) is
the probability event U occurs. P is called a probability measure, and it must satisfy
the following properties, called the axioms of probability:
(P1) 0 ≤ P(U ) ≤ 1 for all events U ;
(P2) P(Ω) = 1;
(P3) If U1 , U2 , . . . , Un are disjoint events, meaning they share no outcomes in common,
P(U1 ∪ · · · ∪ Un ) = P(U1 ) + · · · + P(Un ).    (2.1)
(P4) More generally, if U1 , U2 , U3 , . . . is an infinite sequence of disjoint events, then

P(U1 ∪ U2 ∪ U3 ∪ · · ·) = P(U1 ) + P(U2 ) + P(U3 ) + · · · .    (2.2)
These axioms are imposed so that an assignment of probabilities conforms with
our intuitive notion of what a probability means, as a measure of likelihood. For
example, if U and V are events that do not overlap, the probability
of U ∪ V occurring ought to be the sum of the probabilities of U and V . Axioms
(P3) and (P4) impose this principle at greater levels of generality. They are called
additivity assumptions. We shall see in a moment that (P3) is a special case of (P4),
but we have stated them both for emphasis.
Examples 2.1.
(a) (Die roll.) Consider the roll of a fair die. In this case, the outcome space is
Ω = {1, 2, 3, 4, 5, 6}. The event of rolling a particular value i is the singleton subset
{i} of Ω, and the die is fair if P({i}) is the same value a for all i. Axioms (P2) and
(P3) together then require 1 = P(Ω) = P({1}) + · · · + P({6}) = 6a, and thus P({i}) = a = 1/6
for each i in Ω. Now consider an event such as rolling an even number, which is the
subset {2, 4, 6}. Since it is the union of the singleton events {2}, {4}, and {6},
axiom (P3) implies P({2, 4, 6}) = 1/6 + 1/6 + 1/6 = 1/2. By the same reasoning, for
any subset U of {1, 2, 3, 4, 5, 6}, P(U ) = |U |/6, where |U | denotes the size of U . All
this is pretty obvious, but the student should note how it follows from the axioms.
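For readers who like to compute, here is a short Python sketch (our illustration; the function name prob is arbitrary) that encodes the uniform measure on Ω and verifies the two probabilities just derived.

    from fractions import Fraction

    omega = {1, 2, 3, 4, 5, 6}            # outcome space for one roll of a fair die

    def prob(U):
        """Uniform probability measure: P(U) = |U| / |omega|."""
        return Fraction(len(U), len(omega))

    print(prob({2, 4, 6}))                # 1/2, the probability of an even roll
    print(prob(omega))                    # 1, checking axiom (P2)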
(b) (Discrete outcome space.) A probability space is called discrete if Ω is a
finite set or a set, Ω = {ω1 , ω2 , . . .}, whose elements can be listed as a sequence.
(The latter type of set is called countably infinite.) When Ω is discrete, a probability
space on Ω can be completely specified by assigning a probability, pω = P({ω}), to
each singleton event {ω}. The additivity axioms (P3) and (P4) then determine the
probability of all other events. Since any subset U of Ω is the disjoint, finite or
countable union of the singleton subsets of its elements,
P(U ) = Σ_{ω∈U} pω .
(Here, the notation indicates that the sum is over all ω in U .) The only restrictions
on the assignment {pω ; ω ∈ Ω} are that 0 ≤ pω ≤ 1 for all ω, which must hold
because of axiom (P1), and that
1 = P(Ω) = Σ_{ω∈Ω} pω ,
which must hold because of axiom (P2).
For example, the most general model of a single roll of a die, possibly unfair, is
specified by a vector (p1 , . . . , p6 ), where 0 ≤ pi ≤ 1 for each i, p1 + · · · + p6 = 1, and pi
represents the probability of rolling i.
(c) (A random point chosen uniformly from the interval (0, 1).) Consider the
interval (0, 1) as an outcome space. There are many ways to define a probability
measure on subsets of (0, 1). One simple approach is just to let P(U ) equal the
length of U . Thus, if 0 ≤ a < b ≤ 1, P((a, b)) = b − a. Then P((0, 1)) = 1, so that
axiom (P2) is satisfied, and axiom (P1) is satisfied since the length of a subset of
(0, 1) is between 0 and 1. Axioms (P3) and (P4) are discussed below.
In this model, the probability of an event depends only on its length, not where it
is located inside (0, 1). This captures the idea that no points are statistically favored
over others, and so we say this probability space models a point drawn uniformly
from (0, 1). Intuitively, each point in (0, 1) is equally likely to be selected, although strictly
speaking, this does not make sense, because the probability of any specific point is
0! Indeed, a singleton subset {b} is the same thing as the interval [b, b], which has
length 0, and so P({b}) = 0 for each 0 ≤ b ≤ 1. In this case we cannot define the
probability of any event by summing over the probabilities of its individual elements,
as we did when Ω is discrete.
Notice that our construction of P is not really complete, because it lacks a general
definition of the length of a set. Of course, if V is an interval from a to b, its length
is b − a, whether V includes its endpoints or not. If V is a finite disjoint union
of intervals, we define its length to be the sum of the lengths of its constituent
subintervals. For example, P((0, .5] ∪ [.75, 1)) = 0.5 + 0.25 = 0.75. If we define P just on
finite disjoint unions of intervals, then axiom (P3) is true by construction. Can we
extend the definition of P to arbitrary subsets of (0, 1) in such a way that axiom (P4) holds
true? The answer is: No! (The proof of this is advanced and assumes an axiom
of set theory called the axiom of choice.) However, it is possible to extend P to a
class of subsets of (0, 1) that includes all finite disjoint unions of subintervals but
does not include all subsets, in such a way that (P4) holds. This is an advanced
topic that we do not need to go into, because it is never a concern for the
practical applications we will face. Be aware, though, that the rigorous construction
of a probability space may require restricting the class of subsets allowed as events
to which probabilities are assigned.
Two simple consequences of the probability axioms are repeatedly used. Their
proofs are left as exercises for the reader.
(1) If U is an event, let U c denote the complement of U , that is all elements in Ω
that are not in U . Then
P(U c ) = 1 − P(U ).    (2.3)
In particular, since the empty set, ∅, equals Ωc , P(∅) = 1 − P(Ω) = 0. (The
empty set, ∅, should be thought of as the event that nothing happens in a trial of the experiment, which is impossible.)
(2) (Inclusion-Exclusion Principle) P(A ∪ B) = P(A)+ P(B)− P(A ∩ B).
The next fact is a consequence of (P4); it is less elementary and its proof is omitted.
If A1 ⊂ A2 ⊂ · · · , then P(A1 ∪ A2 ∪ · · ·) = lim_{n→∞} P(An ).    (2.4)
We can now see why (P3) is a consequence of (P4). Let U1 , . . . , Un be disjoint
events, and set Um = ∅ for m > n. Then U1 , U2 , . . . is an infinite sequence of
disjoint sets. Since U1 ∪ · · · ∪ Un = U1 ∪ U2 ∪ · · · , and since P(Um ) = P(∅) = 0 for m > n, axiom
(P4) implies

P(U1 ∪ · · · ∪ Un ) = P(U1 ∪ U2 ∪ · · ·) = Σ_{k=1}^{∞} P(Uk ) = Σ_{k=1}^{n} P(Uk ),
which proves (P3).
2.1.2 Random sampling
In this section and throughout the text, if U is a finite set, |U | denotes its cardinality,
that is, the number of elements in U .
The most basic random experiment, in both statistical practice and probability
theory, is the random sample. Let S denote a finite set, called the population. A
(single) random sample from S is a random draw from S in which each individual
has an equal chance to be chosen. In the probability space for this experiment, the
outcome space is S, and the probability of drawing any particular element s of S is
P({s}) = 1/|S|.
For this reason, P is called the uniform probability measure on S. It follows that if
U is any subset of S,
P(U ) = Σ_{s∈U} P({s}) = Σ_{s∈U} 1/|S| = |U |/|S|.    (2.5)
In many applications, each individual in the population bears an attribute or descriptive label. For example, in population genetics, each individual in a population
of organisms has a genotype. For a simpler example, standard in elementary probability texts, imagine a bag of marbles each of which is either red, yellow, or blue.
If x is a label, the notation fx shall denote the frequency of x in the population:

fx = (number of individuals in S with label x) / |S|.
Because random sampling is modeled by a uniform probability measure,
P(randomly selected individual bears label x) = fx .    (2.6)
This is simple, but is basic in population genetics.
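As an illustration of (2.6) (ours, with an invented bag of ten marbles), the following Python sketch estimates the left-hand side by repeated simulated draws and compares it with the frequency f_red.

    import random

    # hypothetical population: 3 red, 2 yellow, 5 blue marbles
    population = ["red"] * 3 + ["yellow"] * 2 + ["blue"] * 5

    f_red = population.count("red") / len(population)    # f_x with x = red

    # estimate P(randomly selected individual is red) by simulation
    trials = 100_000
    hits = sum(random.choice(population) == "red" for _ in range(trials))
    print(hits / trials, "vs", f_red)                    # both near 0.3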
Example 2.2. Random mating; Part I. One assumption behind the textbook analysis
of Mendel’s pea experiments—see Chapter 1—is that each individual chosen for a
random cross is sampled randomly from the population. Consider a population of N
pea plants, and suppose k of them have the genotype GG and ℓ of them genotype Y Y
for pea color. Thus N − k − ℓ plants have the heterozygote genotype Y G. Suppose
the first parent for a cross is chosen by random sampling from the population. What
is the probability it is heterozygotic? According to (2.6),

P(Y G plant is selected) = fY G = (N − k − ℓ)/N.
(If we care only about the genotype of the sampled plant, we could look at this
experiment as having one of three possible outcomes, GG, Y Y , or Y G, with respective probabilities fGG = k/N , fY Y = ℓ/N and fY G = 1 − fGG − fY Y . This yields a
more compact probability space, but we had to know the random sampling model
to derive it. The random sampling model is better to use because it is more basic.)
In many applications, populations are sampled repeatedly. For example, statisticians will sample a large population multiple times to obtain data for estimating
its structure. Repeated random sampling can occur without replacement, in which
case each sampled individual is removed from the population, or with replacement, in which case each sampled individual is returned to the population and may
be sampled again. In this text, random sampling will always mean sampling with
replacement, unless otherwise specified.
Consider sampling n times from S. Each possible outcome may be represented
as a sequence (s1 , . . . , sn ) of elements from S and, in the case of sampling with
replacement, every such sequence is a possible outcome. Therefore, the outcome
space Ω is the set of all such sequences. This is denoted by Ω = S^n and is called
the n-fold product of S. Because there are |S| possible values each of the n entries
can take, |S^n | = |S|^n . The idea of random sampling is that no individual of the
population is more likely than any other to be selected. In particular, each sequence
of n samples should be equally likely. This means P({(s1 , . . . , sn )}) = 1/|S|^n for
every sequence, and hence,

P(V ) = |V |/|S|^n , for any subset V of S^n .    (2.7)
In short, the probability model for a random sample of size n is the uniform probability measure on S^n .
Example 2.2, part II. The standard model for a random cross in Mendel’s pea
experiment assumes the plants to be crossed are chosen by a random sample of
size 2. (Since this is random sampling with replacement, it is possible that a plant
is crossed with itself.) In the setup of Part I, what is the probability that a GG
genotype is crossed with a Y Y genotype?
In this case, the outcome space is the set of all ordered pairs (s1 , s2 ) where s1
and s2 belong to the population of plants. Let U be the event that one of these has
genotype GG and the other genotype Y Y . Since there are k plants in the population
of type GG and ℓ of type Y Y , there are k · ℓ pairs (s1 , s2 ) in which s1 is type GG
and s2 is type Y Y . Likewise there are ℓ · k pairs in which s1 is type Y Y and s2 is
type GG. It follows that |U | = 2kℓ and

P(U ) = 2kℓ/N^2 = 2 fGG fY Y .
Example 2.2, part III. Suppose we take a random sample of 10 plants from a population of N of Mendel’s peas, of which, as in Part I of this example, k have genotype
GG. What is the probability that exactly 6 of those sampled plants have genotype
GG? The event V in this case is the set of sequences of 10 plants from the population for which a GG genotype appears exactly 6 times. Now, there are (10 choose 6) ways
the 6 positions at which GG types occur can be distributed among the 10 samples.
For each such arrangement of these 6 positions, there are k choices of GG plants to
place in each of these positions, and N − k choices of non-GG plants to place in the
remaining 10 − 6 = 4 positions. Thus, |V | = (10 choose 6) k^6 (N − k)^4 . By (2.7),

P(V ) = (10 choose 6) k^6 (N − k)^4 / N^10 = (10 choose 6) (k/N)^6 (1 − k/N)^4 = (10 choose 6) fGG^6 (1 − fGG)^4 .
(This problem is not so interesting biologically, but it serves as a review of binomial
probabilities!)
By generalizing the argument of the last example, the probability that individuals labeled x appear m times in a random sample of size n is

(n choose m) fx^m (1 − fx )^{n−m} .    (2.8)
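As a sanity check on (2.8) (our illustration, with a hypothetical label frequency fx = 0.3), the following sketch compares the formula with a direct simulation; math.comb is Python’s binomial coefficient.

    import math, random

    n, m = 10, 6
    fx = 0.3                                     # hypothetical label frequency

    formula = math.comb(n, m) * fx**m * (1 - fx)**(n - m)   # (2.8)

    trials = 200_000
    hits = sum(
        sum(random.random() < fx for _ in range(n)) == m
        for _ in range(trials)
    )
    print(formula, hits / trials)                # the two numbers should agree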
Our model for sampling n times from S is consistent with our model for a
single sample, in the following sense. For any k, where 1 ≤ k ≤ n, the k th sample,
considered by itself, is a single random sample from S. That is, if s is any individual
in the population, the probability that s is selected in the k th draw is 1/|S|. This
is true because the number of sequences of length n in which a given s appears in
position k is |S|^{n−1} , and so the probability of s on draw k equals |S|^{n−1} /|S|^n = 1/|S|.
2.1.3 Independence
Two events U and V in a probability space are said to be independent if
P(U ∩ V ) = P(U )P(V ).    (2.9)
This property is called ‘independence’ because of the theory of conditional probability. When P(V ) > 0, the conditional probability of U given V is defined as

P(U |V ) = P(U ∩ V )/P(V ),    (2.10)
and it represents the probability U occurs given that V has occurred. (Conditional
probability will be reviewed in more depth presently.) If U and V are independent
and both have positive probability, then P(U |V ) = [P(U )P(V )]/P(V ) = P(U ), and
likewise P(V |U ) = P(V ); thus neither the probability of U nor that of V is affected by
conditioning on the other event, and this is the sense in which they are independent.
Three events U1 , U2 , U3 are said to be independent if they are pairwise independent—
that is, Ui and Uj are independent whenever i ≠ j—and if, in addition,

P(U1 ∩ U2 ∩ U3 ) = P(U1 )P(U2 )P(U3 ).    (2.11)
The reason for the last condition is that we want independence of the three events
to mean that any one event is independent of any combinations of the other events,
for example, that U1 is independent of U2 , of U3 , of U2 ∩ U3 , U2 ∪ U3 , etc. Pairwise
independence is not enough to guarantee this, as the reader will discover by doing
Exercise 2.4. However, adding condition (2.11) is enough—see Exercise 2.5.
The generalization to more than three events is straightforward. Events U1 , . . . , Un
are independent if
P(Ur1 ∩ · · · ∩ Urk ) = P(Ur1 ) · · · P(Urk )    (2.12)

for every k, 2 ≤ k ≤ n, and every possible subsequence 1 ≤ r1 < r2 <
· · · < rk ≤ n. This condition will imply, for example, that event Ui is independent
of any combination of events Uj for j 6= i.
The next result states a very important fact about repeated random sampling.
Proposition 1 In random sampling (with replacement) the outcomes of the different samples are independent of one another.
What this means is the following. Let B1 , . . . , Bn be subsets of the population,
S. Let s1 denote the result of the first sample, s2 the result of the second sample,
etc. Then
P({s1 ∈ B1 } ∩ {s2 ∈ B2 } ∩ · · · ∩ {sn ∈ Bn })
= P({s1 ∈ B1 }) P({s2 ∈ B2 }) · · · P({sn ∈ Bn })
= (|B1 |/|S|) · (|B2 |/|S|) · · · (|Bn |/|S|).    (2.13)

This proposition will be the basis of generalizing random sampling to random variables in Section 2.3.
The proof is a calculation. We will do a special case only, because then the notation is simpler and no new ideas are required for the general case. The calculation
is really just a generalization of the one done in Example 2.2, Part II. Consider a
random sample of size 3 from S and let A, B, and C be three subsets of S and let
V be the set of all sequences of samples (s1 , s2 , s3 ) such that s1 ∈ A, s2 ∈ B, and
s3 ∈ C; thus, V = {s1 ∈ A} ∩ {s2 ∈ B} ∩ {s3 ∈ C}.
Clearly |V | = |A| · |B| · |C|. Hence
P(V ) = (|A| · |B| · |C|)/|S|^3 = (|A|/|S|) · (|B|/|S|) · (|C|/|S|).
2.1.4 The law of large numbers for random sampling
Large numbers laws equate theoretical probabilities to observed, limiting frequencies, as the number of repetitions of an experiment increases to infinity. This section
presents large numbers laws specifically for random sampling. A more complete and
general treatment appears later in the chapter using random variables.
Let S be a finite population, and imagine sampling from S (with replacement)
an infinite number of times, so that each sample is independent from all others. If
n is any positive integer, the first n samples from this sequence will be a random
sample from S of size n, as defined in the previous section. Let A be a subset of S.
Define

f^{(n)}(A) = (number of sampled values in A among the first n samples) / n.
This quantity is random, because it depends on the actual experimental outcome,
and so is called the empirical frequency of A in the first n trials.
Theorem 1 (Strong Law of Large Numbers for random sampling.) For a sequence
of independent random samples,

lim_{n→∞} f^{(n)}(A) = |A|/|S| = P(A), with probability one.    (2.14)
The proof, which is not elementary, is omitted. This theorem gives an interpretation of what it means to assign a probability to an event: it is the frequency
with which that event occurs in the limit of an infinite sequence of independent
trials. (Not all probabilists and statisticians agree with this interpretation, but this
disagreement does not affect the truth of the theorem.)
If x is a label, and if f_x^{(n)} is the empirical frequency of individuals with label x
in the first n samples, then Theorem 1 says

lim_{n→∞} f_x^{(n)} = fx .
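The convergence can be watched numerically. The sketch below (ours; the population is invented, with fG = 0.3) follows the empirical frequency of a label along a single long run of samples.

    import random

    population = ["G"] * 3 + ["Y"] * 7           # hypothetical labels, f_G = 0.3

    hits = 0
    checkpoints = {10, 100, 10_000, 1_000_000}
    for n in range(1, 1_000_001):
        hits += random.choice(population) == "G"
        if n in checkpoints:
            print(n, hits / n)                   # f_G^(n), drifting toward 0.3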
There is also a weak law of large numbers, which does have an elementary proof.
Chebyshev’s inequality (treated later in this chapter) implies that for any a > 0,
P(|f^{(n)}(A) − f(A)| > a) ≤ f(A)(1 − f(A))/(na^2) ≤ 1/(4na^2).    (2.15)
(The last inequality is a consequence of the fact that if 0 ≤ p ≤ 1, p(1 − p) ≤ 1/4.)
It follows that

lim_{n→∞} P(|f^{(n)}(A) − f(A)| > a) = 0.    (2.16)
This is the weak law. It can be derived from the strong law, but the derivation
via Chebyshev’s inequality gives useful quantitative bounds on the probabilities
of deviations of f (n) (A) from its theoretical limit, f (A). A more sophisticated
argument, using moment generating functions and Markov’s inequality, leads to a
better bound than (2.15) called Chernoff’s inequality:

P(|f^{(n)}(A) − f(A)| > a) < 2e^{−2na^2} .    (2.17)
The law of large numbers motivates an important approximation used in population genetics models. Consider repeated random sampling from a population of
organisms in which the frequency of some genotype, call it L, is fL . By the weak
law of large numbers, the frequency of L in a large sample will be close to fL with
high probability. Therefore, when studying large samples from a population, we
can replace the empirical frequency, which is random, by fL , which is not, and obtain a good approximate model. This approximation is at the heart of the infinite
population models studied in Chapter 3.
2.1.5 Conditioning, Rule of total probabilities, Bayes’ rule
Let U and V be two events, with P(V ) > 0. Recall that the conditional probability
of U given V is defined as
P(U |V ) = P(U ∩ V )/P(V ),
and interpreted as the probability U occurs knowing that V has occurred. Conditional probabilities are often used for modeling and for calculation, when events are
not independent. A very important tool for these applications is the rule of total
probabilities.
Proposition 2 Let the events V1 , . . . , Vn form a disjoint partition of the outcome
space Ω, that is, Ω = V1 ∪ · · · ∪ Vn and Vi ∩ Vj = ∅ if i ≠ j. Assume that P(Vi ) > 0
for all i. Then for any event U ,
P(U ) = P(U |V1 )P(V1 ) + P(U |V2 )P(V2 ) + · · · + P(U |Vn )P(Vn ).    (2.18)
This is easy to derive. For any i, the definition of conditional probability implies
P(U ∩ Vi ) = P(U |Vi )P(Vi ).    (2.19)
Since V1 , . . . , Vn form a disjoint partition of Ω, U = [U ∩ V1 ]∪[U ∩ V2 ]∪· · ·∪[U ∩ Vn ],
and it follows from axiom (P3) for probability spaces that
P(U ) = P(U ∩ V1 ) + · · · + P(U ∩ Vn )
= P(U |V1 )P(V1 ) + · · · + P(U |Vn )P(Vn ).
Example 2.3. In a cross of Mendel’s pea plants, each parent plant contributes one
allele at each locus. Thus a cross of a GG with another GG produces a GG offspring
and the cross of a GG with a Y Y produces a GY offspring. What happens if a parent
is Y G? In this case, it is assumed that the parent passes on each allele with equal
probability, so it contributes Y with probability 0.5 and G with probability 0.5.
This can be expressed as a statement about conditional probabilities:
P(G|GY ) = P(Y |GY ) = 0.5,

where P(G|GY ) is shorthand for the probability that a parent contributes G to an
offspring given that it has genotype GY . Now suppose a plant is selected at random
from a population and used to fertilize another plant. What is the probability this
selected plant contributes allele G to the offspring? Let G represent the event that
it contributes G. Let GG, GY , and Y Y represent the event that the selected plant
has genotype GG, GY , or Y Y , respectively, and recall that random sampling means
P(GG) = fGG , P(GY ) = fGY , P(Y Y ) = fY Y . Then clearly, P(G|GG) = 1 and
P(G|Y Y ) = 0. By the rule of total probabilities,
P(G) = P(G|GG)P(GG) + P(G|GY )P(GY ) + P(G|Y Y )P(Y Y ) = fGG + (0.5)fGY ,
is the probability the randomly selected parent passes G to an offspring.
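In code, the rule of total probabilities is a one-line weighted sum. The sketch below (our illustration, with invented genotype frequencies) also simulates the two-stage experiment—first choose a genotype, then a contributed allele—and recovers the same number.

    import random

    f = {"GG": 0.2, "GY": 0.5, "YY": 0.3}        # hypothetical genotype frequencies
    p_G_given = {"GG": 1.0, "GY": 0.5, "YY": 0.0}

    # rule of total probabilities (2.18)
    p_G = sum(p_G_given[v] * f[v] for v in f)
    print(p_G)                                    # f_GG + 0.5 * f_GY = 0.45

    # simulate: choose a genotype, then whether it contributes G
    trials = 100_000
    genotypes = list(f)
    weights = [f[v] for v in genotypes]
    hits = sum(
        random.random() < p_G_given[random.choices(genotypes, weights)[0]]
        for _ in range(trials)
    )
    print(hits / trials)                          # close to 0.45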
The rule of total probabilities generalizes to conditional probabilities. Again, let
V1 , . . . , Vn be a disjoint partition of Ω. Assume P(W ) > 0. Then
P(U |W ) = Σ_{i=1}^{n} P(U |Vi ∩ W ) P(Vi |W ).    (2.20)
The proof is very similar to that for the ordinary rule of total probabilities. Write
out all the conditional probabilities using the definition (2.10) and use the fact that
P(U ∩ W ) = Σ_{i=1}^{n} P(U ∩ Vi ∩ W ). The details are left as an exercise. This conditioned
version of the total probability rule will be used for analyzing Markov chains in
Chapter 4.
Another important formula using conditional probabilities is Bayes’ rule. By
the total probability rule, P(U ) = P(U |V )P(V ) + P(U |V c )P(V c ), where V c is the
complement V c = Ω − V of V . Also, P(U ∩ V ) = P(U |V )P(V ). Combining these
formulae gives Bayes’ rule:
P(V |U ) = P(U ∩ V )/P(U ) = P(U |V )P(V ) / [P(U |V )P(V ) + P(U |V^c )P(V^c )].    (2.21)
Example 2.4. Suppose, in the previous example, that the randomly chosen parent
plant contributes G. What is the probability that the parent was GY ? Keeping
the same notation, this problem asks for P(GY |G), and by Bayes’ rule it is

P(GY |G) = P(G|GY )P(GY )/P(G) = (0.5)fGY / (fGG + (0.5)fGY ) = fGY / (2fGG + fGY ).
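Continuing the numerical sketch from Example 2.3 (invented frequencies again), Bayes’ rule becomes a two-line computation.

    f_GG, f_GY, f_YY = 0.2, 0.5, 0.3          # hypothetical genotype frequencies

    p_G = f_GG + 0.5 * f_GY                   # rule of total probabilities (2.18)
    print((0.5 * f_GY) / p_G)                 # Bayes' rule (2.21): P(GY | G)
    print(f_GY / (2 * f_GG + f_GY))           # simplified form; same number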
2.1.6 Exercises
2.1. Population I ({k1 , . . . , kN }) and population II ({ℓ1 , . . . , ℓM }) are, respectively,
populations of male and female gametes of Mendel’s peas. Alice intends to perform a
cross between the two populations. Assume she randomly chooses a pair of gametes,
one from population I and one from II. The outcome space of this experiment is the
set of pairs (ki , ℓj ), where 1 ≤ i ≤ N and 1 ≤ j ≤ M . Assume that all pairs are
equally likely to be chosen.
a) Let Ui be the event that the gamete from population I is ki . Explicitly identify
this event as a subset of the outcome space and determine its probability. Let
Vj be the event the gamete from population II is ℓj . Show Ui and Vj are
independent.
b) Assume that r of the gametes in population I and s of the gametes in population II carry the allele G for green peas and that the remainder carry allele Y .
What is the probability that both gametes selected have genotype G? What
is the probability that one of the selected gametes has genotype G and the
other has genotype Y ?
2.2. If junk DNA (see Chapter 1) has no function, no selective pressure has acted
on it and all possible sequences should be equally likely in a population, because
it mutates randomly as time passes. Thus, a hypothetical
model for a randomly selected piece of junk DNA that is N base pairs long is the
uniform distribution on all DNA sequences of length N . This is equivalent to a
random sample of size N from the population of DNA letters {A, T, G, C}. (This
model is a preview of the independent sites model with equal base probabilities—see
Chapter 6.) Assume this model to do the following problem.
A four bp-long DNA segment is selected at random and sequenced. Find the
probability that the selected sequence contains exactly two adenine (A) bases.
2.3. Randomly select two four-base-long DNA segments x1 x2 x3 x4 and y1 y2 y3 y4 and
align them so that xi sits directly above yi for each i = 1, 2, 3, 4.
a) Assume that the selection of the x-sequence and the selection of the y-sequence
are independent, and that both are random samples of size 4 from the DNA
alphabet. Construct a probability space for the aligned pair of sequences.
b) What is the probability that both the x-sequence and the y-sequence begin
with A? What is the probability that x1 and y1 are equal? What is the
probability that an A is aligned with A exactly twice in the aligned pair of
sequences? What is the probability that x1 = y1 , x2 = y2 , x3 = y3 , x4 = y4 ?
2.4. Denoting heads by 1 and tails by 0, the outcome space of four tosses of a coin is the
set of all sequences (η1 , η2 , η3 , η4 ) of 0’s and 1’s. The uniform probability measure
on this space models four independent tosses of a fair coin. Let U1 be the event that
the first two tosses are heads, U2 the event that the last two tosses are heads, and
U3 the event the second and third tosses produce one head and one tail in either
order. Show that these events are pairwise independent, but not independent.
2.5. Assume that U1 , U2 , U3 are independent. Show that U3 and U1 ∩ U2 are
independent and that U3 and U1 ∪ U2 are independent.
2.6. Three coins are in a box. Two are fair and one is loaded; when flipped, the
loaded coin comes up heads with probability 2/3. A coin is chosen by random
sampling from the box and flipped. What is the probability that it comes up heads?
Given that it comes up heads, what is the conditional probability that it is the
loaded coin?
2.7. Probability space model for sampling without replacement. Let S be a population of size N and let n ≤ N . The outcome space for a random sample of size n
drawn from S without replacement is the set Ω of sequences (s1 , s2 , . . . , sn ) in which
si ∈ S for all i and in which s1 , . . . , sn are all different. The probability model for
random sampling without replacement is the uniform probability measure on Ω.
What is the probability of any single sequence in Ω? If k individuals in S bear
the label x and ` ≤ k, what is the probability exactly ` individuals with label x
appear in the sample of size n?
2.2 Random Variables
Suppose you are tossing a coin. Label the outcome of the next toss by the variable X,
setting X = 1 if the toss is heads, X = 0 if tails. Then X is an example of a random
variable, a variable whose value is not fixed, but random. In general, a variable
Y that takes a random value in a set E is called an E-valued random variable. By
convention, the term random variable by itself, with no explicit mention of E, means
a random variable taking values in a subset of the real numbers. Random variables
that are not real-valued do appear in some applications in this text; for example,
random variables modeling the bases appearing along a DNA sequence take values
in the DNA alphabet {A, C, G, T }. It will always be clear from context when a
random variable is not real-valued.
In the previous section, probability spaces were used to model random phenomena. By thinking of outcomes as random variables, we could instead construct
random variable models. The two approaches are really equivalent. In the random variables approach, we are just labeling the outcome by X and replacing the
outcome space, Ω, by the set, E, of possible values of X. But the language of random variables is actually much more convenient. It provides a better framework for
mathematical operations on outcomes, for defining expected values, and for stating
limit laws. And it makes complicated models, involving many interacting random
components, easier to formulate, because one can just use a random variable for each
component.
In advanced probability theory, random variables are defined as functions on
probability spaces. Thus a random variable assigns to each possible outcome, ω,
some value X(ω), which might represent a special attribute of ω. This viewpoint
may be useful to keep in mind, but is not explicitly used in this text.
2.2.1 Probability Mass Functions, Independence, Random Samples
A random variable taking values in a finite set E = {s1 , . . . , sN }, or a countably
infinite set E = {s1 , s2 , . . .}, is said to be discrete. This section will review discrete
random variables exclusively.
When X takes values in the discrete space E, the function

pX (x) = P(X = x),    for x in E,    (2.22)
is called the probability mass function of X. Modeling the outcome of a random
experiment as a discrete random variable is equivalent to specifying its probability
mass function. Once this is known, the additivity properties of probability determine
the probability of any event concerning X, because if U is any subset of E,

P({X ∈ U }) = Σ_{x∈U} pX (x).    (2.23)
(The notation on the right-hand side means that the sum is taken over all x in U .)
Note in particular that 1 = P(X ∈ E) = Σ_{x∈E} pX (x).
In general, any function p on a discrete set E that satisfies 0 ≤ p(x) ≤ 1, for each
x ∈ E, and Σ_{x∈E} p(x) = 1, is called a probability mass function and is a potential
model for an E-valued random variable.
Example 2.5. Bernoulli random variables. X is a Bernoulli random variable with
parameter p if X takes values in {0, 1} and
pX (0) = 1 − p,    pX (1) = p.    (2.24)
For later reference, it is useful to note that the definition (2.24) can be written in
functional form as
pX (s) = p^s (1 − p)^{1−s} ,    for s in the set {0, 1}.    (2.25)
The Bernoulli random variable is the coin toss random variable. By convention,
the outcome 1 usually stands for heads and 0 for tails. It is common also to think
of Bernoulli random variables as modeling trials that can result either in success
(X = 1) or failure (X = 0). In this case the parameter p is the probability of
success. We shall often use the language of success/failure trials when discussing
Bernoulli random variables.
The term Bernoulli random variable is also used for any random variable taking
on only two possible values s0 and s1 , even if these two values differ from 0 and 1.
We shall indicate explicitly when this is the case. Otherwise s0 = 0 and s1 = 1 are
the default values.
Example 2.6. Uniform random variables. Let S be a finite set, which may be
thought of as a population. An S-valued random variable X is said to be uniformly
distributed on S if its probability mass function is pX (s) = 1/|S| for all s ∈ S. Obviously, X then models a single random sample, as defined in the previous section,
in the language of random variables.
Usually, we want to analyze the outcomes of several random trials at once; often
they simply represent repetitions of the same experiment. The overall outcome can
then be described as a sequence X1 , . . . , Xn of random variables. Now however,
prescribing the probability mass function of each random variable separately is not
enough to define a model. It is necessary instead to specify their joint probability
mass function:
pX1 ···Xn (x1 , . . . , xn ) = P(X1 = x1 , . . . , Xn = xn ),    x1 , . . . , xn ∈ E.    (2.26)
(We assume here that all the random variables take values in the same discrete set
E.) The joint probability mass function determines the probabilities of any event
involving X1 , . . . , Xn jointly. If V is any subset of E × · · · × E (the n-fold product),
P((X1 , . . . , Xn ) ∈ V ) = Σ_{(x1 ,...,xn )∈V} pX1 ···Xn (x1 , . . . , xn ).
In particular, the individual distributions of the Xi are the so-called marginals of p:
pXi (x) = Σ_{{(x1 ,...,xn ): xi = x}} pX1 ···Xn (x1 , . . . , xn ),

where the sum is over all possible sequences in which the ith term is held fixed at
xi = x. For example, when n = 2, pX1 (x) = Σ_{x2 ∈E} pX1 X2 (x, x2 ).
A set of random variables X1 , . . . , Xn with values in E is said to be independent
if
• the events {X1 ∈ U1 }, {X2 ∈ U2 }, . . . , {Xn ∈ Un } are independent for every
choice of subsets U1 , U2 , . . . , Un of E.
This situation occurs very commonly in applied and statistical modeling. When it
does, the joint distribution of X1 , . . . , Xn is particularly simple. If x1 , . . . , xn are
elements in E, the events {X1 = x1 }, . . . , {Xn = xn } are independent, and hence
pX1 ···Xn (x1 , · · · , xn ) = P (X1 = x1 , . . . , Xn = xn )
= P (X1 = x1 ) P (X2 = x2 ) · · · P (Xn = xn )
= pX1 (x1 )pX2 (x2 ) · · · pXn (xn )    (2.27)
for any choice of x1 , . . . , xn in E. In words, the joint probability mass function is
the product of the probability mass functions of the individual random variables.
In fact, this condition characterizes independence.
Theorem 2 The set X1 , . . . , Xn of discrete random variables with values in a discrete space E is independent if and only if (2.27) is true for all choices x1 , . . . , xn
of values from E.
We know already that independence implies (2.27), so to prove Theorem 2 requires showing the converse. The reader should prove the case n = 2 as an exercise;
once this case n = 2 is understood, the general case can be proved by induction on
n.
Example 2.7. Independent Bernoulli random variables. Suppose we flip a coin n
times. Assume the flips are independent and p is the probability of heads. If heads
is represented by 1 and tails by 0, we can model this as a sequence X1 , . . . , Xn of
independent Bernoulli random variables, each with parameter p. Using (2.25) the
joint distribution is
P(X1 = s1 , . . . , Xn = sn ) = p^{s1} (1 − p)^{1−s1} · · · p^{sn} (1 − p)^{1−sn}
= p^{s1 +···+sn} (1 − p)^{n−(s1 +···+sn )} ,    (2.28)

for any sequence (s1 , . . . , sn ) of 0’s and 1’s.
Examples, such as the last one, featuring independent random variables all sharing a common probability mass function are so important that they get a special
name.
Definition. The random variables in a finite or infinite sequence are said to be
independent and identically distributed, abbreviated i.i.d., if they are independent and all have the same probability mass function.
As an example, suppose S is a finite population, and let X1 , . . . , Xn be independent, each uniformly distributed on S. It follows from (2.27) that every
sequence (s1 , . . . , sn ) of possible values of X1 , . . . , Xn is equally likely. Therefore,
X1 , . . . , Xn is a model, expressed in the language of random variables, for a random
sample of size n from S, as defined in Section 2.1. This is a very important point.
I.i.d. sequences of random variables generalize random sampling to possibly nonuniform distributions. Indeed, many texts adopt the following terminology.
Definition. Let p be a probability mass function on a discrete set S. A random
sample of size n from distribution p is a sequence X1 , . . . , Xn of independent
random variables, each having probability mass function p.
To avoid confusion with random sampling as we have originally defined it, that is,
with respect to a uniform measure, we shall generally stick to the i.i.d. terminology
when p is not uniform.
Example 2.8. I.i.d. site model for DNA. Let X1 , . . . , Xn denote the successive bases
appearing in a sequence of randomly selected DNA. The i.i.d. site model assumes
that X1 , . . . , Xn are i.i.d. The i.i.d. site model with equal base probabilities imposes
the additional assumption that each Xi is uniformly distributed on {A, G, C, T }.
This latter model is the same as a random sample of size n from the DNA alphabet.
It coincides with the probability space model for junk DNA introduced in Exercise
2.2.
2.2.2 The basic discrete random variables
We have already defined discrete Bernoulli and uniform random variables. There
are several other important types repeatedly used in probabilistic modeling.
Binomial random variables. Let n be a positive integer and let p be a number in
[0, 1]. A random variable Y has the binomial distribution with parameters n, p
if Y takes values in the set of integers {0, 1, . . . , n} and has the probability mass
function

p(k) = (n choose k) p^k (1 − p)^{n−k} ,    for k in {0, 1, . . . , n}.    (2.29)
The binomial distribution is the probability model for the number of successes in
n independent trials, where the probability of success in each trial is p. To see this,
let X1 , X2 , . . . , Xn be i.i.d. Bernoulli random variables with p = P(Xi = 1); then the
event Xi = 1 represents a success on trial i, and the sum X1 + · · · + Xn counts the number
of successes. If (s1 , . . . , sn ) is a sequence of 0’s and 1’s in which 1 appears exactly k
times, then we know from (2.28) that

P((X1 , . . . , Xn ) = (s1 , . . . , sn )) = p^k (1 − p)^{n−k} .

There are (n choose k) such sequences, because each such sequence corresponds to a choice
of k positions among n at which 1 appears. Thus

P(X1 + · · · + Xn = k) = (n choose k) p^k (1 − p)^{n−k} .
Geometric random variables. A random variable Y has the geometric distribution with parameter p, where 0 < p < 1, if the possible values of Y are the positive
integers {1, 2, . . .}, and

P(Y = k) = (1 − p)^{k−1} p,    k = 1, 2, . . .    (2.30)
Like the binomial random variable, the geometric random variable has an interpretation in terms of success/failure trials or coin tossing. Let X1 , X2 , . . . be an
infinite sequence of independent Bernoulli random variables, each with parameter
p. The geometric random variable with parameter p models the time of the first
success in such a sequence. Indeed, the first success occurs in trial k if and only
if X1 = 0, X2 = 0, . . . , Xk−1 = 0, Xk = 1. By independence, this occurs with
probability
P(X1 = 0) · · · P(Xk−1 = 0) · P(Xk = 1) = (1 − p)^{k−1} p,
which is exactly the expression in (2.30).
The Bernoulli trial interpretation of geometric random variables can simplify
calculations. Suppose we want to compute P(Y > j) for a geometric random variable.
This probability is the infinite sum Σ_{k=j+1}^{∞} P(Y = k) = Σ_{k=j+1}^{∞} (1 − p)^{k−1} p. But
Y > j is equivalent to the event that there are no successes in the first j independent
Bernoulli trials, and, as the trials are independent and the probability of failure is
1 − p,

P(Y > j) = (1 − p)^j ,    (2.31)

without having to do a sum. Of course, the sum is not hard to do using the formula
for summing a geometric series; the reader should show directly that the series
Σ_{k=j+1}^{∞} (1 − p)^{k−1} p equals (1 − p)^j .
Geometric random variables have an interesting property, called the memoryless
property, which follows easily from (2.31). If X is geometric with parameter p,
P(X > k + j|X > k) = P(X > k + j)/P(X > k) = (1 − p)^{k+j} /(1 − p)^k = (1 − p)^j = P(X > j).    (2.32)
To understand what this says, imagine repeatedly playing independent games, each
of which you win with probability p and lose with probability 1−p. Let X be the
first trial which you win; it is a geometric random variable. Now suppose you have
played k times without success (X > k), and you want to know the conditional
probability of waiting at least j additional trials before you win. Property (2.32)
says that X has no memory of the record of losses; the conditional probability
of waiting at least j additional trials for a success, given that you have lost the first k
trials, is the same as the probability of waiting at least j trials in a game that starts
from scratch. This may sound odd at first, but it is an immediate consequence of
the independence of the plays.
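The memoryless property (2.32) is also easy to test numerically. In the sketch below (ours), geometric variables are generated exactly as in the success/failure interpretation: count Bernoulli trials until the first success.

    import random

    def geometric(p):
        """Trial number of the first success in i.i.d. Bernoulli(p) trials."""
        k = 1
        while random.random() >= p:
            k += 1
        return k

    p, k, j = 0.3, 4, 2
    draws = [geometric(p) for _ in range(200_000)]

    beyond_k = [x for x in draws if x > k]
    cond = sum(x > k + j for x in beyond_k) / len(beyond_k)
    print(cond, (1 - p)**j)   # both should be near P(X > j) = 0.49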
Remark. Some authors define the geometric random variable to take values in the
natural numbers 0, 1, 2, . . . with probability mass function P(X = k) = (1 − p)^k p,
k ≥ 0.
Poisson random variables. A random variable Z has the Poisson distribution with parameter λ > 0, if the possible values of Z are the natural numbers
0, 1, 2, . . . and

P(Z = k) = (λ^k /k!) e^{−λ} ,    k = 0, 1, 2, . . .    (2.33)
Poisson random variables arise often in applications as limits of binomials, when the
number of trials is large but the probability of success per trial is small. This will
be explained in Chapter 5.
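The following sketch (ours) previews that limit by comparing Binomial(n, λ/n) probabilities with (2.33) for a moderately large n.

    import math

    lam, n = 2.0, 1_000
    p = lam / n
    for k in range(6):
        binom = math.comb(n, k) * p**k * (1 - p)**(n - k)
        poisson = lam**k / math.factorial(k) * math.exp(-lam)   # (2.33)
        print(k, round(binom, 5), round(poisson, 5))            # nearly equal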
Multinomial distributions. (Multinomial random variables appear mainly in
the discussion of the coalescent in Chapter 4.) We start with an example. A box
contains marbles of three colors, red, green, and blue. Let p1 be the probability
of drawing a red, p2 the probability of drawing a green, and p3 the probability
of drawing a blue. Of course, p1 + p2 + p3 = 1. Sample the box n times with
replacement, assuming all samples are independent of one another, and let Y1 be
the number of red marbles, Y2 the number of greens, and Y3 the number of blues in
the sample. The random variables Y1 , Y2 , Y3 each have a binomial distribution, but
they are not independent—indeed, they must satisfy Y1 + Y2 + Y3 = n—and so we
cannot use (2.27) to compute their joint distribution. However the individual draws
are independent. Let the result Xi of draw i be r, g, or b, according to the color of
the marble drawn. If (s1 , . . . , sn ) is a specific sequence consisting of letters from the
set {r, g, b}, and if this sequence contains k1 letter r’s, k2 letter g’s, and k3 letter
b’s,

P((X1 , . . . , Xn ) = (s1 , . . . , sn )) = P(X1 = s1 ) · · · P(Xn = sn ) = p1^{k1} p2^{k2} p3^{k3} .    (2.34)
On the other hand, there are a total of n!/(k1 ! k2 ! k3 !) different sequences of
length n with k1 red, k2 green, and k3 blue marbles. Thus,

P(Y1 = k1 , Y2 = k2 , Y3 = k3 ) = [n!/(k1 ! k2 ! k3 !)] p1^{k1} p2^{k2} p3^{k3} ,    (2.35)
for any non-negative integers k1 , k2 , k3 such that k1 + k2 + k3 = n.
The general multinomial distribution is defined by a generalization of formula
(2.35). To state it, recall the general notation

(n choose k1 , . . . , kr ) = n!/(k1 ! · · · kr !).
Fix two positive integers n and r with 0 < r < n. Suppose that for each index
i, 1 ≤ i ≤ r, a probability pi is given satisfying 0 < pi < 1, and assume also that
p1 + · · · + pr = 1. The random vector Z = (Y1 , . . . , Yr ) is said to have the multinomial
distribution with parameters (n, r, p1 , . . . , pr ) if

P(Y1 = k1 , . . . , Yr = kr ) = (n choose k1 , . . . , kr ) p1^{k1} · · · pr^{kr} ,    (2.36)

for any sequence of non-negative integers k1 , . . . , kr such that k1 + · · · + kr = n.
The interpretation of the multinomial distribution is just a generalization of
the experiment with three marbles. Suppose a random experiment with r possible
outcomes 1, . . . , r is repeated independently n times. If, for each i, 1 ≤ i ≤ r, pi is
the probability that i occurs and Yi equals the number of times outcome i occurs,
then (Y1 , . . . , Yr ) has the multinomial distribution with parameters (n, r, p1 , . . . , pr ).
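For the marble example, the multinomial probability (2.35) can be evaluated directly and checked by simulation, as in the sketch below (ours, with invented parameters n = 12 and p = (0.5, 0.3, 0.2)).

    import math, random

    n = 12
    p = [0.5, 0.3, 0.2]                          # red, green, blue
    k = [6, 4, 2]

    coef = math.factorial(n) // (
        math.factorial(k[0]) * math.factorial(k[1]) * math.factorial(k[2])
    )
    exact = coef * p[0]**k[0] * p[1]**k[1] * p[2]**k[2]   # (2.35)

    trials = 200_000
    hits = 0
    for _ in range(trials):
        draw = random.choices(range(3), weights=p, k=n)   # n independent draws
        if [draw.count(i) for i in range(3)] == k:
            hits += 1
    print(exact, hits / trials)                  # the two numbers should agree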
2.2.3 Continuous random variables
A function f defined on the real numbers is called a probability density function if
f (x) ≥ 0 for all x and

∫_{−∞}^{∞} f (x) dx = 1.    (2.37)

Given such an f , we say that X is a continuous random variable with probability
density f if

P(a ≤ X ≤ b) = ∫_a^b f (x) dx,    for any a ≤ b.    (2.38)
If X is a continuous random variable, we often write fX to denote its density. Modeling a continuous random variable means specifying its probability density function.
Using principle (2.4) about limits of increasing events, we can extend (2.38) to
the case in which either of a or b is infinite. Thus,
P(X ≤ b) = lim_{a↓−∞} P(a ≤ X ≤ b) = lim_{a↓−∞} ∫_a^b fX (x) dx = ∫_{−∞}^b fX (x) dx.    (2.39)

Similarly,

P(X ≥ a) = ∫_a^∞ fX (x) dx.    (2.40)

Taking b → ∞ in (2.39) yields P(−∞ < X < ∞) = ∫_{−∞}^{∞} fX (x) dx = 1, as it should
be, and this is the reason for imposing condition (2.37) in the definition of a probability density function.
The range of a continuous random variable is truly an uncountable continuum
of values. Indeed, if b is any single point, (2.38) implies P(X = b) = ∫_b^b fX (x) dx = 0.
It follows that if S = {s1 , s2 , s3 , . . .} is any discrete subset of real numbers,

P(X ∈ S) = Σ_{i=1}^{∞} P(X = si ) = 0.
Thus the range of a continuous random variable cannot be reduced to any discrete
set.
Because P(X = c) = 0 for any c if X is a continuous random variable, P(a < X ≤
b), P(a ≤ X < b), and P(a < X < b) are all equal to P(a ≤ X ≤ b), and hence all
these probabilities are described by the basic formula (2.38).
Definite integrals of non-negative functions compute areas under curves. Thus
formula (2.38) says that P(a < X ≤ b) is the area of the region bounded by the graph
y = fX (x) of the density function, the x-axis, and the lines x = a and x = b. This viewpoint
is helpful in working with continuous random variables and probability densities. In
particular, if dx is interpreted as a ‘small’ positive number, and fX is continuous in
an interval about x,
P(x ≤ X ≤ x + dx) ≈ fX (x) dx,
because the region between y = f (x) and the x-axis from x to x+dx is approximately
a rectangle with height fX (x) and width dx. This is a useful heuristic in applying
intuition developed for discrete random variables to the continuous case. Many
formulas for discrete random variables translate to formulas for continuous random
variables by replacing pX (x) by fX (x) dx, and summation by integration.
So far, we have introduced separate concepts, the probability mass function and
the probability density function, to model discrete and continuous random variables.
There is a way to include them both in a common framework. For any random variable, discrete or continuous, define its cumulative (probability) distribution function
(c.d.f.) by
FX (x) = P(X ≤ x).    (2.41)
Knowing FX , one can in principle compute P(X ∈ U ) for any set U . For example,
the probability that X falls in the interval (a, b] is
P(a < X ≤ b) = P(X ≤ b) − P(X ≤ a) = FX (b) − FX (a).    (2.42)
The probability that X falls in the union of two disjoint intervals (a, b] ∪ (c, d] is
then P(X ∈ (a, b]) + P(X ∈ (c, d]) = FX (b) − FX (a) + FX (d) − FX (c), and so on.
Since (a, b) is the union over all integers n of the intervals (a, b − 1/n],

P(a < X < b) = lim_{n→∞} P(a < X ≤ b − 1/n) = lim_{n→∞} [FX (b − 1/n) − FX (a)] = FX (b−) − FX (a),    (2.43)

where FX (b−) = lim_{x→b−} FX (x).
For a discrete random variable taking values in the finite or countable set S,
FX (b) − FX (a) = Σ_{y∈S, a<y≤b} pX (y). In particular, if s ∈ S and pX (s) > 0, then

0 < pX (s) = P(X = s) = P(X ≤ s) − P(X < s) = FX (s) − FX (s−).

Thus the c.d.f. of X jumps precisely at the points of S and the size of the jump at
any s is the probability that X = s. In between jumps, FX is constant. Thus the
probability mass function of X can be recovered from FX .
If X is a continuous random variable, then FX (x) = ∫_{−∞}^x fX (y) dy. The fundamental theorem of calculus then implies

F′X (x) = fX (x)    (2.44)
at any continuity point x of fX . Thus, we can also recover the density of a continuous random variable from its c.d.f.
It is also possible to define cumulative distribution functions which are combinations of both jumps and differentiable parts, or which are continuous, but admit
no probability density. These are rarely encountered in applications.
2.2.4 Basic continuous random variables
The uniform distribution. This is the simplest continuous distribution. Consider
a random variable X which can take on any value in the finite interval (α, β). It is
said to be uniformly distributed on (α, β) if its density function has the form
f (x) = 1/(β − α), if α < x < β;    f (x) = 0, otherwise.
If this is the case, X will have no preference as to its location in (α, β). Indeed, if
α ≤ a < b ≤ β,
P(a < X ≤ b) = ∫_a^b 1/(β − α) dy = (b − a)/(β − α),

and this answer depends only on the length of (a, b), not its position within (α, β).
Observe that the random variable which is uniform on (0, 1) is the random
variable version of the model in Example 2.1 c) for selecting a point uniformly at
random from (0, 1).
The exponential distribution. The exponential distribution is a popular model
for waiting times between randomly occurring events. It is the continuous analogue
of the geometric distribution. A random variable is said to have the exponential
distribution with parameter λ, where λ > 0, if its density is
f (x) = λe^{−λx} , if x > 0;    f (x) = 0, otherwise.
Since the density function is 0 for x ≤ 0, an exponentially distributed random
variable can only take positive values. Thus, if X is exponentially distributed with
parameter λ, and if 0 ≤ a < b,
P(a < X ≤ b) = ∫_a^b λe^{−λx} dx = e^{−λa} − e^{−λb} .

In particular, if a ≥ 0,

P(X > a) = ∫_a^∞ λe^{−λx} dx = e^{−λa} .    (2.45)
Conversely, suppose that P(X > a) = e^{−λa} for a > 0. Then FX (a) = 1 − P(X > a) =
1 − e^{−λa} . By differentiating both sides and applying (2.44), fX (a) = F′X (a) = λe^{−λa}
for a > 0, and so X must be exponential with parameter λ. Thus, (2.45) is an
equivalent characterization of exponential random variables.
Exponential random variables have a memoryless property similar to the one for
geometric random variables. Indeed, from (2.45),

P(X > t + s|X > s) = P(X > t + s)/P(X > s) = e^{−λ(t+s)} /e^{−λs} = e^{−λt} .    (2.46)
The normal distribution. A random variable Z is said to be normally distributed or Gaussian, if it has a probability density of the form
φ(x; µ, σ^2 ) = (1/(σ√(2π))) e^{−(x−µ)^2 /2σ^2} ,    −∞ < x < ∞.    (2.47)
Here −∞ < µ < ∞ and σ^2 > 0. The parameters µ and σ^2 are respectively the mean
and variance of the random variable with density φ(x; µ, σ^2 ); (mean and variance
are reviewed in Section 2.3). The factor of σ√(2π) in the denominator of the density
insures that ∫_{−∞}^{∞} φ(y; µ, σ^2 ) dy = 1, as required for a probability density function.
If µ = 0 and σ^2 = 1, then Z is said to be a standard normal random variable.
A conventional notation for the density of a standard normal r.v. is

φ(x) = (1/√(2π)) e^{−x^2 /2} ,
and a conventional notation for its associated cumulative distribution function is
Φ(z) = P(Z ≤ z) = ∫_{−∞}^z (1/√(2π)) e^{−x^2 /2} dx.
Unlike the uniform and exponential distributions, Φ(z) admits no closed-form expression in terms of elementary transcendental and algebraic functions. Therefore,
computing probabilities of the form P(a < Z < b) = Φ(b) − Φ(a), where Z is standard normal, requires using either tables or a calculator or computer with a built-in
normal distribution function.
The importance of the normal distribution function stems from the Central Limit
Theorem, which is stated below in Section 2.4.
It is useful to abbreviate the statement that X is a normal random variable with
mean µ and variance σ^2 by X ∼ N(µ, σ^2 ), and we shall use this notation below.
Normal random variables have a very important scaling property. If Z is a
standard normal r.v., then σZ + µ ∼ N(µ, σ^2 ). Conversely, if X ∼ N(µ, σ^2 ), then
(X − µ)/σ is standard normal. A proof of this fact follows shortly. First we show how
to use it to calculate probabilities for any normal random variable using tables of the
standard normal, starting with an example problem. Let X be normal with
mean 1 and variance 4, and find P(X ≤ 2.5). Towards this end, define Z = (X − µ)/σ =
(X − 1)/2. Since the event X ≤ 2.5 is the same as Z ≤ (2.5 − 1)/2, or, equivalently,
Z ≤ .75, and since Z is standard normal, P(X ≤ 2.5) = P(Z ≤ .75) = Φ(.75).
Tables for Φ show that Φ(.75) = 0.7734.
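Instead of consulting tables, one can evaluate Φ with the error function in Python’s standard library, using the identity Φ(z) = (1 + erf(z/√2))/2; the sketch below (ours) redoes the example computation.

    import math

    def Phi(z):
        """Standard normal c.d.f. via the error function."""
        return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

    mu, sigma = 1.0, 2.0                 # X ~ N(1, 4)
    print(Phi((2.5 - mu) / sigma))       # P(X <= 2.5) = Phi(0.75), about 0.7734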
The general case is a simple extension of this argument. Let X be normal with
mean µ and variance σ^2 . For any a < b,

a < X < b    if and only if    (a − µ)/σ < (X − µ)/σ < (b − µ)/σ.
But (X − µ)/σ is a standard normal r.v. Hence
P (a < X < b) = Φ ((b − µ)/σ) − Φ ((a − µ)/σ) .
The proof that Z = (X − µ)/σ is standard normal if X ∼ N(µ, σ^2 ) is an
exercise in manipulating integrals and the definition of a probability density. It is
necessary to show:

FZ (x) = P(Z ≤ x) = ∫_{−∞}^x (1/√(2π)) e^{−y^2 /2} dy.    (2.48)

But the density of X is φ(x; µ, σ^2 ) = (1/(σ√(2π))) e^{−(x−µ)^2 /2σ^2} . Thus

P(Z ≤ x) = P((X − µ)/σ ≤ x) = P(X ≤ σx + µ) = ∫_{−∞}^{σx+µ} (1/(σ√(2π))) e^{−(y−µ)^2 /2σ^2} dy.

In the last integral, make the substitution u = (y − µ)/σ. The result is

P(Z ≤ x) = ∫_{−∞}^x (1/√(2π)) e^{−u^2 /2} du,
as required.
The Gamma distribution. A random variable is said to have the gamma distribution with parameters λ > 0 and r > 0 if its density is
f (x) = Γ(r)^{−1} λ^r x^{r−1} e^{−λx} , if x > 0;    f (x) = 0, otherwise,

where Γ(r) = ∫_0^∞ x^{r−1} e^{−x} dx. It can be shown by repeated integration by parts that
Γ(n) = (n − 1)! for positive integers n. Exponential random variables are gamma
random variables with r = 1.
2.2.5 Joint density functions
The notion of continuous random variable has a generalization to the multivariable
case.
Definition. Random variables X, Y are jointly continuous if there is a non-negative
function f (x, y), called the joint probability density of (X, Y ), such that
P(a1 < X ≤ b1 , a2 < Y ≤ b2 ) = ∫_{a1}^{b1} ∫_{a2}^{b2} f (x, y) dy dx,    (2.49)

for any a1 < b1 and a2 < b2 .
If (X, Y ) have joint density f , then in fact for any region U in the (x, y)-plane,

P((X, Y ) ∈ U ) = ∫∫_U f (x, y) dx dy.    (2.50)
In a rigorous treatment of probability this rule is derived as a consequence of (2.49).
Here we just state it as a fact that is valid for any region U such that the integral is
well-defined; this requires some technical restrictions on U , which we need not
worry about in applications.
Example 2.8. Let (X, Y ) have joint density
f (x, y) = 1/2, if 0 < x < 2 and 0 < y < 1;    f (x, y) = 0, otherwise.
The density is zero except in the rectangle (0, 2) × (0, 1) = {(x, y) : 0 < x < 2, 0 <
y < 1}, so the probability that (X, Y ) falls outside this rectangle is 0.
Let U be the subset of (0, 2) × (0, 1) for which y > x, as in Figure 2.1.

[Figure 2.1: the rectangle (0, 2) × (0, 1) in the (x, y)-plane, with the triangular region U = {(x, y) : y > x} shaded.]
The area of U is 1/2. Since f has the constant value 1/2 over U , the double
integral of f over U is 1/2 × area(U ) = 1/4. Thus

P(Y > X) = ∫∫_U f (x, y) dx dy = 1/4.
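The answer is easy to confirm by Monte Carlo (our sketch): draw X uniformly on (0, 2) and Y uniformly on (0, 1) independently, which, as explained after Theorem 3 below, realizes this joint density.

    import random

    trials = 1_000_000
    hits = sum(
        random.uniform(0.0, 1.0) > random.uniform(0.0, 2.0)   # Y > X
        for _ in range(trials)
    )
    print(hits / trials)   # close to 1/4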
The definition of joint continuity extends beyond the case of just two random
variables using multiple integrals of higher order; X1 , . . . , Xn are jointly continuous
with joint density f if
P((X1 , . . . , Xn ) ∈ U ) = ∫ · · · ∫_U f (x1 , . . . , xn ) dx1 · · · dxn .
Theorem 2 characterizing independence of discrete random variables generalizes
to the continuous case.
Theorem 3 The jointly continuous random variables X1 , · · · , Xn are independent
if and only if their joint density function f factors as
f (x1 , . . . , xn ) = fX1 (x1 )fX2 (x2 ) · · · fXn (xn ).    (2.51)
The density function of Example 2.8 is equal to f1 (x)f2 (y), where f1 (x) = 1/2
on (0, 2) and 0 elsewhere, and f2 (y) = 1 on (0, 1) and 0 elsewhere. The function
f1 is the density of a random variable uniformly distributed on (0, 2) and f2 the
density of a random variable uniformly distributed on (0, 1). Hence X and Y in
Example 2.8 are independent random variables, X being uniformly distributed on
(0, 2) and Y on (0, 1).
2.3 Expectation
In this section all random variables are real-valued.
2.3.1 Expectations; basic definitions
Definition. Let X be a discrete random variable with values in S, and let pX
denote its probability mass function. The expected value of X, also called the mean
of X, is
E[X] = Σ_{s∈S} s pX (s),    (2.52)
if the sum exists. (When S is an infinite set, the sum defining E[X] is an infinite
series and may not converge.) We shall often use µX to denote E[X].
The expected value of X is an average of the possible values X can take on,
where each possible value s is weighted by the probability that X = s. Thus, the
expected value represents the value we expect X to have, on average. This is made
more precise in the discussion of the law of large numbers.
The expected value of a continuous random variable is again a weighted average
of its possible values. Following the heuristic discussed in Section 2.2, to arrive at
the appropriate definition we replace the probability mass function pX in formula
(2.52) by fX(x) dx and replace the sum by an integral.
Definition. Let X be a continuous random variable. Then,

   E[X] = ∫_{−∞}^{∞} x fX(x) dx,        (2.53)

if the integral exists.
Example 2.9. Let X be a Bernoulli random variable with parameter p. Since
pX(0) = 1 − p and pX(1) = p, its expectation is

   E[X] = 0 · (1 − p) + 1 · p = p.        (2.54)
This is intuitive; if you win a dollar for heads and win nothing for tails (a nice game
to play!), you expect to earn an average of p dollars per play.
Example 2.10. Probabilities as expectations. Let U be an event. The indicator of U
is the random variable

   IU = 1 if U occurs;   IU = 0 otherwise.

This is a Bernoulli random variable with p = P(IU = 1) = P(U). Hence

   E[IU] = P(U).        (2.55)
This trick of representing the probability of U by the expectation of its indicator is
used frequently.
Example 2.11. Let X be uniformly distributed on the interval (0, 1). Since the
density of X equals one on the interval (0, 1) and 0 elsewhere,

   E[X] = ∫_{−∞}^{∞} x fX(x) dx = ∫_0^1 x dx = 1/2.
The answer makes perfect sense, because X shows no preference as to where it lies
in (0, 1), and so its average value should be 1/2.
Example 2.12. Let Y be exponential with parameter λ. An application of integration
by parts to compute the anti-derivative shows

   E[Y] = ∫_0^∞ x λ e^{−λx} dx = −(1/λ + x) e^{−λx} |_0^∞ = 1/λ.

This gives a nice physical interpretation of the meaning of the parameter λ: if Y is
a waiting time, λ is the inverse of the expected time to wait.
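This interpretation is easy to test by simulation, as in the sketch below (assuming numpy; λ = 0.5 is an illustrative choice). Note that numpy's exponential sampler is parametrized by the scale 1/λ rather than the rate λ.

    import numpy as np

    rng = np.random.default_rng(1)
    lam = 0.5                                             # illustrative rate
    sample = rng.exponential(scale=1.0 / lam, size=10**6)
    print(sample.mean(), 1.0 / lam)                       # both near 2.0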
Example 2.13. If X ∼ N(µ, σ²), then E[X] = µ. This is easy to see if µ = 0 by the
symmetry of the density function. If µ ≠ 0, then X − µ ∼ N(0, σ²), so E[X − µ] = 0,
again showing E[X] = µ.
The expectations of the other basic discrete and continuous random variables
are listed in a table appearing in Section 2.3.4.
2.3.2
Elementary theory of expectations.
In probability, one repeatedly faces the following problem: given a random variable
X with known distribution and a function g, compute E[g(X)]. To compute E[g(X)]
directly from the definition requires first finding the probability mass function or
probability density of g(X), whichever is appropriate, and then applying (2.52) or
(2.53). But there is an easier way, sometimes called the law of the unconscious
statistician, presumably because it allows one to compute without thinking too
hard.
Theorem 4 a) If X is discrete and E[g(X)] exists,

   E[g(X)] = Σ_{s∈S} g(s) pX(s).        (2.56)

b) If X is continuous and E[g(X)] exists,

   E[g(X)] = ∫_{−∞}^{∞} g(x) fX(x) dx.        (2.57)
c) More generally,

   E[h(X1, . . . , Xn)] = Σ_{s1∈S} · · · Σ_{sn∈S} h(s1, . . . , sn) pZ(s1, . . . , sn),        (2.58)

if pZ is the joint probability mass function of (X1, . . . , Xn), and

   E[h(X1, . . . , Xn)] = ∫_{−∞}^{∞} · · · ∫_{−∞}^{∞} h(x1, . . . , xn) f(x1, . . . , xn) dx1 . . . dxn,        (2.59)

if f is the joint density function of X1, . . . , Xn.
The law of the unconscious statistician has several important consequences that
are used repeatedly.
Theorem 5 (Linearity of expectation.) Assuming all expectations are defined,

   E[c1 X1 + · · · + cn Xn] = c1 E[X1] + · · · + cn E[Xn].        (2.60)
Linearity of expectations is extremely useful. When the probability mass function or the density of a random variable is complicated or difficult to compute exactly, it is hard to find its expectation directly from formula (2.52) or (2.53). But
if a random variable X can be represented as the sum of much simpler random
variables, then linearity can be used to calculate its expectation easily, without finding
the probability mass function or density of X.
Example 2.14. The mean of a binomial r.v. Let X be binomial with parameters n
and p. According to the definition in (2.52), E[X] = Σ_{k=0}^{n} k (n choose k) p^k (1 − p)^{n−k}, which
looks a bit complicated. However, we know that if Y1, . . . , Yn are i.i.d. Bernoulli
with parameter p, then Y1 + · · · + Yn is binomial with parameters n and p. Since
E[Yi] = p for each i,

   E[X] = E[Y1] + · · · + E[Yn] = np.
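The representation of a binomial variable as a sum of indicators is also convenient computationally, as in the following sketch (assuming numpy; the values of n, p, and the number of trials are illustrative).

    import numpy as np

    rng = np.random.default_rng(2)
    n, p, trials = 20, 0.3, 10**5
    bernoullis = rng.random((trials, n)) < p   # each entry is 1 with probability p
    binomials = bernoullis.sum(axis=1)         # each row sum is a Binomial(n, p) draw
    print(binomials.mean(), n * p)             # sample mean near np = 6.0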
The linearity property has an important extension to integrals of random variables, which will be applied in Chapter 5. Suppose that for each point r in an
interval [a, b], Z(r) is a random variable. The integral Z = ∫_a^b Z(r) dr defines a
new random variable. Integrals are limits of sums, and hence the linearity property of Theorem 5 extends to Z. A rigorous formulation and derivation of this fact
requires mathematical tools beyond the scope of this course. However, the informal derivation is easy to understand. Recall that the definite integral is a limit of
Riemann sums,

   ∫_a^b Z(r) dr = lim_{n→∞} Σ_{i=1}^{n} Z(r_i^n) Δ_n r,

where, for each n, a = r_0^n < r_1^n < · · · < r_n^n = b is a partition of [a, b] into n equal
subintervals of length Δ_n r. By the linearity of expectations,

   E[ Σ_{i=1}^{n} Z(r_i^n) Δ_n r ] = Σ_{i=1}^{n} E[Z(r_i^n)] Δ_n r,        (2.61)
for each Riemann sum. But the right-hand side is itself a Riemann sum approximation of ∫_a^b E[Z(r)] dr. By letting n → ∞ in (2.61) and taking some liberties with
respect to the interchange of limit and expectation,

   E[ ∫_a^b Z(r) dr ] = E[ lim_{n→∞} Σ_{i=1}^{n} Z(r_i^n) Δ_n r ]
                      = lim_{n→∞} Σ_{i=1}^{n} E[Z(r_i^n)] Δ_n r = ∫_a^b E[Z(r)] dr.        (2.62)
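Equation (2.62) can be explored by simulation under a concrete model. In the sketch below (assuming numpy), we take Z(r) = r plus independent standard normal noise at each grid point, an assumed model chosen for illustration, so that E[Z(r)] = r and the integral of the mean over [0, 1] is 1/2.

    import numpy as np

    rng = np.random.default_rng(3)
    m, trials = 200, 10**4
    r = (np.arange(m) + 0.5) / m                 # midpoint grid on [0, 1]
    dr = 1.0 / m
    z = r + rng.standard_normal((trials, m))     # one random path Z(r) per trial
    integrals = (z * dr).sum(axis=1)             # Riemann sum of each path
    print(integrals.mean())                      # near 1/2, the integral of E[Z(r)]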
The next result, which also follows from the law of the unconscious statistician,
is a generalization to expectations of the probability formula P(U1 ∩ · · · ∩ Un ) =
P(U1)P(U2) · · · P(Un) for independent events.
Theorem 6 (Products of independent random variables.) Let X1, . . . , Xn be independent random variables. Then

   E[g1(X1) g2(X2) · · · gn(Xn)] = E[g1(X1)] · · · E[gn(Xn)]        (2.63)

whenever E[gi(Xi)] is defined and finite for each i.
We show why this theorem is true in the case n = 2 and X1 and X2 are independent and discrete. In this case Theorem 2 says that the joint probability mass
function of (X1, X2) is pZ(s1, s2) = pX1(s1) pX2(s2). Thus, using formula (2.58),

   E[g1(X1) g2(X2)] = Σ_{s1∈S} Σ_{s2∈S} g1(s1) g2(s2) pX1(s1) pX2(s2)
                    = [ Σ_{s1} g1(s1) pX1(s1) ] [ Σ_{s2} g2(s2) pX2(s2) ]
                    = E[g1(X1)] E[g2(X2)].
s2
2.3.3
Variance and Covariance
The variance of a random variable X measures the average square distance of X
from its mean µX = E[X]:

   Var(X) = E[(X − µX)²].        (2.64)
The size of the variance indicates how closely the outcomes of repeated, independent
trials of X cluster around its expected value.
There are several basic identities to keep in mind when working with the variance.
First,

   Var(cX) = E[(cX − cµX)²] = c² E[(X − µX)²] = c² Var(X).        (2.65)

Second, using linearity of expectations,

   Var(X) = E[X²] − 2E[µX X] + E[µX²] = E[X²] − 2µX E[X] + µX²
          = E[X²] − µX².        (2.66)

The last two steps in the derivation use the fact that µX is a constant, so that
E[XµX] = µX E[X] = µX² and E[µX²] = µX².
Example 2.15. If X is Bernoulli with parameter p, Var(X) = E[X²] − µX² =
0²·pX(0) + 1²·pX(1) − p² = p − p² = p(1 − p).
Example 2.16. If Y ∼ N (µ, σ 2 ), then Var(Y ) = σ 2 . This can be derived using
moment generating functions—see Section 2.3.4.
The covariance of two random variables is:

   Cov(X, Y) = E[(X − µX)(Y − µY)].        (2.67)
Similarly to (2.66), one can show that Cov(X, Y ) = E[XY ] − µX µY .
The covariance measures correlation between X and Y. For example, if Y tends,
on average, to be greater than µY when X > µX, then Cov(X, Y) > 0, whereas if Y
tends to be less than µY when X > µX, Cov(X, Y) < 0. Officially, the correlation
between two random variables is defined as Cor(X, Y) = Cov(X, Y)/√(Var(X)Var(Y)).
Two random variables are uncorrelated if Cov(X, Y) = 0. It is a very important fact that

   if X and Y are independent then they are uncorrelated.        (2.68)
This is a consequence of the product formula of Theorem 6 for independent random
variables: if X and Y are independent then
   Cov(X, Y) = E[(X − µX)(Y − µY)] = E[X − µX] E[Y − µY] = 0,
because E[X − µX ] = µX − µX = 0.
There is also an important formula for the variance of a finite sum of random
variables. Let Y = Σ_{i=1}^{n} Xi. From linearity E[Y] = Σ_{i=1}^{n} µi, where µi is shorthand for
µXi. Then

   (Y − µY)² = ( Σ_{i=1}^{n} (Xi − µi) )²
             = Σ_{i=1}^{n} (Xi − µi)² + Σ_{1≤i,j≤n, i≠j} (Xi − µi)(Xj − µj).
So taking expectations on both sides, and using the linearity property of expectation
and the definitions of variance and covariance,

   Var( Σ_{i=1}^{n} Xi ) = Σ_{i=1}^{n} Var(Xi) + Σ_{1≤i,j≤n, i≠j} Cov(Xi, Xj).        (2.69)
If X1, . . . , Xn are all uncorrelated, which is true if they are independent, it follows
that

   Var( Σ_{i=1}^{n} Xi ) = Σ_{i=1}^{n} Var(Xi).        (2.70)
This is a fundamental formula.
Example 2.17. Variance of the binomial. Let X1, . . . , Xn be i.i.d. Bernoulli random
variables, each with probability p of equaling 1. Then we know that Σ_{i=1}^{n} Xi is
binomial with parameters n and p. We have also shown that Var(Xi) = p(1 − p) for
a Bernoulli random variable. Therefore the variance of a binomial random variable
with parameters n and p is
   Var( Σ_{i=1}^{n} Xi ) = Σ_{i=1}^{n} Var(Xi) = Σ_{i=1}^{n} p(1 − p) = np(1 − p).        (2.71)
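As with the mean, the variance formula is easy to corroborate by simulation (a sketch assuming numpy; the parameters are illustrative).

    import numpy as np

    rng = np.random.default_rng(4)
    n, p = 30, 0.4
    draws = rng.binomial(n, p, size=10**6)
    print(draws.var(), n * p * (1 - p))   # both near 7.2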
2.3.4
The Moment Generating Function.
Definition. The moment generating function of a random variable X is

   MX(t) = E[e^{tX}],

defined for all real numbers t such that the expectation is finite.
Notice that MX(0) is always defined, and MX(0) = E[e^0] = E[1] = 1. MX(t) is
called the moment generating function because its derivatives at t = 0 can be used
to compute the expectations E[X^n] for any positive integer power of X, and these
expectations are called the moments of X. In order to be able to do this it is only
necessary to assume that MX(t) is finite for all values t in some interval containing
0. Then, because d^n/dt^n (e^{tx}) = x^n e^{tx},

   (d^n/dt^n) MX(t) = (d^n/dt^n) E[e^{tX}] = E[(d^n/dt^n) e^{tX}] = E[X^n e^{tX}].

Setting t = 0 gives

   MX^(n)(0) = E[X^n e^0] = E[X^n],        (2.72)

where MX^(n)(t) denotes the derivative of order n of MX(t).
Example 2.18. Exponential random variables. The moment generating function of
an exponential random variable with parameter λ is

   E[e^{tX}] = ∫_0^∞ λ e^{tx} e^{−λx} dx = λ/(λ − t),   t < λ.

By repeated differentiation, (d^n/dt^n) [λ/(λ − t)] = λ n!/(λ − t)^{n+1}. Hence, the nth moment of the
exponential is E[X^n] = n!/λ^n.
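The moment formula can be verified by integrating x^n against the exponential density directly, as in this sketch (assuming scipy; λ = 2 is an illustrative choice).

    import math
    import numpy as np
    from scipy.integrate import quad

    lam = 2.0
    for n in range(1, 5):
        moment, _ = quad(lambda x: x**n * lam * np.exp(-lam * x), 0, np.inf)
        print(n, moment, math.factorial(n) / lam**n)   # equal columns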
Moment generating functions are particularly suited for studying sums of independent random variables because of Theorem 6. Assume X1, . . . , Xn are independent, and let Z = X1 + · · · + Xn. The identity e^{tZ} = e^{tX1+···+tXn} = e^{tX1} e^{tX2} · · · e^{tXn}
is elementary. Now apply the product formula of Theorem 6:

   MZ(t) = E[e^{tX1} e^{tX2} · · · e^{tXn}] = E[e^{tX1}] E[e^{tX2}] · · · E[e^{tXn}] = MX1(t) · · · MXn(t).        (2.73)
Thus, the moment generating function of a sum of independent random variables
is the product of the moment generating functions of the summands. In particular,
suppose the random variables X1 , . . . , Xn are i.i.d. Then they all have the same
moment generating function MX (t), and so
   MZ(t) = E[e^{tX1}] E[e^{tX2}] · · · E[e^{tXn}] = (MX(t))^n.        (2.74)
Example 2.19. Bernoulli and binomial random variables. The moment generating
function of a Bernoulli random variable X with probability p of success is

   M(t) = e^{t·0}(1 − p) + e^t p = 1 − p + pe^t.

Let Y = X1 + · · · + Xn, where X1, . . . , Xn are i.i.d. Bernoulli random variables with
probability p of success. Then Y is a binomial random variable with parameters p
and n. Using (2.74) and the m.g.f. of the Bernoulli r.v., the m.g.f. of Y is

   MY(t) = (1 − p + pe^t)^n.        (2.75)
Using MY(t) and formula (2.72) for computing moments, it is not hard to recover
the formulas we have already derived for the mean and variance of the binomial
random variable:

   E[Y] = M′(0) = n(1 − p + pe^t)^{n−1} pe^t |_{t=0} = np,        (2.76)

and

   Var(Y) = E[Y²] − µY² = M″(0) − (np)²
          = [ n(n − 1)(1 − p + pe^t)^{n−2} (pe^t)² + n(1 − p + pe^t)^{n−1} pe^t ] |_{t=0} − (np)²
          = np(1 − p).        (2.77)
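A symbolic computation reproduces these calculations without the by-hand differentiation; the sketch below (assuming sympy; n = 10 and p = 3/10 are illustrative) differentiates the m.g.f. at t = 0.

    import sympy as sp

    t = sp.symbols('t')
    n, p = 10, sp.Rational(3, 10)
    M = (1 - p + p * sp.exp(t))**n         # binomial m.g.f.
    m1 = sp.diff(M, t).subs(t, 0)          # E[Y]
    m2 = sp.diff(M, t, 2).subs(t, 0)       # E[Y^2]
    print(sp.simplify(m1 - n * p))                     # 0, confirming (2.76)
    print(sp.simplify(m2 - m1**2 - n * p * (1 - p)))   # 0, confirming (2.77)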
Moment generating functions have another very important property: they characterize the cumulative probability distribution functions of random variables.
Theorem 7 Let X and Y be random variables and assume that there is an interval
(a, b), where a < b, such that MX (t) and MY (t) are finite and equal for all t in (a, b).
Then FX (x) = FY (x) for all x, where FX and FY are the respective cumulative
distribution functions of X and Y . In particular, if X and Y are discrete, they have
the same probability mass function, and if they are continuous, they have the same
probability density function.
Example 2.20. Sums of independent normal r.v.'s. The moment generating function
of a normal random variable with mean µ and variance σ² is M(t) = e^{µt+σ²t²/2}. We
will not demonstrate this here, only apply it as follows. Let X1 be normal with mean
µ1 and variance σ1² and let X2 be normal with mean µ2 and variance σ2². Suppose
in addition that they are independent. Then, according to Theorem 6, the moment
generating function of X1 + X2 is

   MX1+X2(t) = MX1(t) MX2(t) = e^{µ1 t + σ1² t²/2} e^{µ2 t + σ2² t²/2} = e^{(µ1+µ2)t + (σ1²+σ2²)t²/2}.

However, the last expression is the moment generating function of a normal random
variable with mean µ1 + µ2 and variance σ1² + σ2². Thus, by Theorem 7, X1 + X2
must be normal with this mean and variance.
The last example illustrates a special case of a very important theorem.
Theorem 8 If X1, . . . , Xn are independent normal random variables, then Σ_{i=1}^{n} Xi is
normal with mean Σ_{i=1}^{n} µXi and variance Σ_{i=1}^{n} Var(Xi).
The following table summarizes the means, variances, and moment generating
functions of the basic random variables.
Table of means, variances, and moment generating functions.

   Distribution        Mean         Variance        M.g.f.
   Bernoulli 0-1(p)    p            p(1 − p)        1 − p + pe^t
   Binomial(n, p)      np           np(1 − p)       (1 − p + pe^t)^n
   Poisson(λ)          λ            λ               e^{−λ+λe^t}
   Geometric(p)        1/p          (1 − p)/p²      pe^t/(1 − (1 − p)e^t)
   Uniform(α, β)       (α + β)/2    (β − α)²/12     (e^{tβ} − e^{tα})/(t(β − α))
   Exponential(λ)      1/λ          1/λ²            λ/(λ − t)
   Normal(µ, σ²)       µ            σ²              e^{µt+σ²t²/2}
   Gamma(λ, r)         r/λ          r/λ²            λ^r/(λ − t)^r

2.3.5
Chebyshev’s inequality and the Law of Large Numbers
Expectations and variances can be used to bound probabilities. The most basic
bound is called Markov's inequality, which states that if X is a non-negative
random variable and a > 0, then

   P(X ≥ a) ≤ E[X]/a.        (2.78)
To show why this is true, we use two simple facts. First, if Y and Z are two random
variables such that Y ≥ Z with probability one, then E[Z] ≤ E[Y]. The second fact
is stated in Section 2.3.1, equation (2.55): P(U) = E[IU], where IU is the indicator
random variable for event U. Let U be the event {X ≥ a}. Then IU = 1 if U
occurs and 0 otherwise. We claim that IU ≤ a^{−1}X for all outcomes. Indeed, if
U occurs, then X/a ≥ 1, while IU = 1; if U does not occur, IU = 0, while X/a ≥ 0
because X is assumed to be always non-negative. It follows then that

   E[IU] ≤ E[X/a].
The left-hand side is just P(U ) = P(X ≥ a) and the right-hand side is E[X]/a, and
so we obtain Markov’s inequality.
Chebyshev’s inequality is a consequence of Markov’s inequality. Let Y be a
random variable with finite mean µY and variance σ 2 = Var(X). Let X = (Y −µY )2 .
By applying Markov’s inequality to X,
E[X]
2
P (|Y − µY | ≥ a) = P |Y − µY | ≥ a2 = P(X ≥ a2 ) ≤
.
2
a
However, E[X] = E[(Y − µY)²] = Var(Y). Thus we get Chebyshev's inequality,

   P(|Y − µY| ≥ a) ≤ Var(Y)/a².        (2.79)
(2.79)
This very useful bound shows how the variance of Y controls how close Y
tends to be to its mean; in particular, if a is fixed, the smaller Var(Y) is, the smaller
is the probability that Y differs by more than a from µY, while if Var(Y) is fixed, the
probability that |Y − µY| is larger than a decays at least as fast as 1/a² as a → ∞.
We have stated the all-important Law of Large Numbers in several contexts. Using Chebyshev's inequality, we are now able to quantify how well the
empirical mean approximates the expected value. Let X1, X2, . . . be uncorrelated
random variables all having mean µ and variance σ². Let

   X̂(n) = (1/n) Σ_{i=1}^{n} Xi
denote the empirical mean of X1 , . . . , Xn . The first step is to compute the mean
and variance of X̂(n). For the mean, we use linearity of the expectation:

   E[X̂(n)] = (1/n) Σ_{i=1}^{n} E[Xi] = (1/n) Σ_{i=1}^{n} µ = µ.
For the variance, we use (2.65) and (2.70):

   Var(X̂(n)) = (1/n²) Var( Σ_{i=1}^{n} Xi ) = (1/n²) Σ_{i=1}^{n} Var(Xi) = nσ²/n² = σ²/n.
Now apply Chebyshev’s inequality:
Var(X (n) )
σ2
P |X (n) − µ| > a ≤
=
.
2
a
n
(2.80)
This is a fundamental inequality. It implies that

   lim_{n→∞} P(|X̂(n) − µ| > a) = 0,        (2.81)

a result called the weak law of large numbers (it is subtly different from the law of
large numbers stated previously, which says that the empirical mean tends to µ with
probability one).
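The decay promised by (2.80) shows up clearly in simulation. The sketch below (assuming numpy) uses exponential(1) variables, so µ = 1 and σ² = 1, with a fixed, illustrative a = 0.1.

    import numpy as np

    rng = np.random.default_rng(7)
    a, trials = 0.1, 10**4
    for n in (100, 400, 1600):
        means = rng.exponential(1.0, size=(trials, n)).mean(axis=1)
        tail = (np.abs(means - 1.0) > a).mean()   # empirical P(|mean - mu| > a)
        print(n, tail, 1.0 / (n * a**2))          # tail stays below the Chebyshev bound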
2.3.6
Conditional Distributions and Conditional Expectations
Let X and Y be two discrete random variables. The conditional probability
mass function of X given Y = y is the function

   pX|Y(x|y) = P(X = x | Y = y),

where x ranges over the possible values of X.
The conditional expectation of X given Y = y is

   E[X | Y = y] = Σ_x x pX|Y(x|y).
The concepts are generalized to continuous random variables by replacing probability mass functions by probability densities. If X and Y are jointly continuous
random variables with joint density f(X,Y), then the conditional density of X
given Y = y is

   fX|Y(x|y) = f(X,Y)(x, y)/fY(y)   if fY(y) > 0;   fX|Y(x|y) = 0   if fY(y) = 0;

here fY(y) is the density of Y. The conditional expectation of X given Y = y is

   E[X | Y = y] = ∫ x fX|Y(x|y) dx.
The law of the unconscious statistician—see Theorem 4—holds for conditional
expectations. In the discrete and continuous cases, respectively,

   E[g(X) | Y = y] = Σ_x g(x) pX|Y(x|y),   E[g(X) | Y = y] = ∫ g(x) fX|Y(x|y) dx.
The rule of total probabilities generalizes and provides a very useful tool for
computing expectations.
Theorem 9 For discrete and continuous random variables, respectively,

   E[g(X)] = Σ_y E[g(X) | Y = y] pY(y)   and   E[g(X)] = ∫ E[g(X) | Y = y] fY(y) dy.        (2.82)
This result is particularly useful if a problem is defined directly in terms of conditional distributions.
Example 2.21. Assume that X and Y are such that Y is exponential with parameter
λ and for every y > 0, the conditional distribution of X given Y = y is that of a
random variable uniformly distributed on (0, y). This is another way of saying that
fX|Y(x|y) = 1/y if 0 < x < y, and is 0 otherwise. Find E[X].
The mean of a random variable uniformly distributed on (0, y) is y/2. Hence,
we find easily that E[X | Y = y] = y/2. Thus, using (2.82),

   E[X] = ∫_0^∞ E[X | Y = y] λe^{−λy} dy = ∫_0^∞ (y/2) λe^{−λy} dy = 1/(2λ).

Notice that the last integral is just one-half the expected value of Y.
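The two-stage description of (X, Y) translates directly into a simulation, which confirms the answer (a sketch assuming numpy; λ = 2 is an illustrative choice).

    import numpy as np

    rng = np.random.default_rng(8)
    lam = 2.0
    y = rng.exponential(scale=1.0 / lam, size=10**6)   # Y exponential with parameter lam
    x = rng.uniform(0.0, 1.0, size=10**6) * y          # given Y = y, X is uniform on (0, y)
    print(x.mean(), 1.0 / (2.0 * lam))                 # both near 0.25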
2.4
The Central Limit Theorem
The Central Limit Theorem explains the importance of the normal distribution. Let
X1, X2, . . . be i.i.d. random variables with mean µ and variance σ². Our goal is to
understand the probability distribution of the sum Σ_{i=1}^{n} Xi for large n. To do this we
will scale the sum by additive and multiplicative factors to create a random variable
with mean 0 and variance 1. We know that the sum Σ_{i=1}^{n} Xi has mean nµ, and so
Σ_{i=1}^{n} Xi − nµ has a mean of zero. We also know that the variance of Σ_{i=1}^{n} Xi is nσ².
Therefore, if we define

   Z(n) = ( Σ_{i=1}^{n} Xi − nµ ) / (σ√n),
we see that E[Z(n)] = 0 and

   Var(Z(n)) = E[ ( ( Σ_{i=1}^{n} Xi − nµ ) / (σ√n) )² ] = (1/(nσ²)) E[ ( Σ_{i=1}^{n} Xi − nµ )² ]
             = (1/(nσ²)) Var( Σ_{i=1}^{n} Xi ) = (1/(nσ²)) nσ² = 1.
The Central Limit Theorem states that Z (n) looks more and more like a standard
normal r.v. as n → ∞.
Theorem 10 The Central Limit Theorem. Suppose X1, X2, . . . are independent, identically distributed random variables with common mean µ and variance
σ². Then

   lim_{n→∞} P( a < ( Σ_{i=1}^{n} Xi − nµ ) / (σ√n) < b ) = ∫_a^b e^{−x²/2} dx/√(2π).        (2.83)
This is an amazing theorem because the limit does not depend on the common
distribution of the random variables in the sequence X1 , X2 , . . ..
Historically, the Central Limit Theorem was first proved for binomial random
variables. For each n, let Yn be binomial with parameters n and p. Let X1, X2, . . .
be i.i.d. Bernoulli random variables with parameter p. Then we know that for each
n, the sum Σ_{i=1}^{n} Xi is binomial with parameters n and p. Thus,

   P( a < (Yn − np)/√(np(1−p)) < b ) = P( a < ( Σ_{i=1}^{n} Xi − np )/√(np(1−p)) < b ).
By applying the Central Limit Theorem to the right-hand side, we obtain the following result.
Theorem 11 (DeMoivre-Laplace CLT) Suppose for each integer n that Yn is a
binomial random variable with parameters n and p, p being fixed. Then

   lim_{n→∞} P( a < (Yn − np)/√(np(1−p)) < b ) = ∫_a^b e^{−x²/2} dx/√(2π).        (2.84)
Paraphrasing Theorem 11, (Yn − np)/√(np(1−p)) is approximately standard normal for large n. The question is how large n should be for the approximation to
be accurate. The general rule of thumb is that the approximation is accurate if
np(1−p) ≥ 10.
Example 2.22. Let X be binomial with p = .5 and n = 50. What is P(22 ≤ X ≤ 28)?
Since X is discrete, and the central limit theorem approximates it by a continuous
random variable, the approximation will be more accurate if we use the following
continuity correction of the limits:
P(22 ≤ X ≤ 28) = P(21.5 < X < 28.5).
From Theorem 11 with n = 50 and p = .5, we have that (X − 25)/√12.5 is
approximately standard normal. Thus

   P(21.5 < X < 28.5) = P( (21.5 − 25)/√12.5 < (X − 25)/√12.5 < (28.5 − 25)/√12.5 )
                      ≈ Φ(.99) − Φ(−.99).

Using tables, this turns out to be 0.6778.
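With scipy available, both the exact binomial probability and the normal approximation can be computed directly, as in the following sketch; the two values should be close, illustrating the rule of thumb.

    from scipy.stats import binom, norm

    n, p = 50, 0.5
    exact = binom.cdf(28, n, p) - binom.cdf(21, n, p)   # P(22 <= X <= 28) exactly
    approx = norm.cdf(0.99) - norm.cdf(-0.99)           # continuity-corrected approximation
    print(exact, approx)                                # both near 0.68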
2.5
Notes
The material in this chapter is standard and may be found, mostly, in any undergraduate probability text. A standard and very good text is that of Sheldon M.
Ross, A First Course in Probability, Prentice-Hall. There are many editions of this
text; any of them is a good reference.