4
RANDOM VARIABLES
Basic Concepts
Definition 4.1. A random variable is a function X(·) which assigns to each
element ω of Ω a real value X(ω).
Notation:
P({ω ∈ Ω : X(ω) = a}) ≡ P({X(ω) = a}) ≡ P(X = a)    (1)
Cumulative Distribution Function
Definition 4.2. For any random variable X , we define the cumulative distribution function (CDF), FX (a), as
FX (a) ≡ P (X ≤ a).
Properties of any cumulative distribution function
1. lim_{a→∞} FX(a) = 1
2. lim_{a→−∞} FX(a) = 0
3. FX(a) is a nondecreasing function of a.
4. FX(x) is a right-continuous function of x. In other words, lim_{x↓a} FX(x) = FX(a).
Theorem 4.1. For any random variable X and real values a < b
P (a < X ≤ b) = FX (b) − FX (a)
[Step-function plot of F(x) versus x, for x from 0 to 5.]
Figure 4.1: An example of a CDF for a discrete random variable
Discrete Random Variables
Definition 4.3. The probability mass function (pmf) of a discrete random
variable X is given by
pX (xi ) ≡ P (X = xi ).
Properties of any probability mass function
1. pX(x) ≥ 0 for every x
2. ∑_{all xi} pX(xi) = 1
3. P(E) = ∑_{xi ∈ E} pX(xi)
Examples of discrete distributions
The binomial distribution
Consider a sequence of n independent, repeated trials, with each trial having two
possible outcomes, success or failure. Let p be the probability of a success for any
single trial. Let X denote the number of successes on n trials. The random variable
X is said to have a binomial distribution and has probability mass function
pX(k) = (n choose k) p^k (1 − p)^(n−k)    (2)
for k = 0, 1, . . . , n.
Let’s check to make sure that if X has a binomial distribution, then ∑_{k=0}^{n} pX(k) = 1. We will need the binomial expansion for any polynomial:
(a + b)^n = ∑_{k=0}^{n} (n choose k) a^k b^(n−k)
So
∑_{k=0}^{n} (n choose k) p^k (1 − p)^(n−k) = (p + (1 − p))^n = (1)^n = 1
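The algebraic check above can also be confirmed numerically. A minimal Python sketch (the values n = 10 and p = 0.3 are arbitrary illustrative choices):

```python
from math import comb

def binomial_pmf(k, n, p):
    """P(X = k) for a binomial random variable with parameters n and p."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

# The pmf values over k = 0, ..., n should sum to 1.
n, p = 10, 0.3
total = sum(binomial_pmf(k, n, p) for k in range(n + 1))
print(total)  # 1, up to floating-point error
```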
The geometric distribution
Recall the “success/failure” model that was used to derive the binomial distribution,
but, this time, we want to find the probability that the first success occurs on the
kth trial. Let p be the probability of a success for any single trial and let Y denote
the number of trials needed to obtain the first success.
Then the probability mass function for Y is given by
pY(k) = (1 − p)^(k−1) p
for k = 1, 2, . . ..
and the random variable Y is said to have a geometric distribution with parameter
p.
Some facts about geometric series When working with the geometric
distribution, the following results are often helpful:
Theorem 4.2. For any 0 < θ < 1,
∑_{j=0}^{∞} θ^j = 1/(1 − θ)
Proof.
∑_{j=0}^{∞} θ^j − ∑_{j=1}^{∞} θ^j = θ^0 = 1
∑_{j=0}^{∞} θ^j − ∑_{j=0}^{∞} θ^(j+1) = 1
∑_{j=0}^{∞} θ^j − θ ∑_{j=0}^{∞} θ^j = 1
(1 − θ) ∑_{j=0}^{∞} θ^j = 1
∑_{j=0}^{∞} θ^j = 1/(1 − θ)
Theorem 4.3. For any 0 < θ < 1,
∑_{j=0}^{∞} j θ^j = θ/(1 − θ)^2
Proof. Using Theorem 4.2,
∑_{j=0}^{∞} θ^j = 1/(1 − θ)
(d/dθ) ∑_{j=0}^{∞} θ^j = (d/dθ) [1/(1 − θ)]
∑_{j=0}^{∞} (d/dθ) θ^j = 1/(1 − θ)^2
∑_{j=1}^{∞} j θ^(j−1) = 1/(1 − θ)^2
θ^(−1) ∑_{j=1}^{∞} j θ^j = 1/(1 − θ)^2
∑_{j=1}^{∞} j θ^j = θ/(1 − θ)^2
∑_{j=0}^{∞} j θ^j = θ/(1 − θ)^2
(The last step holds because the j = 0 term is zero.)
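Both series identities can be checked numerically by truncating the infinite sums. A quick Python sketch (θ = 0.4 and the truncation point N = 200 are arbitrary choices; the neglected tail is negligible for θ well below 1):

```python
# Truncated partial sums for Theorems 4.2 and 4.3.
theta = 0.4
N = 200  # truncation point; terms beyond this are negligibly small

s0 = sum(theta**j for j in range(N))      # should approach 1/(1 - theta)
s1 = sum(j * theta**j for j in range(N))  # should approach theta/(1 - theta)^2

print(s0, 1 / (1 - theta))
print(s1, theta / (1 - theta)**2)
```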
Example: Using Theorem 4.2, we can show that the probability mass function for
a geometric random variable must sum to one. Suppose
pY(k) = (1 − p)^(k−1) p
for k = 1, 2, . . .,
then
∑_{k=1}^{∞} pY(k) = ∑_{k=1}^{∞} (1 − p)^(k−1) p
= ∑_{j=0}^{∞} (1 − p)^j p
= p · 1/(1 − (1 − p)) = 1
The Poisson distribution
The random variable X is said to have a Poisson distribution with parameter α if
X has a probability mass function given by
pX(k) = e^(−α) α^k / k!    (3)
for k = 0, 1, 2, . . .
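As with the binomial and geometric cases, the Poisson pmf can be checked numerically by truncating the sum over k. A minimal sketch in Python (α = 2.5 and the cutoff of 100 terms are arbitrary illustrative choices):

```python
from math import exp, factorial

# The Poisson pmf values should sum to 1; terms beyond k = 100 are negligible.
alpha = 2.5
total = sum(exp(-alpha) * alpha**k / factorial(k) for k in range(100))
print(total)  # 1, up to floating-point error
```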
Continuous Random Variables
Definition 4.4. The probability density function (pdf) of a continuous random
variable X is given by
fX(a) ≡ (d/da) FX(a).
[Plot of a continuous, increasing F(x) versus x, for x from 0 to 5.]
Figure 4.2: An example of a CDF for a continuous random variable
Properties of any probability density function
1. fX(x) ≥ 0 for every x
2. ∫_{−∞}^{∞} fX(x) dx = 1
3. P(E) = ∫_E fX(x) dx
Theorem 4.4. If X is a continuous random variable then
P (X < a) = P (X ≤ a).
Theorem 4.5. If X is a continuous random variable, then
FX(a) = ∫_{−∞}^{a} fX(x) dx.    (4)
Differential mass interpretation of the pdf
P(X ∈ [x, x + dx)) = fX(x) dx    (5)
[Sketch: the shaded area under f(x) between x and x + dx equals f(x) dx.]
Figure 4.3: The differential mass interpretation of fX (x) dx
Examples of continuous distributions
The uniform distribution
The random variable X is said to have a uniform distribution if it has the probability density function given by
fX(x) = 1/(b − a) if a ≤ x ≤ b, and 0 otherwise.    (6)
The exponential distribution
The random variable X is said to have an exponential distribution with parameter
α if it has the probability density function given by
fX(x) = αe^(−αx) if x ≥ 0, and 0 otherwise.    (7)
Expectation
Definition 4.5. The expected value of a random variable X is given by
E(X) = ∑_{all x} x pX(x)
for discrete X, and
E(X) = ∫_{−∞}^{+∞} x fX(x) dx
for continuous X.
Note:
• E(X) is a number.
• The expected value of X is often called the “mean of X .”
• The value of E(X) can be interpreted as the center of mass or center of
gravity of the probability mass distribution for X .
Example: Roll a fair die once. Let X equal the number of pips. Then
E(X) = ∑_{all x} x pX(x)
= (1)pX(1) + (2)pX(2) + (3)pX(3) + (4)pX(4) + (5)pX(5) + (6)pX(6)
= (1)(1/6) + (2)(1/6) + (3)(1/6) + (4)(1/6) + (5)(1/6) + (6)(1/6) = 3.5
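The die calculation is easy to reproduce exactly in Python using rational arithmetic:

```python
from fractions import Fraction

# E(X) for one roll of a fair die: each pip count 1..6 has probability 1/6.
pmf = {x: Fraction(1, 6) for x in range(1, 7)}
mean = sum(x * p for x, p in pmf.items())
print(mean)  # 7/2
```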
Example: Let X be a continuous random variable with probability density function
fX(x) = 2x if 0 ≤ x ≤ 1, and 0 otherwise.
Then
E(X) = ∫_{−∞}^{+∞} x fX(x) dx
= ∫_{−∞}^{0} x · 0 dx + ∫_{0}^{1} x · 2x dx + ∫_{1}^{+∞} x · 0 dx
= ∫_{0}^{1} 2x^2 dx = [2x^3/3]_{0}^{1} = 2/3 − 0 = 2/3
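The integral above can be approximated numerically with a simple midpoint Riemann sum (the step count is an arbitrary choice; this is a check, not a general-purpose integrator):

```python
# Approximate E(X) = ∫_0^1 x · (2x) dx with a midpoint Riemann sum.
n = 100_000
h = 1.0 / n
mean = sum(2 * ((i + 0.5) * h)**2 * h for i in range(n))
print(mean)  # close to 2/3
```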
Finding E(X) for some specific distributions
Binomial distribution
Let X be a binomial random variable with parameters n and p. We have
pX(k) = (n choose k) p^k (1 − p)^(n−k)
for k = 0, 1, . . . , n.
We can find E(X) as follows:
E(X) = ∑_{k=0}^{n} k (n choose k) p^k (1 − p)^(n−k)
= ∑_{k=1}^{n} [n!/((k − 1)!(n − k)!)] p^k (1 − p)^(n−k)
Let s = k − 1:
E(X) = n ∑_{s=0}^{n−1} (n−1 choose s) p^(s+1) (1 − p)^(n−s−1)
= np ∑_{s=0}^{n−1} (n−1 choose s) p^s (1 − p)^(n−s−1)
= np
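The result E(X) = np can be checked by direct summation over the pmf. A quick Python sketch (n = 12 and p = 0.25 are arbitrary illustrative parameters):

```python
from math import comb

# Direct computation of E(X) = sum k * P(X = k) for a binomial pmf.
n, p = 12, 0.25
mean = sum(k * comb(n, k) * p**k * (1 - p)**(n - k) for k in range(n + 1))
print(mean, n * p)  # equal, up to floating-point error
```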
Poisson distribution
Let X be a Poisson random variable with parameter α. We have
pX(k) = e^(−α) α^k / k!
for k = 0, 1, 2, . . .
The expected value for X can be found as follows:
E(X) = ∑_{k=0}^{∞} k e^(−α) α^k / k!
= ∑_{k=1}^{∞} e^(−α) α^k / (k − 1)!
Let s = k − 1:
E(X) = ∑_{s=0}^{∞} e^(−α) α^(s+1) / s!
= α ∑_{s=0}^{∞} e^(−α) α^s / s!
= α
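E(X) = α can likewise be checked by truncated summation (α = 1.7 and the 100-term cutoff are arbitrary choices; the neglected tail is negligible):

```python
from math import exp, factorial

# Direct computation of E(X) = sum k * P(X = k) for a Poisson pmf.
alpha = 1.7
mean = sum(k * exp(-alpha) * alpha**k / factorial(k) for k in range(100))
print(mean)  # approximately alpha
```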
Geometric distribution
Let X be a geometric random variable with parameter p. We have
pX(k) = (1 − p)^(k−1) p
for k = 1, 2, . . ..
To find E(X) let q = (1 − p) and then,
E(X) = ∑_{k=1}^{∞} k q^(k−1) p
= p ∑_{k=1}^{∞} k q^(k−1)
= p · 1/(1 − q)^2
= 1/p
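A truncated sum confirms E(X) = 1/p numerically (p = 0.2 and the cutoff of 2000 terms are arbitrary choices; the geometric tail beyond that is negligible):

```python
# Direct computation of E(X) = sum k * P(X = k) for a geometric pmf.
p = 0.2
mean = sum(k * (1 - p)**(k - 1) * p for k in range(1, 2000))
print(mean, 1 / p)  # both approximately 5
```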
Exponential distribution
Let X be an exponential random variable with parameter α. We have
fX(x) = αe^(−αx) if x ≥ 0, and 0 otherwise.
We can get E(X) as follows:
E(X) = ∫_{0}^{∞} x αe^(−αx) dx
Integrate this by parts, letting u = x and dv = αe^(−αx) dx. Hence du = dx and v = −e^(−αx). This produces
E(X) = [−xe^(−αx)]_{0}^{∞} + ∫_{0}^{∞} e^(−αx) dx = 0 + 1/α = 1/α
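The integration-by-parts result can be checked with a truncated midpoint sum (α = 2, the step size, and the truncation at x = 20 are arbitrary choices; the integrand is negligible beyond the cutoff):

```python
from math import exp

# Approximate E(X) = ∫_0^∞ x * alpha * e^(-alpha*x) dx numerically.
alpha = 2.0
h = 1e-3
upper = 20.0  # truncation point for the improper integral
n = int(upper / h)
mean = sum(((i + 0.5) * h) * alpha * exp(-alpha * (i + 0.5) * h) * h
           for i in range(n))
print(mean, 1 / alpha)  # both approximately 0.5
```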
Other measures of central tendency
Definition 4.6. The median of a random variable X, denoted by m_{1/2}(X), is given by
P(X ≤ m_{1/2}(X)) = 1/2.
Definition 4.7. The mode of a random variable X, denoted by Mode(X), is given by
pX(Mode(X)) = max_x pX(x) if X is discrete
fX(Mode(X)) = sup_x fX(x) if X is continuous
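Both definitions are easy to illustrate on a small discrete pmf. In the sketch below the pmf itself is an invented example, and for a discrete X the median is computed by the common convention of taking the smallest m with P(X ≤ m) ≥ 1/2, since an m with P(X ≤ m) exactly 1/2 need not exist:

```python
# Mode and median of a small (invented) discrete pmf.
pmf = {0: 0.2, 1: 0.5, 2: 0.3}

# Mode: the x that maximizes pX(x).
mode = max(pmf, key=pmf.get)

# Median: smallest m with cumulative probability P(X <= m) >= 1/2.
cum, median = 0.0, None
for x in sorted(pmf):
    cum += pmf[x]
    if cum >= 0.5:
        median = x
        break

print(mode, median)  # 1 1
```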
Self-Test Exercises for Chapter 4
For each of the following multiple-choice questions, choose the best response
among those provided. Answers can be found in Appendix B.
S4.1 If X is a discrete random variable with
P (X = −1) = P (X = 0) = P (X = 1) = 1/3
Then the expected value of X , E(X), equals
(A) 0
(B) 1/3
(C) 1/2
(D) any of the values in the set {−1, 0, 1}
(E) none of the above are true.
S4.2 Suppose X is a continuous random variable with cumulative distribution
function
FX(x) = 0 if x < 0, x^2 if 0 ≤ x ≤ 1, and 1 if x > 1.
Then P (X < 0.50) equals
(A) 0
(B) 0.25
(C) 0.50
(D) √0.50
(E) none of the above.