5.2 Continuous random variables

It is often convenient to think of a random variable as having a whole (continuous) interval
for its set of possible values.
The devices used to describe continuous probability distributions differ from those that
describe discrete probability distributions.
Examples of continuous random variables:
5.2.1
Probability density functions and cumulative distribution functions
A probability density function (pdf) is the continuous analogue of a discrete random variable’s
probability mass function (pmf).
Definition 5.12. A probability density function (pdf) for a continuous random variable X is
a nonnegative function f (x) with
∫_{-∞}^{∞} f(x) dx = 1

and such that for all a ≤ b,

P[a ≤ X ≤ b] = ∫_a^b f(x) dx.
1.
2.
3.
[Figure: a probability density function f(x), plotted for x from 0 to 20.]
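The two defining conditions can be checked numerically for any candidate density. A minimal sketch, using a hypothetical density f(x) = 0.5x on [0, 2] chosen only for illustration:

```python
def f(x):
    # hypothetical pdf for illustration: f(x) = 0.5x on [0, 2]
    return 0.5 * x if 0 <= x <= 2 else 0.0

def integrate(g, a, b, n=100_000):
    # midpoint-rule approximation of the integral of g over [a, b]
    h = (b - a) / n
    return sum(g(a + (i + 0.5) * h) for i in range(n)) * h

total = integrate(f, 0, 2)      # should be ~1: a pdf integrates to 1
p = integrate(f, 0.5, 1.0)      # P[0.5 <= X <= 1] = 0.1875 exactly
print(total, p)
```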
Example 5.17 (Compass needle). Consider a de-magnetized compass needle mounted at its
center so that it can spin freely. It is spun clockwise and when it comes to rest the angle, θ,
from the vertical, is measured. Let
Y = the angle measured after each spin in radians
What values can Y take?
What form makes sense for f (y)?
If this form is adopted, what must the pdf be?
Using this pdf, calculate the following probabilities:
1. P[Y < π/2]

2. P[π/2 < Y < 2π]

3. P[π/6 < Y < π/4]

4. P[Y = π/6]
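Once the uniform form f(y) = 1/(2π) on [0, 2π] is adopted (as in the recap that follows), each probability is just the interval length divided by 2π. A quick Python check:

```python
import math

def prob(a, b):
    # P[a <= Y <= b] for Y ~ Uniform(0, 2*pi): clip [a, b] to the
    # support, then divide the interval length by 2*pi
    lo, hi = max(a, 0.0), min(b, 2 * math.pi)
    return max(hi - lo, 0.0) / (2 * math.pi)

p1 = prob(0, math.pi / 2)            # 1/4
p2 = prob(math.pi / 2, 2 * math.pi)  # 3/4
p3 = prob(math.pi / 6, math.pi / 4)  # 1/24
p4 = prob(math.pi / 6, math.pi / 6)  # 0: a single point has probability 0
print(p1, p2, p3, p4)
```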
Definition 5.13. The cumulative distribution function (cdf) of a continuous random variable
X is a function F such that
F(x) = P[X ≤ x] = ∫_{-∞}^x f(t) dt
F (x) is obtained from f (x) by integration, and applying the fundamental theorem of calculus
yields
(d/dx) F(x) = f(x).
That is, f (x) is obtained from F (x) by differentiation.
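This differentiation relationship can be checked numerically; a sketch using the uniform cdf F(y) = y/(2π) from the compass-needle example:

```python
import math

def F(y):
    # cdf of Y ~ Uniform(0, 2*pi)
    return min(max(y / (2 * math.pi), 0.0), 1.0)

def dF(y, h=1e-6):
    # central-difference approximation to F'(y)
    return (F(y + h) - F(y - h)) / (2 * h)

deriv = dF(1.0)     # should be ~ f(1) = 1/(2*pi)
print(deriv)
```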
As with discrete random variables, F has the following properties:
1.
2.
3.
4.
Example 5.18 (Compass needle, cont’d). Recall the compass needle example, with
f(y) =
    1/(2π)    0 ≤ y ≤ 2π
    0         otherwise
Find the cdf.
For y < 0
For 0 ≤ y ≤ 2π
For y > 2π
Calculate the following using the cdf:
F(1.5)

P[Y ≤ 4π/5]

P[Y > 5]

P[π/3 < Y ≤ π/2]
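Each of these follows from the cdf F(y) = y/(2π) on [0, 2π]; a quick check in code:

```python
import math

def F(y):
    # cdf of Y ~ Uniform(0, 2*pi)
    return min(max(y / (2 * math.pi), 0.0), 1.0)

a = F(1.5)                             # F(1.5) = 1.5/(2*pi)
b = F(4 * math.pi / 5)                 # P[Y <= 4*pi/5] = 2/5
c = 1 - F(5)                           # P[Y > 5] = 1 - 5/(2*pi)
d = F(math.pi / 2) - F(math.pi / 3)    # P[pi/3 < Y <= pi/2] = 1/12
print(a, b, c, d)
```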
5.2.2
Quantiles
Recall:
Definition 5.14. The p-quantile of a random variable, X, is the number Q(p) such that
P [X ≤ Q(p)] = p.
In terms of the cumulative distribution function (for a continuous random variable),
Example 5.19 (Compass needle, cont’d). Recall the compass needle example, with
f(y) =
    1/(2π)    0 ≤ y ≤ 2π
    0         otherwise
Q(.95):
You can also calculate quantiles directly from the cdf.
F(y) =
    0         y < 0
    y/(2π)    0 ≤ y ≤ 2π
    1         otherwise

Q(.25):

Q(.5):
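Setting F(Q(p)) = p and solving gives Q(p) = 2πp for this distribution; in code:

```python
import math

def Q(p):
    # quantile function of Y ~ Uniform(0, 2*pi): invert F(y) = y/(2*pi)
    return 2 * math.pi * p

q95, q25, q50 = Q(0.95), Q(0.25), Q(0.5)
print(q95, q25, q50)   # Q(.5) is pi: half the probability lies below pi
```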
5.2.3
Means and variances for continuous distributions
It is possible to summarize continuous probability distributions using
1.
2.
3.
Definition 5.15. The mean or expected value of a continuous random variable X is
EX = ∫_{-∞}^{∞} x f(x) dx.
Example 5.20 (Compass needle, cont’d). Calculate EY where Y is the angle from vertical
in radians that a spun needle lands on.
f(y) =
    1/(2π)    0 ≤ y ≤ 2π
    0         otherwise
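The integral EY = ∫_0^{2π} y · (1/(2π)) dy evaluates to π; a numerical check:

```python
import math

def integrate(g, a, b, n=100_000):
    # midpoint-rule approximation of the integral of g over [a, b]
    h = (b - a) / n
    return sum(g(a + (i + 0.5) * h) for i in range(n)) * h

EY = integrate(lambda y: y * (1 / (2 * math.pi)), 0, 2 * math.pi)
print(EY)   # ~ pi
```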
Example 5.21. Calculate EX where X follows the following distribution

f(x) =
    0               x < 0
    (1/3)e^{−x/3}   x ≥ 0
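Here EX = ∫_0^∞ x · (1/3)e^{−x/3} dx, which works out to 3; a numerical check, truncating the integral far into the tail:

```python
import math

def integrate(g, a, b, n=200_000):
    # midpoint-rule approximation of the integral of g over [a, b]
    h = (b - a) / n
    return sum(g(a + (i + 0.5) * h) for i in range(n)) * h

# truncate at x = 60: the remaining tail mass is negligible (~e^{-20})
EX = integrate(lambda x: x * math.exp(-x / 3) / 3, 0, 60)
print(EX)   # ~ 3
```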
Definition 5.16. The variance of a continuous random variable X is
VarX = ∫_{-∞}^{∞} (x − EX)² f(x) dx = ∫_{-∞}^{∞} x² f(x) dx − (EX)².

The standard deviation of X is √VarX.
Example 5.22 (Library books). Let X denote the amount of time for which a book on
2-hour hold reserve at a college library is checked out by a randomly selected student and
suppose its density function is
f(x) =
    0.5x    0 ≤ x ≤ 2
    0       otherwise
Calculate EX and VarX.
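A numerical check of both summaries for this density (the exact values work out to EX = 4/3 and VarX = 2/9):

```python
def integrate(g, a, b, n=200_000):
    # midpoint-rule approximation of the integral of g over [a, b]
    h = (b - a) / n
    return sum(g(a + (i + 0.5) * h) for i in range(n)) * h

f = lambda x: 0.5 * x          # the checkout-time density on [0, 2]
EX = integrate(lambda x: x * f(x), 0, 2)         # 4/3
EX2 = integrate(lambda x: x ** 2 * f(x), 0, 2)   # E[X^2] = 2
VarX = EX2 - EX ** 2                             # 2/9
print(EX, VarX)
```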
Example 5.23 (Ecology). An ecologist wishes to mark off a circular sampling region having
radius 10m. However, the radius of the resulting region is actually a random variable R with
pdf

f(r) =
    (3/2)(10 − r)²    9 ≤ r ≤ 11
    0                 otherwise
Calculate ER and SD(R).
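A numerical check (the integrals work out to ER = 10 and SD(R) = √0.6 ≈ 0.775):

```python
import math

def integrate(g, a, b, n=200_000):
    # midpoint-rule approximation of the integral of g over [a, b]
    h = (b - a) / n
    return sum(g(a + (i + 0.5) * h) for i in range(n)) * h

f = lambda r: 1.5 * (10 - r) ** 2      # the radius density on [9, 11]
ER = integrate(lambda r: r * f(r), 9, 11)
VarR = integrate(lambda r: (r - ER) ** 2 * f(r), 9, 11)
SDR = math.sqrt(VarR)
print(ER, SDR)
```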
Why does EX² = ∫_{-∞}^{∞} x² f(x) dx?
Example 5.24 (Ecology, cont’d). Calculate the expected area of the circular sampling
region.
For a linear function, g(x) = ax + b, where a and b are constants,
E(ax + b)
Var(ax + b)
Example 5.25 (Ecology, cont’d). Calculate the expected value and variance of the diameter
of the circular sampling region.
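The linear-function rules applied to the diameter D = 2R (so a = 2, b = 0), using the moments ER = 10 and VarR = 0.6 that the Example 5.23 integrals produce:

```python
# E(aX + b) = a*EX + b and Var(aX + b) = a^2 * VarX,
# applied to the diameter D = 2R of the sampling region
a, b = 2, 0
ER, VarR = 10.0, 0.6
ED = a * ER + b          # 20
VarD = a ** 2 * VarR     # 2.4
print(ED, VarD)
```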
Definition 5.17. Standardization is the process of transforming a random variable, X, into
the signed number of standard deviations by which it is above its mean value.
Z = (X − EX) / SD(X)
Z has mean 0
Z has variance (and standard deviation) 1
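A quick simulation check that standardization really does produce mean 0 and standard deviation 1 (hypothetical X ~ Normal(50, 5²), chosen only for illustration):

```python
import random
import statistics

random.seed(0)
xs = [random.gauss(50, 5) for _ in range(50_000)]

# standardize using the sample mean and standard deviation
m = statistics.fmean(xs)
s = statistics.pstdev(xs)
zs = [(x - m) / s for x in xs]

print(statistics.fmean(zs), statistics.pstdev(zs))   # ~0 and ~1
```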
5.2.4
A special continuous distribution
Just as there are a number of useful discrete distributions commonly applied to engineering
problems, there are a number of standard continuous probability distributions.
Definition 5.18. The exponential(α) distribution is a continuous probability distribution
with probability density function
f(x) =
    (1/α)e^{−x/α}    x > 0
    0                otherwise
for α > 0.
[Figure: Exp(α) densities f(x) for α = 0.5, 1, 2, plotted for x from 0 to 5.]
An Exp(α) random variable measures the waiting time until a specific event that has an
equal chance of happening at any point in time.
Examples:
It is straightforward to show for X ∼ Exp(α),
1. µ = EX = ∫_0^∞ x (1/α) e^{−x/α} dx =

2. σ² = VarX = ∫_0^∞ (x − α)² (1/α) e^{−x/α} dx =
Further, F (x) has a simple formulation:
Example 5.26 (Library arrivals, cont’d). Recall the example in which the arrival rate of students at Parks library between 12:00 and 12:10pm early in the week was found to be about 12.5 students per minute. That translates to a 1/12.5 = .08 minute average waiting time between student arrivals.
Consider observing the entrance to Parks library at exactly noon next Tuesday and define
the random variable
T = the waiting time (min) until the first student passes through the door.
Using T ∼ Exp(.08), what is the probability of waiting more than 10 seconds (1/6 min) for
the first arrival?
What is the probability of waiting less than 5 seconds?
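Both probabilities follow from the Exp(α) cdf F(t) = 1 − e^{−t/α}; a check with α = .08:

```python
import math

alpha = 0.08                               # mean wait in minutes
F = lambda t: 1 - math.exp(-t / alpha)     # Exp(alpha) cdf for t > 0

p_more_10s = 1 - F(1 / 6)   # P[T > 1/6 min] = e^{-(1/6)/.08} ~ 0.1245
p_less_5s = F(1 / 12)       # P[T < 1/12 min] ~ 0.6471
print(p_more_10s, p_less_5s)
```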
5.2.5
The Normal distribution
We have already seen the normal distribution as a “bell shaped” distribution, but we can
formalize this.
Definition 5.19. The normal or Gaussian(µ, σ 2 ) distribution is a continuous probability
distribution with probability density
f(x) = (1/√(2πσ²)) e^{−(x−µ)²/(2σ²)}    for all x

for σ > 0.
A normal random variable is (often) a finite average of many repeated, independent, identical
trials.
It is not obvious, but
1. ∫_{-∞}^{∞} (1/√(2πσ²)) e^{−(x−µ)²/(2σ²)} dx =

2. EX = ∫_{-∞}^{∞} x (1/√(2πσ²)) e^{−(x−µ)²/(2σ²)} dx =

3. VarX = ∫_{-∞}^{∞} (x − µ)² (1/√(2πσ²)) e^{−(x−µ)²/(2σ²)} dx =
The Calculus I methods of evaluating integrals via anti-differentiation will fail when it comes
to normal densities. They do not have anti-derivatives that are expressible in terms of
elementary functions.
The use of tables for evaluating normal probabilities depends on the following relationship. If
X ∼ Normal(µ, σ 2 ),
P[a ≤ X ≤ b] = ∫_a^b (1/√(2πσ²)) e^{−(x−µ)²/(2σ²)} dx = ∫_{(a−µ)/σ}^{(b−µ)/σ} (1/√(2π)) e^{−z²/2} dz = P[(a−µ)/σ ≤ Z ≤ (b−µ)/σ]
where Z ∼ Normal(0, 1).
Definition 5.20. The normal distribution with µ = 0 and σ = 1 is called the standard
normal distribution.
So, we can find probabilities for all normal distributions by tabulating probabilities for only
the standard normal distribution. We will use a table of the standard normal cumulative
probability function.
Φ(z) = F(z) = ∫_{-∞}^z (1/√(2π)) e^{−t²/2} dt.
Example 5.27 (Standard normal probabilities). P [Z < 1.76]
P [.57 < Z < 1.32]
We can also do it in reverse, find z such that P [−z < Z < z] = .95.
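These table lookups can be reproduced with the error function from Python's standard library, since Φ(z) = (1 + erf(z/√2))/2:

```python
import math

def Phi(z):
    # standard normal cdf via the error function
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

p1 = Phi(1.76)               # P[Z < 1.76] ~ .9608
p2 = Phi(1.32) - Phi(0.57)   # P[.57 < Z < 1.32] ~ .1909
print(p1, p2)
```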
Example 5.28 (Baby food). J. Fisher, in his article Computer Assisted Net Weight Control
(Quality Progress, June 1983), discusses the filling of food containers with strained plums and
tapioca by weight. The mean of the values portrayed is about 137.2g, the standard deviation
is about 1.6g, and data look bell-shaped. Let
W = the next fill weight.
Let’s find the probability that the next jar contains less food by mass than it’s supposed to
(declared weight = 135.05g).
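Standardizing gives z = (135.05 − 137.2)/1.6 ≈ −1.34, so the probability is roughly Φ(−1.34) ≈ .09. In code:

```python
import math

def Phi(z):
    # standard normal cdf via the error function
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

mu, sigma = 137.2, 1.6
z = (135.05 - mu) / sigma     # ~ -1.34
p_underfilled = Phi(z)        # P[W < 135.05] ~ 0.09
print(z, p_underfilled)
```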
Example 5.29 (More normal probabilities). Using the standard normal table, calculate the
following:
P (X ≤ 3), X ∼ Normal(2, 64)
P (X > 7), X ∼ Normal(6, 9)
P (|X − 1| > 0.5), X ∼ Normal(2, 4)
We can find standard normal quantiles by using the standard normal table in reverse.
Example 5.30 (Baby food, cont’d). For the jar weights X ∼ Normal(137.2, 1.6²), find Q(0.1).
Example 5.31 (Normal quantiles). Find:
Q(0.95) of X ∼ Normal(9, 3).
c such that P (|X − 2| > c) = 0.01, X ∼ Normal(2, 4)
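Quantiles can also be computed by inverting Φ numerically (bisection on the monotone cdf); a sketch covering both parts:

```python
import math

def Phi(z):
    # standard normal cdf via the error function
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

def Phi_inv(p, lo=-10.0, hi=10.0):
    # bisection: Phi is increasing, so halve the bracket repeatedly
    for _ in range(100):
        mid = (lo + hi) / 2
        if Phi(mid) < p:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

# Q(0.95) for X ~ Normal(9, 3): mu + sqrt(var) * z_{0.95}
q = 9 + math.sqrt(3) * Phi_inv(0.95)    # ~ 11.85

# P(|X - 2| > c) = .01 for X ~ Normal(2, 4): c = 2 * z_{0.995}
c = 2 * Phi_inv(0.995)                  # ~ 5.15
print(q, c)
```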
Table B.3
Standard Normal Cumulative Probabilities

Φ(z) = ∫_{-∞}^z (1/√(2π)) exp(−t²/2) dt

  z     .00    .01    .02    .03    .04    .05    .06    .07    .08    .09
−3.4  .0003  .0003  .0003  .0003  .0003  .0003  .0003  .0003  .0003  .0002
−3.3  .0005  .0005  .0005  .0004  .0004  .0004  .0004  .0004  .0004  .0003
−3.2  .0007  .0007  .0006  .0006  .0006  .0006  .0006  .0005  .0005  .0005
−3.1  .0010  .0009  .0009  .0009  .0008  .0008  .0008  .0008  .0007  .0007
−3.0  .0013  .0013  .0013  .0012  .0012  .0011  .0011  .0011  .0010  .0010
−2.9  .0019  .0018  .0018  .0017  .0016  .0016  .0015  .0015  .0014  .0014
−2.8  .0026  .0025  .0024  .0023  .0023  .0022  .0021  .0021  .0020  .0019
−2.7  .0035  .0034  .0033  .0032  .0031  .0030  .0029  .0028  .0027  .0026
−2.6  .0047  .0045  .0044  .0043  .0041  .0040  .0039  .0038  .0037  .0036
−2.5  .0062  .0060  .0059  .0057  .0055  .0054  .0052  .0051  .0049  .0048
−2.4  .0082  .0080  .0078  .0075  .0073  .0071  .0069  .0068  .0066  .0064
−2.3  .0107  .0104  .0102  .0099  .0096  .0094  .0091  .0089  .0087  .0084
−2.2  .0139  .0136  .0132  .0129  .0125  .0122  .0119  .0116  .0113  .0110
−2.1  .0179  .0174  .0170  .0166  .0162  .0158  .0154  .0150  .0146  .0143
−2.0  .0228  .0222  .0217  .0212  .0207  .0202  .0197  .0192  .0188  .0183
−1.9  .0287  .0281  .0274  .0268  .0262  .0256  .0250  .0244  .0239  .0233
−1.8  .0359  .0351  .0344  .0336  .0329  .0322  .0314  .0307  .0301  .0294
−1.7  .0446  .0436  .0427  .0418  .0409  .0401  .0392  .0384  .0375  .0367
−1.6  .0548  .0537  .0526  .0516  .0505  .0495  .0485  .0475  .0465  .0455
−1.5  .0668  .0655  .0643  .0630  .0618  .0606  .0594  .0582  .0571  .0559
−1.4  .0808  .0793  .0778  .0764  .0749  .0735  .0721  .0708  .0694  .0681
−1.3  .0968  .0951  .0934  .0918  .0901  .0885  .0869  .0853  .0838  .0823
−1.2  .1151  .1131  .1112  .1093  .1075  .1056  .1038  .1020  .1003  .0985
−1.1  .1357  .1335  .1314  .1292  .1271  .1251  .1230  .1210  .1190  .1170
−1.0  .1587  .1562  .1539  .1515  .1492  .1469  .1446  .1423  .1401  .1379
−0.9  .1841  .1814  .1788  .1762  .1736  .1711  .1685  .1660  .1635  .1611
−0.8  .2119  .2090  .2061  .2033  .2005  .1977  .1949  .1922  .1894  .1867
−0.7  .2420  .2389  .2358  .2327  .2297  .2266  .2236  .2206  .2177  .2148
−0.6  .2743  .2709  .2676  .2643  .2611  .2578  .2546  .2514  .2483  .2451
−0.5  .3085  .3050  .3015  .2981  .2946  .2912  .2877  .2843  .2810  .2776
−0.4  .3446  .3409  .3372  .3336  .3300  .3264  .3228  .3192  .3156  .3121
−0.3  .3821  .3783  .3745  .3707  .3669  .3632  .3594  .3557  .3520  .3483
−0.2  .4207  .4168  .4129  .4090  .4052  .4013  .3974  .3936  .3897  .3859
−0.1  .4602  .4562  .4522  .4483  .4443  .4404  .4364  .4325  .4286  .4247
−0.0  .5000  .4960  .4920  .4880  .4840  .4801  .4761  .4721  .4681  .4641
Table B.3
Standard Normal Cumulative Probabilities (continued)

  z     .00    .01    .02    .03    .04    .05    .06    .07    .08    .09
 0.0  .5000  .5040  .5080  .5120  .5160  .5199  .5239  .5279  .5319  .5359
 0.1  .5398  .5438  .5478  .5517  .5557  .5596  .5636  .5675  .5714  .5753
 0.2  .5793  .5832  .5871  .5910  .5948  .5987  .6026  .6064  .6103  .6141
 0.3  .6179  .6217  .6255  .6293  .6331  .6368  .6406  .6443  .6480  .6517
 0.4  .6554  .6591  .6628  .6664  .6700  .6736  .6772  .6808  .6844  .6879
 0.5  .6915  .6950  .6985  .7019  .7054  .7088  .7123  .7157  .7190  .7224
 0.6  .7257  .7291  .7324  .7357  .7389  .7422  .7454  .7486  .7517  .7549
 0.7  .7580  .7611  .7642  .7673  .7704  .7734  .7764  .7794  .7823  .7852
 0.8  .7881  .7910  .7939  .7967  .7995  .8023  .8051  .8078  .8106  .8133
 0.9  .8159  .8186  .8212  .8238  .8264  .8289  .8315  .8340  .8365  .8389
 1.0  .8413  .8438  .8461  .8485  .8508  .8531  .8554  .8577  .8599  .8621
 1.1  .8643  .8665  .8686  .8708  .8729  .8749  .8770  .8790  .8810  .8830
 1.2  .8849  .8869  .8888  .8907  .8925  .8944  .8962  .8980  .8997  .9015
 1.3  .9032  .9049  .9066  .9082  .9099  .9115  .9131  .9147  .9162  .9177
 1.4  .9192  .9207  .9222  .9236  .9251  .9265  .9279  .9292  .9306  .9319
 1.5  .9332  .9345  .9357  .9370  .9382  .9394  .9406  .9418  .9429  .9441
 1.6  .9452  .9463  .9474  .9484  .9495  .9505  .9515  .9525  .9535  .9545
 1.7  .9554  .9564  .9573  .9582  .9591  .9599  .9608  .9616  .9625  .9633
 1.8  .9641  .9649  .9656  .9664  .9671  .9678  .9686  .9693  .9699  .9706
 1.9  .9713  .9719  .9726  .9732  .9738  .9744  .9750  .9756  .9761  .9767
 2.0  .9773  .9778  .9783  .9788  .9793  .9798  .9803  .9808  .9812  .9817
 2.1  .9821  .9826  .9830  .9834  .9838  .9842  .9846  .9850  .9854  .9857
 2.2  .9861  .9864  .9868  .9871  .9875  .9878  .9881  .9884  .9887  .9890
 2.3  .9893  .9896  .9898  .9901  .9904  .9906  .9909  .9911  .9913  .9916
 2.4  .9918  .9920  .9922  .9925  .9927  .9929  .9931  .9932  .9934  .9936
 2.5  .9938  .9940  .9941  .9943  .9945  .9946  .9948  .9949  .9951  .9952
 2.6  .9953  .9955  .9956  .9957  .9959  .9960  .9961  .9962  .9963  .9964
 2.7  .9965  .9966  .9967  .9968  .9969  .9970  .9971  .9972  .9973  .9974
 2.8  .9974  .9975  .9976  .9977  .9977  .9978  .9979  .9979  .9980  .9981
 2.9  .9981  .9982  .9983  .9983  .9984  .9984  .9985  .9985  .9986  .9986
 3.0  .9987  .9987  .9987  .9988  .9988  .9989  .9989  .9989  .9990  .9990
 3.1  .9990  .9991  .9991  .9991  .9992  .9992  .9992  .9992  .9993  .9993
 3.2  .9993  .9993  .9994  .9994  .9994  .9994  .9994  .9995  .9995  .9995
 3.3  .9995  .9995  .9996  .9996  .9996  .9996  .9996  .9996  .9996  .9997
 3.4  .9997  .9997  .9997  .9997  .9997  .9997  .9997  .9997  .9997  .9998
This table was generated using MINITAB.