1. Some commonly used formulas:
(1) For any random variable Y,
E(Y²) = Var(Y) + [E(Y)]²,
Var(Y) = E(Y²) − [E(Y)]²;
with any constants a and b,
E(a + bY) = a + bE(Y),
Var(a + bY) = b² Var(Y);
(2) For any two random variables Y₁ and Y₂,
E(aY₁ + bY₂) = aE(Y₁) + bE(Y₂),
Var(aY₁ + bY₂) = a² Var(Y₁) + b² Var(Y₂) + 2ab Cov(Y₁, Y₂);
If Y₁ and Y₂ are independent, then
Var(aY₁ + bY₂) = a² Var(Y₁) + b² Var(Y₂).
2. Some theorems and results relevant to this course (some of their proofs are beyond the scope of this course):
B(1) Let g X t and g Y t be the moment
generating functions of random variables X
and Y respectively. If g X t = g Y t for all
values of t, then X and Y have the identical
distribution.
B(2) A result of B(1)
If X ∼ Nμ, σ 2 , then Z =
g X t = e
2
μt+ σ2 t 2
X−μ
σ
for Nμ, σ 2 .
∼ N0, 1. Note:
B(3) Let X₁, X₂, ..., Xₙ be n independent random variables with moment generating functions g_X₁(t), g_X₂(t), ..., g_Xₙ(t), respectively.
If W = X₁ + X₂ + ... + Xₙ = ∑ᵢ₌₁ⁿ Xᵢ, then the moment generating function of W is
g_W(t) = g_X₁(t) g_X₂(t) ... g_Xₙ(t) = ∏ᵢ₌₁ⁿ g_Xᵢ(t);
namely, g_{∑Xᵢ}(t) = ∏ᵢ₌₁ⁿ g_Xᵢ(t).
B(4) Let X₁, X₂, ..., Xₙ be n independent normally distributed random variables with E(Xᵢ) = μᵢ and Var(Xᵢ) = σᵢ², i = 1, 2, ..., n; i.e., Xᵢ ∼ N(μᵢ, σᵢ²). Let a₁, a₂, ..., aₙ be any n constants.
If U = a₁X₁ + a₂X₂ + ... + aₙXₙ = ∑ᵢ₌₁ⁿ aᵢXᵢ, then U is normally distributed with
E(U) = ∑ᵢ₌₁ⁿ aᵢμᵢ, and Var(U) = ∑ᵢ₌₁ⁿ aᵢ²σᵢ².
Namely, any linear combination of normally distributed random variables is still normally distributed.
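As a sketch of B(4), a short simulation with arbitrarily chosen constants and parameters (a₁ = 2, a₂ = 5, X₁ ∼ N(1, 2²), X₂ ∼ N(−3, 1²)) can confirm the stated mean and variance of U:

```python
import random
import statistics

random.seed(0)  # make the simulation reproducible

# Illustrative setup: U = a1*X1 + a2*X2, X1 ~ N(1, 4), X2 ~ N(-3, 1) independent
a1, a2 = 2.0, 5.0
mu = a1 * 1.0 + a2 * (-3.0)        # E(U) = sum a_i mu_i = -13
var = a1**2 * 4.0 + a2**2 * 1.0    # Var(U) = sum a_i^2 sigma_i^2 = 41

draws = [a1 * random.gauss(1.0, 2.0) + a2 * random.gauss(-3.0, 1.0)
         for _ in range(200_000)]

# Sample mean and variance of the simulated U should be close to theory
assert abs(statistics.fmean(draws) - mu) < 0.1
assert abs(statistics.pvariance(draws) - var) < 1.0
```

A histogram of `draws` would also show the bell shape, consistent with U being exactly normal, not just having the stated moments.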
Chapter 7 Sampling distributions and the
central limit theorem
Purpose of study: making inferences about a population using sample data.
Making Inferences:
(1) Point Estimation;
(2) Interval Estimation;
(3) Hypothesis Testing.
We illustrate point estimation with a shooting competition.
Three players A, B, and C: Say the center of
the target is your population parameter
(population mean, for instance) that you are
estimating.
Each player shoots 5 times, and their scores
are recorded.
Which player gives the best performance?
A parameter – the target;
An estimate – each shooting score;
An estimator – a shooter.
Which estimator is the best? How would we
judge them?
(1) closeness to the center (the target); and
(2) small variability.
Player A: not close to the center, but with stable performance;
Player B: close to the center, and with relatively stable performance;
Player C: not close to the center, and not stable either.
An estimator– a rule or formula that tells how
to calculate the value of an estimate based
on sample observations.
A good estimator has to (1) be targeted at the center and (2) have a small variance.
Let θ be a population parameter, and let θ̂ be a point estimator for θ. Note that an estimator is a statistic, so its value changes from one sample to another.
In order to judge how good your estimators
are, we need to study the distribution of these
estimators.
Two most important estimators are:
(1) X̄ for the population mean μ: μ̂ = X̄;
(2) S² = (1/(n−1)) ∑ᵢ₌₁ⁿ (Xᵢ − X̄)² for the population variance σ²: σ̂² = S².
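As a quick illustration of the two formulas (the four scores below are hypothetical, not from the course):

```python
# Compute mu_hat = X_bar and sigma2_hat = S^2 for a small illustrative sample.
x = [90.0, 76.0, 82.0, 96.0]
n = len(x)

x_bar = sum(x) / n                                   # sample mean X_bar
s2 = sum((xi - x_bar) ** 2 for xi in x) / (n - 1)    # sample variance S^2 (n-1 divisor)
```

Note the n − 1 divisor in S²: it makes S² an unbiased estimator of σ², a point revisited below.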
For instance, X̄ is a statistic.
How do we find the sampling distribution of
X̄ ?
Population: A. 90, B. 80, C. 86, D. 76, E. 82, F. 96 (6 students; true mean μ = 85)
Sample size n = 2
Repeated sampling: random samples are recorded as

ID    Sample   X₁  X₂   X̄ = (X₁+X₂)/2   μ̂    true μ
1     A,D      90  76   83              83    85
2     B,E      80  82   81              81    85
3     C,D      86  76   81              81    85
...
100
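For a population this small, the sampling distribution of X̄ can also be built exactly by enumerating all C(6, 2) = 15 possible samples of size 2; averaging X̄ over every possible sample recovers μ = 85, illustrating that X̄ is an unbiased estimator:

```python
from itertools import combinations

# The 6-student population from the example above
scores = {"A": 90, "B": 80, "C": 86, "D": 76, "E": 82, "F": 96}
mu = sum(scores.values()) / len(scores)   # true population mean, 85

# X_bar for every possible sample of size n = 2 (without replacement)
xbars = [(x1 + x2) / 2 for x1, x2 in combinations(scores.values(), 2)]

# E(X_bar) over the sampling distribution equals mu (unbiasedness)
assert abs(sum(xbars) / len(xbars) - mu) < 1e-9
```

The table above instead records 100 repeated random samples; enumeration gives the same sampling distribution exactly rather than approximately.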
Sampling Distributions Related to the Normal Distribution

If Xᵢ ∼ N(μ, σ²) i.i.d., i = 1, 2, ..., n, then:
(1) Xᵢ − μ ∼ N(0, σ²),
(Xᵢ − μ)/σ ∼ N(0, 1),
[(Xᵢ − μ)/σ]² ∼ χ²(1), and
∑ᵢ₌₁ⁿ [(Xᵢ − μ)/σ]² ∼ χ²(n);
(2) with X̄ = (1/n) ∑ᵢ₌₁ⁿ Xᵢ,
X̄ ∼ N(μ, σ²/n),
(X̄ − μ)/(σ/√n) ∼ N(0, 1), and
[(X̄ − μ)/(σ/√n)]² = n(X̄ − μ)²/σ² ∼ χ²(1);
(3) with S² = (1/(n−1)) ∑ᵢ₌₁ⁿ (Xᵢ − X̄)²,
(n−1)S²/σ² = ∑ᵢ₌₁ⁿ (Xᵢ − X̄)²/σ² = ∑ᵢ₌₁ⁿ [(Xᵢ − X̄)/σ]² ∼ χ²(n−1),
and X̄ and S² are independent.
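Parts (2) and (3) can be checked by simulation: with Xᵢ i.i.d. N(μ, σ²), the average of (n−1)S²/σ² over many samples should be near n − 1 (the mean of χ²(n−1)), and the variance of X̄ across samples should be near σ²/n. The parameters below (μ = 85, σ = 5, n = 8) are arbitrary illustrations:

```python
import random
import statistics

random.seed(1)  # reproducible simulation

mu, sigma, n, reps = 85.0, 5.0, 8, 50_000
stats = []   # realizations of (n-1)S^2/sigma^2
xbars = []   # realizations of X_bar

for _ in range(reps):
    x = [random.gauss(mu, sigma) for _ in range(n)]
    xb = sum(x) / n
    s2 = sum((xi - xb) ** 2 for xi in x) / (n - 1)
    stats.append((n - 1) * s2 / sigma**2)
    xbars.append(xb)

# E[chi^2(n-1)] = n-1, so the simulated mean should be near 7
assert abs(statistics.fmean(stats) - (n - 1)) < 0.1
# Var(X_bar) = sigma^2/n = 25/8
assert abs(statistics.pvariance(xbars) - sigma**2 / n) < 0.2
```

A scatter plot of `xbars` against `stats` would show no pattern, consistent with the independence of X̄ and S² stated above.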
Proof of Theorem 1.
Examples 7.2 and 7.3
Reading Pages: 1-15; and 346-355.