Some General Concepts of Point Estimation

Suppose we want to estimate a parameter of a single population (e.g. $\mu$ or $\sigma^2$) based on a random sample $X_1, \ldots, X_n$, or a parameter involving more than one sample (e.g. $\mu_1 - \mu_2$, the difference between the means for samples $X_1, \ldots, X_n$ and $Y_1, \ldots, Y_n$).

At times we use $\theta$ to represent a generic parameter.

A point estimate of a parameter $\theta$ is a single number that can be regarded as a sensible value for $\theta$.

A point estimate is obtained by selecting a suitable statistic and computing its value from the given sample data. The selected statistic is called the point estimator of $\theta$.

Consider the following observations on dielectric breakdown voltage for 20 pieces of epoxy resin:

24.46  25.61  26.25  26.42  26.66  27.15  27.31  27.54  27.74  27.94
27.98  28.04  28.28  28.49  28.50  28.76  29.11  29.13  29.50  30.88

Estimators and estimates for $\mu$:

Estimator $= \bar X$, estimate $\bar x = \sum x_i / n = 27.793$

Estimator $= \tilde X$ (the sample median), estimate $\tilde x = (27.94 + 27.98)/2 = 27.96$

Another estimator is $\bar X_{\mathrm{tr}(10)}$, the 10% trimmed mean, where the smallest and largest 10% of the data points are deleted and the others are averaged. The estimate is

$\bar x_{\mathrm{tr}(10)} = \dfrac{\sum x_i - 24.46 - 25.61 - 29.50 - 30.88}{20 - 4} = 27.838$
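The three point estimates above can be computed directly. A minimal sketch in Python, using the values as transcribed from the slide (the transcription may round differently than the textbook data, so the mean and trimmed mean can differ from the slide's 27.793 and 27.838 in the last digits):

```python
# Point estimates of the center mu from the n = 20 dielectric
# breakdown-voltage observations (values as transcribed from the slide).
voltages = sorted([
    24.46, 25.61, 26.25, 26.42, 26.66, 27.15, 27.31, 27.54, 27.74, 27.94,
    27.98, 28.04, 28.28, 28.49, 28.50, 28.76, 29.11, 29.13, 29.50, 30.88,
])
n = len(voltages)

mean = sum(voltages) / n                                 # estimator: X-bar
median = (voltages[n // 2 - 1] + voltages[n // 2]) / 2   # sample median (n even)

trim = n // 10                                           # drop smallest/largest 10%
trimmed_mean = sum(voltages[trim:n - trim]) / (n - 2 * trim)

print(round(mean, 3), round(median, 2), round(trimmed_mean, 3))
```

Each estimate is a single number; which of the three estimators to prefer is exactly the question the next slides take up.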



Each of those estimators uses a different measure of the center of the sample to estimate $\mu$.

Which is closest to the true value? We can't answer that without knowing the true value.

Which will tend to produce estimates closest to the true value?



In the best of worlds, we would want an estimator $\hat\theta$ for which $\hat\theta = \theta$ always. However, $\hat\theta$ is random.

We want an estimator for which the estimation error is small. One criterion is to choose an estimator that minimizes the mean square error $E\left[(\hat\theta - \theta)^2\right]$.

However, the MSE will generally depend on the value of $\theta$.
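The dependence of the MSE on the parameter can be seen by simulation. A sketch (not part of the slides) using the sample proportion $X/n$, whose MSE is $p(1-p)/n$ and therefore changes with $p$; the sample size, $p$ values, and replication count are arbitrary choices for illustration:

```python
import random

# Monte Carlo sketch: the MSE of an estimator generally depends on the
# parameter.  For the sample proportion X/n, MSE = p(1-p)/n.
random.seed(1)

def mse_of_proportion(p, n=50, reps=20000):
    """Estimate E[(p_hat - p)^2] by simulation."""
    total = 0.0
    for _ in range(reps):
        x = sum(random.random() < p for _ in range(n))  # binomial draw
        total += (x / n - p) ** 2
    return total / reps

mse_half = mse_of_proportion(0.5)    # theory: 0.25/50 = 0.005
mse_tenth = mse_of_proportion(0.1)   # theory: 0.09/50 = 0.0018
print(round(mse_half, 5), round(mse_tenth, 5))
```

Because no single estimator minimizes the MSE for every $\theta$ at once, the next slide restricts attention to a class of estimators.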

A way around this dilemma is to restrict
attention to estimators that have some
specified property and then find the best
estimator in the restricted class.

A popular property is unbiasedness.

Suppose we have two instruments for
measurement and one has been accurately
calibrated, but the other systematically gives
readings smaller than the true value being
measured.

The measurements from the first instrument will
average out to the true value, and the
instrument is called an unbiased instrument.
The measurements from the second instrument
have a systematic error component or bias.

A point estimator $\hat\theta$ is said to be an unbiased estimator of $\theta$ if $E(\hat\theta) = \theta$ for every possible value of $\theta$. If $\hat\theta$ is not unbiased, the difference $E(\hat\theta) - \theta$ is called the bias of $\hat\theta$.

We typically don't need to know the parameter to determine if an estimator is unbiased.

For example, for a binomial rv $X$, the sample proportion $X/n$ is unbiased, since

$E(X/n) = \dfrac{1}{n}\,E(X) = \dfrac{1}{n}(np) = p$
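A quick simulation check of this fact (a sketch, with an arbitrary choice of $n$ and $p$): the average of many realizations of $X/n$ should settle near $p$.

```python
import random

# Unbiasedness check: the simulated mean of X/n should be close to p.
random.seed(2)

n, p, reps = 40, 0.3, 20000
avg = sum(sum(random.random() < p for _ in range(n)) / n
          for _ in range(reps)) / reps
print(round(avg, 3))   # should be close to p = 0.3
```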



Suppose that $X$, the reaction time to a certain stimulus, has a uniform distribution on the interval $[0, \theta]$. We might think to estimate $\theta$ using $\hat\theta_1 = \max(X_1, \ldots, X_n)$.

But $\hat\theta_1$ must be biased, since all observations are less than or equal to $\theta$. It can be shown that

$E(\hat\theta_1) = \dfrac{n}{n+1}\,\theta$

We can easily modify $\hat\theta_1$ to get an unbiased estimator for $\theta$: simply take

$\hat\theta_2 = \dfrac{n+1}{n}\,\hat\theta_1$
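Both the bias of the maximum and the effect of the rescaling can be verified by simulation. A sketch with arbitrary illustrative values $\theta = 10$ and $n = 5$:

```python
import random

# Uniform(0, theta) example: the sample maximum underestimates theta on
# average (E = n*theta/(n+1)), while (n+1)/n * max is unbiased.
random.seed(3)

theta, n, reps = 10.0, 5, 40000
max_sum = 0.0
for _ in range(reps):
    max_sum += max(random.uniform(0, theta) for _ in range(n))

mean_max = max_sum / reps                 # approx n*theta/(n+1) = 8.33
mean_corrected = (n + 1) / n * mean_max   # approx theta = 10
print(round(mean_max, 2), round(mean_corrected, 2))
```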

When choosing among several different estimators of $\theta$, select one that is unbiased.

Let $X_1, \ldots, X_n$ be a random sample from a distribution with mean $\mu$ and variance $\sigma^2$. Then the estimator

$\hat\sigma^2 = S^2 = \dfrac{\sum \left(X_i - \bar X\right)^2}{n-1}$

is unbiased for estimating $\sigma^2$.

Recall that $V(Y) = E(Y^2) - [E(Y)]^2$. Then

$E(S^2) = E\left[\dfrac{1}{n-1}\left(\sum X_i^2 - \dfrac{\left(\sum X_i\right)^2}{n}\right)\right]$

$= \dfrac{1}{n-1}\left\{\sum E(X_i^2) - \dfrac{1}{n}\,E\left[\left(\sum X_i\right)^2\right]\right\}$

$= \dfrac{1}{n-1}\left\{\sum \left(\sigma^2 + \mu^2\right) - \dfrac{1}{n}\left[V\left(\sum X_i\right) + \left(E\left[\sum X_i\right]\right)^2\right]\right\}$

… which equals

$\dfrac{1}{n-1}\left\{n\sigma^2 + n\mu^2 - \dfrac{1}{n}\,n\sigma^2 - \dfrac{1}{n}\,n^2\mu^2\right\}$

$= \dfrac{1}{n-1}\left\{n\sigma^2 + n\mu^2 - \sigma^2 - n\mu^2\right\} = \dfrac{n-1}{n-1}\,\sigma^2 = \sigma^2$

as desired.

The estimator $\dfrac{\sum \left(X_i - \bar X\right)^2}{n}$ then has expectation

$E\left[\dfrac{\sum \left(X_i - \bar X\right)^2}{n}\right] = \dfrac{n-1}{n}\,E(S^2) = \dfrac{n-1}{n}\,\sigma^2$

Its bias is $\dfrac{n-1}{n}\,\sigma^2 - \sigma^2 = -\dfrac{1}{n}\,\sigma^2$.
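The two variance estimators can be compared by simulation. A sketch with arbitrary illustrative values ($\mu = 5$, $\sigma = 2$, $n = 4$): the divisor-$(n-1)$ version averages near $\sigma^2 = 4$, while the divisor-$n$ version averages near $\frac{n-1}{n}\sigma^2 = 3$.

```python
import random

# S^2 (divisor n-1) is unbiased for sigma^2; the divisor-n version is
# biased low by sigma^2/n, matching the bias computed on this slide.
random.seed(4)

mu, sigma, n, reps = 5.0, 2.0, 4, 60000
s2_sum = v_sum = 0.0
for _ in range(reps):
    xs = [random.gauss(mu, sigma) for _ in range(n)]
    xbar = sum(xs) / n
    ss = sum((x - xbar) ** 2 for x in xs)
    s2_sum += ss / (n - 1)
    v_sum += ss / n

s2_hat = s2_sum / reps      # close to sigma^2 = 4
biased_hat = v_sum / reps   # close to (n-1)/n * sigma^2 = 3
print(round(s2_hat, 2), round(biased_hat, 2))
```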

Unfortunately, though $S^2$ is unbiased for $\sigma^2$, $S$ is not unbiased for $\sigma$. Taking the square root messes up the property of unbiasedness.

If $X_1, \ldots, X_n$ is a random sample from a distribution with mean $\mu$, then $\bar X$ is an unbiased estimator of $\mu$.

If in addition the distribution is continuous and symmetric, then $\tilde X$ and any trimmed mean are also unbiased estimators of $\mu$.

Among all estimators of $\theta$ that are unbiased, choose the one that has minimum variance. The resulting $\hat\theta$ is called the minimum variance unbiased estimator (MVUE) of $\theta$.

We argued that for a random sample from the uniform distribution on $[0, \theta]$,

$\hat\theta_2 = \dfrac{n+1}{n}\,\max(X_1, \ldots, X_n)$

is unbiased for $\theta$. Since $E(X_i) = \theta/2$, $\hat\theta_3 = 2\bar X$ is also unbiased for $\theta$.
 
Now $V(\hat\theta_2) = \theta^2 / [n(n+2)]$ (Exercise 32) and $V(\hat\theta_3) = \theta^2 / (3n)$. As long as $n + 2 > 3$, i.e. $n > 1$, $\hat\theta_2$ has the smaller variance.
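The variance comparison can also be checked by simulation. A sketch with arbitrary illustrative values $\theta = 6$ and $n = 10$, where the theoretical variances are $\theta^2/[n(n+2)] = 0.3$ and $\theta^2/(3n) = 1.2$:

```python
import random

# Compare the two unbiased estimators of theta for uniform(0, theta) data:
# theta2 = (n+1)/n * max   (variance theta^2 / (n(n+2)))
# theta3 = 2 * X-bar       (variance theta^2 / (3n))
random.seed(5)

theta, n, reps = 6.0, 10, 30000
t2_vals, t3_vals = [], []
for _ in range(reps):
    xs = [random.uniform(0, theta) for _ in range(n)]
    t2_vals.append((n + 1) / n * max(xs))
    t3_vals.append(2 * sum(xs) / n)

def var(vals):
    m = sum(vals) / len(vals)
    return sum((v - m) ** 2 for v in vals) / len(vals)

v2, v3 = var(t2_vals), var(t3_vals)
print(round(v2, 3))   # theory: theta^2/(n(n+2)) = 0.3
print(round(v3, 3))   # theory: theta^2/(3n) = 1.2
```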
But how do we show that it has the minimum variance of all unbiased estimators? Results on MVUEs for certain distributions have been derived, the most important of which follows.
Let $X_1, \ldots, X_n$ be a random sample from a normal distribution with parameters $\mu$ and $\sigma^2$. Then the estimator $\hat\mu = \bar X$ is the MVUE for $\mu$.

Note that the last theorem doesn't say that $\bar X$ should be used to estimate $\mu$ for any distribution.

For a heavy-tailed distribution like the Cauchy, with density

$f(x) = \dfrac{1}{\pi\left[1 + (x - \theta)^2\right]}, \qquad -\infty < x < \infty,$

one is better off using $\tilde X$ (the MVUE is not known).
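The contrast is striking in simulation. A sketch (arbitrary choices of $n$ and replication count): for Cauchy samples, the sample median stays near the center $\theta$, while the sample mean remains wildly variable because the Cauchy distribution has no mean.

```python
import math
import random

# Cauchy example: the sample mean of a Cauchy sample is itself Cauchy
# distributed (it never settles down), but the sample median is stable.
random.seed(6)

theta, n, reps = 0.0, 101, 2000
medians, means = [], []
for _ in range(reps):
    # Inverse-CDF sampling: theta + tan(pi*(U - 1/2)) is Cauchy centered at theta.
    xs = [theta + math.tan(math.pi * (random.random() - 0.5)) for _ in range(n)]
    means.append(sum(xs) / n)
    medians.append(sorted(xs)[n // 2])

def typical_error(vals):
    """Median absolute deviation from the true center theta = 0."""
    return sorted(abs(v) for v in vals)[len(vals) // 2]

print(round(typical_error(medians), 3))   # small: the median is reliable here
print(round(typical_error(means), 3))     # much larger: X-bar is unreliable
```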



The standard error of an estimator is its standard deviation.

The standard error gives an idea of a typical deviation of the estimator from its mean.

When $\hat\theta$ has approximately a normal distribution, then we can say with reasonable confidence that the true value of $\theta$ lies within approximately 2 standard errors of $\hat\theta$.
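For the sample mean, the estimated standard error is $s/\sqrt{n}$. A sketch applying the 2-standard-error idea to the breakdown-voltage data transcribed from the earlier slide (last digits may differ slightly from the textbook's):

```python
import math

# Estimated standard error of X-bar, s/sqrt(n), and the informal
# "within about 2 standard errors" range for the true mean.
voltages = [
    24.46, 25.61, 26.25, 26.42, 26.66, 27.15, 27.31, 27.54, 27.74, 27.94,
    27.98, 28.04, 28.28, 28.49, 28.50, 28.76, 29.11, 29.13, 29.50, 30.88,
]
n = len(voltages)
xbar = sum(voltages) / n
s = math.sqrt(sum((x - xbar) ** 2 for x in voltages) / (n - 1))  # sample sd
se = s / math.sqrt(n)                                            # std error of X-bar

print(round(xbar, 3), round(se, 3))
print((round(xbar - 2 * se, 2), round(xbar + 2 * se, 2)))
```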