•
JACKKNIFING MAXIMUM LIKELIHOOD ESTIMATES
IN THE CASE OF A SINGLE PARAMETER
By
R. A. Ferguson and J. G. Fryer
Department of Biostatistics
University of North Carolina at Chapel Hi,ll
and
University of Exeter, England
Institute of Statistics Mimeo Series No. 891
SEPTEMBER 1973
The main purpose of this paper is to illustrate and quantify the
effect that jackknifi1l8 can have on the lower sampling JI10ments of a
maximum likelihood estimate.
i,s also cpnsidored.
.
c.:onvergenc~
Estimation of variance of the t.wo estimatOrs
In addition, we check on things like elle rato of
of the distributions of the two basic estimatOR tp not1l1a1ity,
inte~vals
and the precision of apPt'oximnte confidence
~~Q
based on them.
First
correct existing series formulae for the lower order moments of the two
estimatea and estimate 8 of variance and give sornG extensions.
Gee how things
wor~
out in some examples.
Then we
Simulations are used (when
necessary) to check on the adequacy of the series approximAtions.
In
two of the three examples that we discuss, jackknifing eRn be considered
to be an im1"fOvemJllt on the Ihadmum Hkelihood utimate on the whole, though
the degroe of
impr()Venl~nt
varies iii lot depencjing on the measure concerned.
In,the third illustrn~iQn jackknifing al~aYG improves bias and usually
vadlmee..
But there is little bias i.n tho maximl.1\U lil<elihood estimlltc anyway and
the ptl.eentltge changes in vadance are lIGudly snlall.
Tukey' s
estim~t:e
of variance turns out t() be
i,nf(~1.'iQr
tn all three examples
to simple alternatives.
- 1 1.
Introduction
----_._Despite the recent increase in the number of publications on the jackknife
relatively little
9IJan~it~~Jve1Y even
now seems to be known about its
performance in less than large samples.
...
This paper should help to remedy the
situation a little since its main purpose is to illustrate in numerical
terms what can happen to the lower moments of a maximum likelihood estimate
for a single parameter after it has been
j~ckknifed.
We also spend some
time considering estimates of variance of maximum likelihood estimators and
their jackknifed forms, and the problem of setting confidence intervals for
the single parameter.
The only prior publication on the jackknife that seems
to be directly relevant to our study is that of Brillinger (1964).
We have elected to nlake comparisons of maximum likelihood estimates
with their jackknifed forms via the r.oments of their sampling distributions,
but
recogni~e
the limitations that this approach imposes.
The exact
moments of these distributions are nearly always impossible to derive.
this is so we use two lines of attack;
When
firstly, we approximate them by the
first few terms of a series expansion and secondly, we calculate them from
computer simulations of the problem.
Using these simultaneously (or the
exact moments instead of simulation if they are available) also 8ives an
accurate picture of the snmple size needed to validate the series expansion
in practice.
Let us be more specific about our objectives.
The jackKnife is a
procedure that he.s been shmVIl to have certain desirable properties for a Hide
range of problems and been conjectured to have many more.
In Qrder to help
resolve some of these conjectures we will be addressinr ourselves here to
questions about jackknifed maximum likelihood estimators like the fo1101"ing:
(i)
(ii)
how much bias does the jackknife eliminate, if any?
does jackknifing reduce variance and if not, is its net effect on.
mean square error favourable or not?
- 2 -
(iii)
does the jackknife cut down the third moment of the sampling
distribution, and what effect does it have on the fourth?
(iv)
how does Tukey's well-known estimate of the variance of the
jackknifed estimate compare with the standard one and how
satisfactory is it as an estimate of the variance of the
maximum likelihood estimate itself?
(v)
does the distribution of the standardised jackknife estimate
tend more quickly to normality than that of the maximum
likelihood estimate?
(vi)
which estimate, maximum likelihood or its jackknifed form,
leads to the better confidence interval, and how do they compare
with Tukey's method for setting confidence limits?
since the jackknife is a procedure with high
~pplied
potential it
seems to us that questi.ons like these ought to be resolved.
Simulation of the problems in this paper have presented a few challenges
that h:we been overcome.
Second-order moment formulae in principII" should
have been a minor problem, since almost all the ones we need are summarised
in Brillinger's paper.
Unfortunately, however, there do seem to be a few
minor typing errors in that paper and we have thought i.t worthwhih recording
here what we believe to be accurate versions of these
these are
2.
need~d
fOl~ulae.
In particular,
if some of our recommendations are to be adopted in
pr~ctice.
Notation
We shall suppose that we have a random sample of size n, Xl' X2 , ••• Xn ,
from a single-parameter distribution with density function f(x; 6).
This
function will be assumed to satisfy certain regularity conditions that will
become apparent as they are used.
To add to the generality the basic random
variable is taken to be vector-valued, but since the form of the algebraic
- 3 -
results is the same '\lhether X has one component orten, we wi 1.1 not bother
to use the conventional bold-type lettering.
In order to condense the
algebra of the next section we will denote
f(x. ;
1
e) by f. 1
by I
by J
by K
by L
n
by a. and
1
r
~::'g f i +
ae 2
I~) hy b.
r a. by A
i-l
n
and
1
{'",og 'i K}
ae 4
E
i-l
rl"-"-~i
Jl by c.- ar.d
ae 3
J
1
b. by B
_-1
n
.-
"
i=l
1
c. by C
1
-n
r.
by d. and
1
d. by D
h·l
{~
L} by e.
a6 5
1
n
r
and
1
e. by E
i=l
(1)
1
Evidently all of the bls, c's, dIs and e's have zero means;
f(x;
e) is such that E(a.)
= a also.
1
we assume that
In addition, '\Ie shall denote
(1,11) -
-
(a2~:~.i)
2
. (3_~_:_~__~) }by
(1, 1, 11, 11, lll)
(2)
and so on.
This is simply the single-parameter version of a multi-parameter
notation used in Fryer & Robertson (1972).
- 4In this paper. the maximum likelihood estimator of a. a • is taken
n a log f.
to satisfy the equation E
aa 1
O. As far as' the jackknife is
i-I
la-a
A·
concerned. we shall suppose that the basic data is divided (at random if
I'
<
n) into
I'
separate. equal-sized groups with n - rs.
the i th group is omitted is denoted e(i)'
a when
estimate of
The maximum likelihood
Using this
notation the jackknifed estimate, e , is then defined bye. ,{re - (l-!) ~
p
p
r i-I
.!I"
r
..
A
Ea. where the api a{ra i-I pI
.
(r-l~(i)}
8(.)}
are the usual.pseudo-values.
Again, in order to simplify the algebra later on we use A. to denote Ea.
J
1
where the sUllllll4tion is taken over membeu of the jth group. with a similar
meaning for B., C., D. and E••
J
3.
J
J
J
.wment Expansions.
We start 'j1th the maximum likelihood estiruate, 8. The local Taylor
.. a log f.
expansion of E
1.
in a neighbourhood of 6 8\VeS
i-I
aele •
e
.
a log f.1
ao
la-;
• 0 • A + (e - G)
1
..
6 (8 - e)S (D
+
(B - nI) +
I
(8 - 8)2
(C + oj)
1
..
+ nK) + 24 (8 - e)~ (E + nL) + •••
(3)
Inverting (3) about the origin. we find that
(a.. - e) • -a+"
~ +
n
6
1.- + --
'n2 n3
n~
+
0
~"5S]
(4)
A
where a. y
B•
b! + A2J
12
y
a
21 3
!S(AB2 +
I
I A2C)
+ 1'+1
(1 JA2n
2
+ ! U 3) + J2A3
6
215
1
- 5 -
(5)
2r
r
2r+1
In general E(A ) is O(n ) and is of the same order as E(A
) and
similarly for other terms.
To find series expansions for the moments of 8 about 8 we now have
to evaluate terms like E(A 2B).
All of those that are needed for the
.
order we are considering here are listed in Appendix 1.
Using them and
the symbol Ese to denote the series expansion of an expectation (and Vse
for a variance) we find that
"
Ese (8 - 8)
1
= nI z
[(U, 1.) +
pJ
+_1_ [(11,11,1)+ j(111, 1, 1)
n 2 I3
+ _1_
n2 r
t+
[2. J(ll
2
"
1
+2. (111,11)+ H1111, 1) +jiJ
2
1) + 1 R (1, 1, 1) + 3(11, 11)(11, 1) + 3(111, 1)(11, 1)
6"
+ 3J(11,11) + 2K(11, 1) + 3J(111, 1) +
+ _1_
n2I~.'
n J2(1, 1,
1) + 6J (11,1)2 + 15 J2(11, 1) + 15 J3.1 + 0 (n- 3)
2
8
E (~_ 8)2
1
1
1
se
~ nI - nZr + n 2 j3
i
+ n2
E
se
11
i J~l
[2 (11, 1, 1) + 3(11, 11) + 3(111. 1) + KJ
[J(1, 1, 1) + 6(11,1)2 + 12J (11,1) + 1~ J2] + O(n- 3)
(0 - 8)3 = 112I3
1
[(1, 1. 1) + 9(11, 1) +
f J]
(6)
These results, which cover both the discrete and continuous cases, check
with those of Bowman and Shenton (unpublished Oak Ridge National
- 6 -
Laboratory Report) which were derived for the discrete case only by a
very indirect method.
Further terms in the expansions are not a
practical proposition at the present time.
ep , we see that an expansion
Turning now to the jackknifed estimate,
for cap - e) is easily
asse~bled
from (4).
For the present time we will
regard the number of groups, r, as being fixed as n increases, and consider
..
other possibilities later on.
The following expansion of (6p - e) is
enough to determine the second, third and fourth moments o! ep about e to
-2
order n •
A
..
(6
p
a* + - rB*
- e) • _..
n
where
+ •••••
+
Cr-l)n2
a* • A
I
~AjBjJ
13* • f2 [AB -
J
+
2.... [A2 - rA.2)
j J.
213
y*.
+
-
·h r(l + !) AB2 - ArB.2
- 2BrA.B .
jJ
JJJ
.
r
rA.B~
+
ICI + 1) A2e - AT.A.e.
IcrA.2
+
I rA.2c.]
JJJ
j J
+ ~.,.
r
j J
1'1(1 + 1.)
r
1'2
2
- 1.2 JB rA.
. J
J
- i
+
jlJ
J
JA2 B -
3JA
EA.B.
j J J
2"3
2
J rAe B. + 1. (l + 1)
6
r
j J J
KA rA.2 + ! K rA.3]
6
• J
j J
J
(7)
- 7 -
In all of these terms the range of summation for j is from 1 to r.
The bias in a can be found by linear operations on the bias for 0,
p
but to save space we omit the details, and simply quote the final
result.
To derive the moment expansions for
ap we now need to evaluate
terms like E [A 2 r.A. B.J , the maximum likelihood counterparts of which
j J J
were given in Appendix 1.
All of those needed for the expansion (7)
are set out in Appendix 2.
Ese (Op - 0)"
r
1
Ese(Op - 0)2 =:11
,.
E (0 - O)lt
se P
is the coefficient
- - - - - - H2
(r-1)n 2
2
of the n- term in E (0
se
Ese(Op - 0)3
Summarising, we find that
-
0).
r
(r-1)n 2I
+
r
--2 3
+ __r___
(r-1)n 2I It
[(11, 1)2
I
--2
n 3
I, 1) + 6(11, 1) + 3J]
[(1,
(11, 11)
(r-l)n t
+ 2J(1l, 1) +
fJ
3
+ 0(n- )
+ O(n -3)
1
. _._ 3....
+ O(n -3)
(8)
n 21 2
Because there are usually fewer terms in the moment expansions for e
p
than there are in their maximum likelihood counterparte, it is tempting
to conclude that the moments of 0p are therefore smaller in absolute
terms, but this remains to be seen.
In these expansions we have assumed that the nUlnber of groups, r,
is fixed.
However, some might argue that it is more natural to fix s,
the number of observations in each group, and let r
~ ~
as n
~~.
For
this reason we have also traced the algebraic steps holding s constant.
- 8 -
It turns out that the results for fixed s to order n-2 can be very
simply obtained from those for fixed r.
All we need to do is to set
-1
-2
r = n in (8) and retain terms in nand n
Equivalently we can
simply omit the factor (r:f) in the expansions at (8).
Only if we went
on to consider terms of order n- 3 would fixed s show itself clearly.
Of course we are not restricted to fixing r or s as n increases either we could take them both to be proportional to n l for example.
However,
formulae in this case are·much more difficult to derive and we will not
attempt to deal with this case here.
It is probably worth mentioning here that several checks that we
have applied to the jackknife results at (8) have turned out to support
them.
For instance in the case of a Poisson distribution with mean a
when both a and
a
are the sample mean. the appropriate terms in the
p
moment expansions of
a
p
sum to zero as they ought, despite the fact tha-
terms like (11, 11) and (111. i) are non-zero.
We also evaluated the
moment expansions in a couple of very simple cases where we knew that the
second order terms did not sum to zero.
exponential density, f(x) =
E
se
(a
p
a2 + 2r
- a)2 = -n
2
Estimating a in the negativL
-ax
ae
, x > 0, gave for example
a2
+ •••••
(9)
n (r-l)
compared with the moment of the maximum likelihood estimate
a2
~
Ese (a - a)2 = -+
n
(10)
Again. estimating a when the data come from an N(a 2 , 1) distribution
leads to
=
1_ +
r
2
4na
32(r-l)a 6n 2
+ •••••
(11)
compared with
E
se
(8 _ a)2
• _1_ + _1;;.;;5:....-_ _ +
4na 2
(12)
- 9 -
Since the expansion for the variance of 0 in the first of these examples
(~2
is
+
::~)
and (_1_ + _-1-.i4n0 2
64n 2 e6
+ •••
J
in the second, these are
both cases where in the series sense jackknifing reduces both the bias and
the variance of the estimate (r = 2 in the first caseact;ual1y gives
equal variance).
This also seems to be the appropriate place to record moment
expansions for estimates of the variance of
e
and
ep •
The standard
procedure for e!ltimatl,1g the asymptotic varie.nee of 0 is to take the form
~I
We denote this stati stic by VI and this also
and substitute 0 for 6 •
in
~l
alternative estimate for V(O) and V(e ) which we denote by
V2
serves as an estimate of V(e ).
p
Substituting
0p for
e
p
gives an
concentrate her.e on these two estimates together with Tuk~y's welJ-known
.
.',
1
r
estlmate of virlance, ST<; = - - - r (0 • - 0 )2. There ar.e other estimates
~(r-l) i=l pl
P
A
h
'''hieh might be used, for example,
r
and
J.~-l~. {r eh) -
r( 2 ), but these only seem likely to vwrk t<'ell in
P
i=l
r
special circumstances.
D.R. Bril1inger (1966) has suggested the use of
r( 2 )
in certain situations where the bias of
e
is small.
As far as maximum lijtelihuod estimates are concerned ,,,e can shm" that
1£
r
(r-l)
se l. ' - r r'
(l.
i=l
1
e~i)
nI
J'!
1
where - n1 2
[(11, 1)
Z:::9_
[(11, 1) +2
-I-
iJ
is the first order term in the bias of
(13)
e,
and this
of course squares with his suggestion.
Using
E
se
(VI)
expansions for
=·L
nl
+
L {L [..!<2'
n2
13
I:
a
and
0p given previously we find for VI that
+ (111, 1) +
(!.!-L]l)
2
+ (1..!.LJ..,?......!.)]
2
(14)
- 10 -
F.
1
~
se
(V
1
1
- .._)2
"'--
nl
As far as V2
E
se (Vi)
is concerned
",1-+1nl
2
n
+
1.1
(15)
[J2 + 2J(11. 1) + (11. 1)2J
~
f
{1.1 3 [!2
se
1
n
~
(V2 - -1)2
+ (lll. 1)
+
'" E
se
{V
1
(11, 11)
--2--
- 1-_)2 to order n-3 but
•
+
nl
(11. I! l)J
2
[J2 + 2J(1l. 1) + (11, 1)2.1 } + O(n-3)
(16)
4
The moments of 5T2 are a different story. however.
A straightforward
but very·tedious piece of algebra for fixed r produces the result
r (11, 11) 1 -
(5 2 ) .. 1-.. + __l2..(?.r-3)
se T
nl n 214 (r-l)2
E
1 3 + (11, 1)2 + J 2'2 + 2J(l1, 1)]
[21(U, 1, 1) + (1, 1, 1) J .;. 21(11, 11) + 4(11, 1)2
+ 31(111, 1) + 9(11, 1) J + lK + 3
J~
3
+ 0{n- )
(17)
r(2r-3)
r
bY 1·1n t h'"
•.
h
. and rep 1aClng
- - . - by 2 an d" (i-I)
18 expreSS10n glves t e
(r-1)2
correspond~.ng form for fixed s.
the disconcerting feature.
E
se
The mean squared error of
si however
is
For fixed r we can show that
+ 0(n- 3 ), giving it zero estimating efficiency
(52 - 1-)2 ..
2
T
nl
n 2 r. 2 (r-1)
~
in the limit compared with VI and V • Using fixed s on the other hand
2
3
reduces Ese (52T - 1-)2
to order. n- which presumably adds to the case for
nl
using fixed size blocks.
~stimating
Incidenl:dly. several simple checks (for example
the mean of a Poisson distribution) applied to the formulae at
(14) to (17) again proved to be positive.
Finally in this section a word about c!stimating the variance of e
unbiasedly to second order.
e
for
In vbw of (14) we cannot simply substitute
e in
-
L(ll, 1) +
2J"J2)J
~
where Ese(e -e)2 is given at (6).
,
- 11 -
\~e
have also to subtract the second order term in Ese (VI) and when this
is done an unbiased estimate bceor.les
I
{ I
n1'-~n21
5 (11. 11) + -23 (11. 1.
[!<2- + 2(111. 1) + -2
_..l_
+
n213
[J(l. 1. 1) + 3(11.1)2
where 0 is substituted for 0 •
of V(O) using 0
p
+
·~i
1)']
J(l1. 1) + 2J 2 ]}
Similar remarks apply to the estimation
and tC" V(O ) using either
p
ep
or
e.
He could also consider
estimating 0 itself along the same HOM. sinc,e on substituting
J\"e
"J)
"
J
. (11. 1) + '2
in
(18)
get an unbiased estimator of
e
e
for 0
to order n
'
4.
Some Illustrations
We now turr tc some concrete estimation pr... :'lems (which·were selected
more for ease of computation than ror their intrinsic interest) to show
how things can work out ".umerical1y i11 practice.
Before looking at the
details thrdgh. we ought to make it clear from the outset that \;That ',o/e are
doing is comparing the performa:1ce of the
certain situations.
eitl~
t~.·o
It ':oes nol follow that
estimat.ors. 0 and
\>1e
op •
in
would recommend the use of
of them in any particular instance. since a completely di fferent
kind of estimate may be clearly preferable.
~xample
1.
We suppose that X has the special beta density
f(x) • (0 + 1) (0 • 2) x 9 (1 - x)
and that we want to estimate
formulae for 6 and
alog f
-a-oI
1
= (1+9)
0
p
2
<:
and
9 > - 1
.
+ log x
I}
+ (2~O)2 "
<:
1
In order to calculate the moment
we need the fo11m"ing facts
1
+ -(2+0)
= :-a 2 a0loa- f = {_l__
0+0)2
e.
x
"1here ()
(9)
-1
.
-12-
L .. 24{-_1__
(1+0)
-S + _l
__-S}
(2+0)
(11. i) .. (Ill. 1) .. 0,
From this we can deduce that
(11, 1. 1) = -1 2 and so on.
It is qui te simple to show that t
1
0=
n
r
-
i=l
n
log x. = log G say is
1
a single sufficient statistic for 0 nnd that the basic density is a member
of the well-known Koopman exponential class.
estimator for 0 is
giv~n
The maximum likelihood
by
(20)
and this is also sufficient.
Clearly jackknifing here will destroy the sufficiency
property, but this may not be such an [lhsurd thing to do if we cannot find a oneto-one function "f log G \o,it:h suitable! bias ana variance.
In any event as '·le
have said before we are using this estimation problem.only for illustrative purpos('s.
First let us look af". the
selected va:ucs ofn and
-1
the term in n
O.
r(~sults
~Jote
for bias which are given in Table 1 for
that in the series vallJes the contrihution of
completely dominates the bias of
jackknifed estimate is virtually unbiased.
t~e
0, and that as a consequence the
have thought it sensible to si.mulate
the problem too in order to chc,ck on the accuracy of the series results.
The
simulated results for 0 given in Table 1 appear to be pretty much in line with the
~eries
results, though a very large. number of generated samples is needed to
establish this.
For obvious reasons even more simulation runs are needed to check
on the accuracy of the bias of O.
p
Although the results we have here are less rel-
iable than those for 6 because of the prohibitive number of runs needed, they do
confirm the order of magnitude indicated by the series formula.
In all cases the
difference between the series and simulated values is covered by two standard errors
of the simulated estimate.
All of the results in Table 1 assume that r'is fixed and
set equal to the prevailing sample size and this is the optimal choice for r.
Ush::
the series formulae for fixed s (... 1) gives very similar results but they tend to be
slir,htly further aHay from the simulated values than those for fixed r
>=
n.
-13Th" corresponding series (lnd simulated ciJ.lculntions fer the
s~t
variances and mean squared errors of the estimates are also
in Table 1.
out
tel~
Note that in the series results the first order
is
dominant unless the sample sbc! is very small and that jackknifinp.,
which
alw~~ys
reduces variance, leads to useful gains in lfiean squared
error in the smaller sized samples.
All
()f
the simulated values suggest
that t.he series results in Table I are close to the true valuer, of the
second order moments.
We have again quoted results for fixed r • n.
Calculations of both series and simulated
seco~ld
moments ror other
values of r indicate that we should always usc as many groups as possible,
and this is what \ole would expect to happen in general.
The series results for the third moment that we have derived only
to order n
-2
E
give the ratio
se
(6-0)3
;~--(;-=-;) 3
se
a value of 7/ ,
4
The simulationr,
p
however, give somc\"lInt different ratios t"hen n is small and really qui te
diff~rent
values for the individual third moments from the series
results when n < 20, as \>le can see frem Table 2.
entirely unexpected.
This of course is not
Hhether the measures of skewness for
e
Clnd
ep
arf>
similar or not depends on whether the Ir.oments are taken about 0 or about their
respective means.
Taken about E(f) or E(O ) tho measures are very similar
p
especially when n > 6, but the picture ic very
the location, because of the bias in
e.
diffct'C·~t
if
e
is used as
The results for the fourth
moment displayed in Table 3 shO\o,' the same trend but are less pronounced.
Judging by the ca lculations in these tab 1es for the thi rd and fourth
cumulants the distribution of 0
p
can be adequately approximated by a
normal density about 6 for much lowe l' values of n than can the
distribution of
e.
Again in both tebles fixed r is set equal to the
sample size since this produces the best results.
s .. 1 give very similar values.
Results for fixed
-14-
Moving on now to estimates of the variance of
e
and
ep'
we
find some numerical values for expected values set out in Table 4.
-2
The simulated results indicate that the series values to order n
adequately represent the moment in question.
and set equal to n.
In all cases r is fixed
Note that E(V2) < E(VI) < E(Si> invariably both
for series and for simulated calculations.
Comparing the results in
Table 4 with those in Table 1 we see that VI gives the best estimate of
Vee)
si
for both the series and simulation results.
.
appears to have least
bias for estimating E(e - e)2 using the series results. but comparable
bias with Vl when the simulated results are contrasted.
standard error of Si
However. the
is roughly double that of Vl and this ratio goes
VI has sliehtly less bias than V2
up rapidly as r is decreased.
(which is always closest to ni ) as an est~~ar.e of E(e p slightly larger standard error.
0)2 but
There is clearly a case to be made
here for using Vl whichever of the three measuref; wr wish to estimate.
Finally for this illustration we :;ive some numerical comparisons for
confidence intervals.
We are mainly concerned here with comparing the
probabilities of covering the true value of 0 when
methods are used.
approximate
Method I uses the nominal 100(1-a)% level interval
, whe!"e
normal density.
~hree
is the upper 100
I
% point of the standardised
Method II is similar and based on
Method IlIon the other hand uses
ep-+
t
".
To (r-l)
number of groups used in the calculations and t
eP -+
~
f/lrt/
72
~
where r is the
ex
ex
is the upper 100 2" %
2"' (r-l)
point of the central t distribution based on (r-l) degrees of freedom.
There are other variations that we might have tried like
r;:-"
..
or
[e.!.
f/l
ex
y
v
2]
[~
+ f/l
p-
%
~l]
.
(or even modifying the number of sta.ndarderrors used)
2"
but the line had to
-15·-
be drmm
somc\~hcre.
Estimated confi deuce c.oefficients (wi th stanclClrd
errors of generally less than
set aut in Table 5.
5J
%) for various values of 0 and n are
Method I seems to rive sli~htly too mRny 'low'
intervals (given only for
11
..
20) but is
othetwis~1
satisfactory.
Method II is poor for small samples but improves as n increases.
For
n '" 10 the confidence 'coefficient is some 3-4% too low :md there are
far too many 'low' intervals.
In view of the fact: that
ep
is vi rtl1<111.y
unbiased and the comparative results for skc\mess and kurtosis this is
something of a surprise.
It is true that V2
is hiased dO\Ylwards for
V(iJ ) and so replacing i.t by V1 which is biased ulmards l1Iay
p
improve the overall coefficient.
were
~·egi.stered
use fixed r
=:
nut even if all of the improvement
in the '1m',' interval area, it
,vould still produce a
ht~tter
balance.
scc~ms
Hkely that Hethod T
All of the results for He-thad II
other values of r mere'y make
n
','ell
th)nf'.~
worse.
Method III seems to gi.ve satisfactory overall caverare but the split
between too 'hil~h' and too' low' intervals is wOrJ:~'i!lg.
again gives the best results.
Fixed r '" n
In terml, of physical length for a nomillal
coefficient, intt\rvals generated by H(~thod I ar.:- somewhat. longer thiln
those for Hethod II but consi derably shorter than those prr.)(luccd by
Betholl III even when n .. 50.
But this is only to he c':pcctecl in vie'"
of OUf previous results.
Our second illustration concerns the cstimatimJ cf
(rather than the more usual
succc£s in
a Bernoulli
10~
~
odds) where p is the probabil~ty
trial and q ., 1 - p.
In terms of
¢'
pI
•
q
of
the frequency
fune-tion of the number of successes, X, in a single trial is simply
x
f(x' 4»
,
.. __
.L_
(1+¢)
for x .. 0, I •
(21 )
-16The maximum likelihood estimate of ¢ in n trials is given by
A
¢, ..
o/r
(FkT)
for k .. O. 1. 2 ••• (n-1)
where k is the total number
n
of successes.
When k
n
n there is no maximum likelihood estimate. but
if we define ¢ .. c kIn here where c
>
(n-l), then viewed as a whole ¢
is a one-to-one function of kin' which is a single sufficient statistic
for
¢. and is therefore a sufficient statistic for ¢ itself.
The
obvious problem with ¢ in practise is that we have to nominate a suitable
value for c.
To avoid generating several sets of results for different
values of c we have decided here to make comparisons between ¢, and the
jackknifed estimate, ¢ • using a truncation of the basic binomial
p
distribution in the
upp~r
tail.
~~en
the truncation is light. ¢
is not too large, n is not minute and c is not outrageous the conclusions
from the two approaches will be similar.
Of course we have the same kind of definitional problems with ¢ •
p
As we reduce the value of r so this difficulty becomes more acute and
the further we have to truncate the underlying distribution to avoid it.
But the higher the degree of truncation the wider the gap becomes between
the properties of the Lppropriately defined estimate for the fuH
A
distribution and those of ¢ over the truncated distribution.
p
At the
A
same time ¢ becomes less and less a maximum likelihood
subject of this study).
~stimate
For these reasons we use r .. n only for
simply omit the points k .. n. (n-l) from the sample space.
(the
ep
and
This turns
out to be one of those cases where jackknifing with r .. n does not
greatly complicate the
for~m
of the estimate and it is very simple in
J
fact to show that; .. ; [1 - (n-Up
n(n-k-l) .
for k .. 0.1.2 ... (n-2).
A
Note that 0 < ¢ < ¢ • By suitably defining ¢ for k .. nand (n-l)
pp
(and changing it at k .. (n-2» we find that ¢ is sufficient for ¢ too.
p
-17which is not what usually happens when a sufficient statistic is
jackknifed.
The value of ¢
p
at k '" (n-2) needs changing for two
A
reasons.
Firstly, even though
~
p
is not continuous it can destroy
the monotonic nature of the estimate and with it the sufficiency.
Secondly,and more important, it can lead to an outrageous estimate
Consider for example n '" 100 and k '" 98.
of $ with high probability.
~
In this case
cjlp '"
¢ '" 97/ 3 and
'" 49 and $p '" 0.49 though when k '" 97,
97
"6' Hhen
Ij>
is very large we will tend to get large values of k
so in ttiis situation~ with k '" (n-2 )~¢p will produce a disastrous estimate.
In practise we might replace it by
that we comment on later.
when
cjl
k
an
* '" -_._----(n-k+l) ,
lIsing ; '" i at k '"
p
n
or
Ij>
Of course
Ij>
estimate
(n-2)
is small will alrr.ost certainly be a peint in its favour but
the probability of observing that number of successes ,,,,i 11 then he
negligible.
Beca~se
the value taken by
little effect on its moments unless
in any way here sinccwfl always take
takes at
fn
~
cjl
p
at k '" (n-2) will have very
is large, we have not adjusted it
cjl ~
2.
Of
co~rse
the value that ¢
p
-2) will not affect. the rnom('nts when $ is large either provided
that we can then p'~rsuacie ourselves that it is 9. that we are interested
p
in rather than P l!
q
i~ere
is no need for simulation of the moments of
these estimates in this binomial example since the ,,"xact values are
ea&y to compute.
computed
Series approximations are
"lIln{?(',,~sary
too but we have
them (on the ass\wption of a full binomial distribution) for
several va lues of nand q, merely to
s(~e
if m.d ".,Then they adequately
represent the exact values.
Before looking at the details of the sampling moments we briefly
'
d lSCUSS
•
a t h'lr d estlmate
~
* '"
n
(n"':'k+i5"'
. astl.mate
•
ThlS
results from
questionning how one should choose 0 and e: to minimise the first order
(k+e:)
bias of -(n -k+6T
Provided that
0 > 0 the bias of this estimate can
-18-
be written as
[(1;",) { Hl-<5) + f;} + O(n-2)J.
Since we will not know
the true value of '" in practise the solution must be <5 = 1 and
which eliminates
E .. 0
But how does ¢* compare
first order bias entirely.
wi th ¢ ?
P
The typical pattern of behaviour of the bias in our three esti.mates
is shown in figure 1.
We have used $ .. 2 there because the graphs become
more distinct as we raise the value of
$.
Additional information on
bias for", = 0.5 and 1.0 is given in Table 6.
The initial behaviour
of the bias in $ is peculiar and quite unexpected.
First the estimate
is biased downwards, then it passes through zero to become biased
upwards.
It
then reaches a peak when n is still smali and finally
tails off to zero.
The series approximations on the other hand are
positive monotonic decreasing functions of n aud they usually become
tolerably close to the exact values after the initial peak in the
bias has been reached.
The behaviour of the bias in both '"p and ¢*
is quite different from that in
$.
Tl.ay are always biased dmvnw3rd
and their bias tends monotonically to zero as
size.
~<1('
increase the sample
It came as something of a surprise to us to f:nd that the
bias in $* is much smaller than that in ¢.
p
improvement on $.
However both are a great
Putting the initial behaviour of
~
to one side it is
clear that as we raise the value of '" so the bias in the estimates takes
longer (in n) to disappear, just as we might expect.
The patterns of variance of the three estimates are similar however,
though the absolute levels are quite diffl!rent as
and Table 7.
~~e
see from figure 2
First there is a rise in the region of small n, then the
variance decreases to zero after showing a single peak.
term becomes dominant (so to speak) which takes longer as
Until the first order
~
increases,
-19-
the vari<1l1ce of ¢p is much sw.aller than that of
than that of
<t>.
4>
*
and vf>ry much smaller
The key question to ask now is hOH thinrs '\o;lork out
the bias and variance are put tor,ethcr to form IDP,an squared errol'.
~~hen
There
is an initial region for very small values of n \Jhe1'(, ¢ has the least mean
squared error (because of thz behavi.our of its bias).
of its bias begins to 'vane the lead changes hand and ¢
A:; the dominance
*
is
pref(~rable.
But befnre long the smaller variance of ¢p outweighs
the smaller bias in ¢ *
.
and the gap between the two becomes considerable.
way behind both ¢
and
p
rf>
* after
Evidently
<t>
lags alonp,
its early dominance, so unless n is
very small or very large there is little to be said in its favour.
As
e):pected,~dlen
distrihution of ¢
~.,
converges to normality about
p
the distribution of
criteria.
we get mvay from very small val\l(-& of n the
t,
if the third and foureh
This can be clearly
s(~en
mur.h f"ster than cloes
~~ments
fron! Table 8
,,,her<~
ere to be the
vahw8 of these
highcl' ffiOn!<=;nts are shmm before and nfter standardisl!don.
The standard estimates of the
sc~cor:1
we have previously denoted by VI and V2
f)
()2l0r;
take E (·-~-;2-
order moments of ¢ and ¢
that
have ver./ simple forms if \om
= E {(1+¢)-2- k¢-2} over the full binomial distribution,
Lut considerably more complicated functions if we take
into account.
p
Sinee we tal'.f'
d>
t.~lC
to be relatively small the
truncation
~nurr.0rici\l
bet",ecn the two for any particular estitr.·"te ,.ill also be sma]], a,1d
-have opted to scrutinise the behaviour of the simpler estimates.
difference
:;0 \,1e
The
moments of the Vi and s~ though are tnkc,n over the truncated distribution lIseo
previollsly since otherwise ,,'e have the same definitional problems as
A
before.
Very little effort shows that VI
substitute d>p for ¢.
C
A
_$(1+d»
.._.._,-..
n
2
and
Slightly more algebra produces s~
-20-
Clearly V2< VI (when k
t
~
0) anrl VI
C
S~
unless k is very small.
Note
S2
that
T
'~.--
"'- 4 when k = (n-2). Because of these relationshipr. the
VI
results in Table 9 'Jill come as no surprise. VI tends to over-estir~atc
A
both V(O and V($ ) 'Jhilst
p
estimate V(<p).
V~
tends to over-estimate V(. ) but under-
'r
¥-
s~ on the other hand is badly biased up,<ard for all of
the second order moments unless n is InrRc (due to its behaviour for
large k presumably).
Furthermore the stand.ard error of s~ is often
considerably larger than that of VI and frequently several times the
In this binomial problem with the
size of the standard error of V2
choice of these three estimates we would use V2 to estimate V(.p)'
VI to estimate V($) and discard
si
altogether.
Finally, for this example, we have some coverage probabilities and
expected lengths of the approximate confidence ir.tervals referred to
previously as methods I, II and III.
We could develop exact confidence
intervals here of course but this is not the point of
~he
exercise.
o but we feel that this is
The length of each interval is zero when :,
inappropriate for an interval based em the nor.mal distribution (or t) and
so we have also conditioned out this point.
All three methods
usu~11y
gj,ve acceptable overall coverare levels when n > 20, as ,",'e can see from
Table 10, but the split between the number of too 'hirh' and too 'lOlJ'
intervals is very bRd for ead' interval even for n = 50.
The accuracy
of coverage for method II seems to worsen more rapidly than for the other
t,w interval!; as we raise the value of ¢.
This is a pi ty because its
expected length for the same coverage probability is usually shorter than
that of method I and often considerably shorter than that of method III
(c.f. Table 9, also).
This concerns the estimation of the parameter h in a power
series distribution'which has frequency function
-x
x
f (x) = _L.._A_.
(l-e -X)x!
for x = I, 2. 3, ...
(22)
-2]where>"
>
O.
The maximum likelihood estimate of >.., >.., is found from
the equation
-
x (1 - e
->..
)
- >..
o
(23)
and since x is sufficient for>.. the same is true of
>...
There is an
extensive literature on this type of estimation problem and a considerable
amount of work has been done on this special case of estimating A when the
zero count in a Poisson distribution is ruled out.
The minimum variance
unbiased estimate of . A, >.. (which is a multiple of x), has been explored
by Tate and Goen (1958).
The multiplier for
x basically
stirlings numbers which means that it is not simple;
involves
however Tate and Goen
provide tables for it and a recurrence relationship for it has recently
been given by Ahuja and Enneking (1972).
Ad hoc estimates of >.. have been
given by David and Johnson (1952) and P1ackett (1953) among others. The
'" rn
best of these is P1ackett's unbiased estimator >.. * ... r. --~ where n is the
r=2 n
r
nunmer of elements in
th~
sample taking the value r.
This estimate has
exact variance [>.. + >..2 (e>" - 1)-lJ In, and has been shown to be about 97'7.
efficient against the Cramer-Rao bound when>" = 0.5, reducing to a minimum
of 95% or so at >.. = 1.355 and then increasing to 100% as >..
1
Goen also show in their paper that nI
usual Cramer-Rao bound.
<
V(>..)
<
~
"'.
Tate and
1.
V(>.. *) where nI
15 the
In practise ther.e is evidently a strong case
for using>.. * if zero bias is required, even though it is not sufficient
for
A.
However it is possible that the maximum likelihood estimate>" will
I
nI
have variance or mean squared error below 1
and that jackknifing will
reduce it still further to give worthwhile savings over>.. and>.. * , and
this is one of our reasons for looking at the problem.
We have computed
the usual series approximations to the lower sampling moments here and
--22-
and have also checked their accuracy via computer simulations.
All of
the re~;ults for the jackknife use fixed r ... n and this is almost inv,n-iably
opti.mal.
The elements needed to compute the series approxi.mations are
easily derived, but many are quite lengthy in appearance.
(l + 3)" + )..2)
(1. I, 1) = .- •. -d----~-f-·-)..2(1 -
e
+
)
r -)..
For t'xmTIrle
-,
.3e (1+>,)
(3+),,)..
••-_.--.....
_----- .-_.__
.... ---)..
).. (1 - e
)3
.
and
2 ·__....
->, 3
).. 2 (1 - e ) .
+ _h.
4(l + )..)
--'---::~--2
)..3(1 - e
)
(24)
and so to save space will not f,ive the moment formulnc in detail.
Before tu;:ning to the numerical resul ts. ho,,,ever, there are one or
t\"O minor points
th<~t
\"e ought to make about theeB estimates.
= 1 for all i then).. '" 0
note that if
Y..
1
of
calcuJ...tt~_ng
)...
When
which is an 'illegal' estimate
the mClTnents of ), 've have included this f:ample
point to avoid c:omplicr.tions.
val~es
Firstly ,,,e
It is also possible for >'p t::> take negative
os well aG zero (thoufh the probability of this will usually be
very small indeed) and thi s has been treated in 11 Si.t3iJ ar \.,ay.
Finally we
note that S~ = 0 if the means c,f all the sub-groups (t-lhen jackknifing)
are the same, and
confidence intervals of zero length have been excluded
in the simulations.
Some series bi.as results for n • 5 are set out in Table 11.
Evidently
the fi rst order term for).. is dominant leaving the jackknife virtually
unbiased, but the overall level of bias in ).. is small (less than 5% of value of )..)
-23-
even when n '" 5.
So although jackknifing reduces the biAS by 95% or so,
the total saving is scarcely worth the effort, and as n increnses the
Being so very small the bias in l
bias rapidly npproaches zero.
is
p
difficult to establish by simulation, but the results that we have tend
to support the series level for it.
Similar remarks apply to A.
A
A
Some calculations for the sC'cond order moments of land
n .. 5 are gIven in Table 12 and they arc typical.
.
).p
when
The results are
disappointing, principally because of the dominance of the first onll'r
term.
The mean IH]Uared error. of l p is always above the Cremer-Rao
bound and exceeds both the variance nnd mean squared error or l for
small values of ),.
By tl.e time
on bot.h the bias and variance of
E
~~ -A)2
E
(~_),)2
From then on ...~.i':.....}?__..
se
;:. 0.99
~
• 2, however,
af;
~ increases.
,~hen ~
second order terms in V (~) and E ().-~)
sa
se
overall level::: below the Crmner bound.
A* is some 13% at
~
p
is cutting down
l, thoueh by very little in Absolute terms.
the behaviour of l is worth commerltinf: on
over l or
~
It does secm to us that
is very small.
are negative, so takinp, tIll,
The efficiency improvement in l
= 0.1 when
11
5 and may be worth the
effort (perhaps it is \-lOrth seen"ching for the minimum
estimate in general!:).
HCJ"e the
mCl!D
square.d error
Simulated results that we have [or these second
order moments suggest tho'lt the seriE:s values nlay be a frac::ion
In view of the size of the second order moments of
~
and
Im~.
l
p
it may
be argued that there is no longer any point in continuing our study of
this problem.
1!00vever, as we said at the beginning our main objective is
to compare land l , not to contrast them with other €:stimates, so we
p
will cOr.Jment briefly on some
other features of our two estimates.
Series
calculations indi cate that in the smaller rdzC!<1 samples the third moment
-24-
of A about A is almost always at least double the size of that of A
p
and often much more.
Simulations for this third order moment which
we believe to be quite accurate show that E(A
p
- A)3
>
E(~ - A)3 also,
but indicate that the gap is nowhere near as wide as the series
calculations suggest (usually less than double).
Standardising this
moment by dividing by the appropriate power of the mean squared error
leaves these conclusions virtually unchanged.
It may be worth noting that
these third order moments are very much more in line when taken about the
. respective means of the estimates.
to order n
-2
Series for the fourth order moments
are identical of course.
The simulations that we have run
here suggest that the leading term in the series is a very good approximation
to the true value of the
3
momen~.
Standardising them (using
J typically) shows that judging by the size of the fourth
cumulant: the distril>ution of A converges much faster to normality than
p
does that of A and likewise if the moments are taken about the means of
the estimates, but we must remember that for third order moments we came
to the opposite conclusion.
We have also run out a few simulations on the distributions of Vl.
V2
<
and s~.
E(V 2)
<
The tentative conclusion from these seems to he that E(Vl)
E(Si) and that all three are biased do"~wards ~enerally though
not usually by much.
Also, although
si
has the least bias, as usual it has
the largest standard error, it frequently being ten times larger than that
of Vl or V2 when n is small.
This makes the mean squared error of S2
T
several times the size of that of Vl or V2 as a rule.
Finally a brief look at the approximate confidence intervals.
Overall coverage probabilities are generally quite accurate for all three
methods, though method I does seem to be slightly inferior to the other
two most of the time.
For example with A .. 2 and n .. 10 coverages for
-25method I are 82.4% (nominal 80%),
~O.SZ
(90%) and 92.00% (95%).
Method II on the other. hand gives 79.80% (80%), 89.841' (90%) and
94.10% (95%) and method III 78.16% (80%), 89.96% (90%) nnd 94.46% (95%).
~vh(m
The t<1il split is less satisfactory, hOlv€ver, and
n ., 30 and A '" 2
with nominal confidence level 90%, method I splits the too low and too
high intervals as (7.40%, 3.72%), method n i.s (7.40%, 3.927.) and
~
method III as (6.34%,4.00%).
Because E(S.P > E(V2) > E(Vl) the
expected length of intervals for method III is almost certain to be
larger than for the other two for a
Hx~rl
conficlcnce leve 1.
wi th A ., 2,
n = 10 and a nominal confidence level of 90% \·w f.ound the expected lengths
for the three methods to be 1.6303 (I), 1.6360 (II) and 1. 7871 (III).
Raising n to 30 produced 0.9495 (1), 0.9505 (II) and 0.9761 (III).
Host of this \vork
1I.'ilS
carded out 1Ilhilst
the Biostatistics Department,
Cha~el
Hill,
~'!e
~orth
wert'; visitor!> at
Carolina.
We would
like to express our tharlks here for support from Institute of Genern1
Hedica1
Sci(~ncef:
Grant G.H.-12868 ',hilst: we \-lCrc there.
II" addition
one of us (J.G. Fryer) would like to thank the Fels Research Ingtitllte,
Yellow Springs, Ohio for support during the sumffier of 1972.
his
~vork
was ·carried out whilst
~e ~vas
a \'initing
Fdlm~
Part of
at that Institute.
Ahuja, J.e., and Enneking, E.A. (1972)
Recurrence relation for minimum
variance unbiased estimator of a parameter of R left-truncated
Brillinger. D.R. (1964).
The asymptotic behaviour of
Tu~:ey's
general
method of setting approximate confidence limits (tIH' jackknife)
when applied to maximum likelihood estimates.
Rev. lost. Int. Statist.,
32," 202-6
.... -- ..
""""""'
...~-
---._--_._--~-----.---._._--
Brillinger, n.R. (1966)
-
-
The application of the jackknife to the
David, F. N., and Johnson, N. L. (1952)
Fryer, J.G., and
Ro~ertson,
C.A. (1972)
The truncated poisson.
A comparison of some methods
for estimating mixed normal distributions.
Plackett, R.L. (1953)
.~.i_~~!LtED~a,
12.,
639-48
The truncated poisson distrihution.
Tate, R.F., {md Goen, R.L. (1958)
Hinimum variance unhiased estimation
~y~~c}i~"'!
(i)
(ii)
(iii)
(iv)
(v)
(vi)
(vii)
(viii)
(ix)
(x)
(xi)
(xii)
E(A2) = nI
E(AB) = n(l1, 1)
E(A3) .. nO, 1, 1)
E(A 2B) .. n [(11, 1, 1) + 1 2]
[(11,
E(Al\2) .. n
11, 1) + 2I(U, l)J
E(A 2 C) .. n [(111, 1, 1) - IJ]
E(A~) = n(l,
I, I, 1)
+ 3n(n-1)I2
E(A3B) =n .[(11, 1, I, 1) + 1(1, I, 1)]
E(A3C)" n [(111, 1, 1, 1) - J(I, 1,
E(A2B2) .. n
I»)
+ 3n(n-l)I(II, 1)
+ 3n(0··I) 1(111, 1)
11, 1, 1) + 21(11, 1, 1) + 13J + n(n~i) [1(11, 11) + 2(11, 1)2- 1 3]
[ell,
+ 3n(n··I) 1(1111, 1)
E(A3D) = n [(1111, 1, 1, 1) - K(l, I, 1)]
E(AB3) .. n [(11, 11, 11, 1) + 11(11, 11, l) + 312(i1, 1)J
+ 30(n-l) [(11, ll)(l1, 1) - 1 2 (11, I)J
(xiii)
E(A 2nC)
n [(111, 11, 1, 1) + 1(111, 1, 1) .. J(ll, 1, 1) - 12JJ
=0
+ n(n-1) [2(11, 1)(111, 1) + 1(111, 11) ... 12.1J
These formulae ~ce easily established.
E(A2BC) = E
{C1:
i
a?
+
l:
i,j
ifj
1
a. a.] [1:
b. c. +
j
J
1
Co~sider, for exa~plp.,
1
1
Because the variables a., b. and ..-:. have zero mE-ans \ve
1
E(A2BG) .. E
[1:•
1
.. E
a?
~
b. c. +
1
1
[~ [~_!-:.:-~
i
1
r
2
[a 10£
ae 2
1
l:
a? b. c. + 2
1
i,j
i+j
~.i_
+
J
I]
J
1:
i,j
ifj
[.a310g_~i
ao 3
Clln
write t:Ji" as
.J
a. b. a. c J.
1
1
J
- Jl
)
which reduces to
n [<111, 11, 1, 1) + 1(111, .1, 1) - J(ll, 1, 1) - 1 2JJ
+ n(n-1) I
[(111, 11) + IJ ] + 2n(n-l) (11, 1) (Ill, 1)
Appendix 2
----...-,.
(i)
(ii)
(iii)
(iv)
E(I:A.2) ... nl
j J
E (A EA. B.) .. n [(11. 1. 1) + 1 2]
j J J
R(A.I: A.2) .. n(l. 1. 1)
j J
E(AB.I:A.B ) .. E(EA.B.)2 .. n [(11. 11., 1.1) + 21(11, 1, 1) + (s-l)I(11.. 11)
j J J
j J j
+ 2(s-1) (11, 1)2 .. (s-2)1 3J
+ n(n-s)(ll, 1)2
(v)
E(A.B.I:A.2) .. E(A 2 EA.B.) ... E(EA¥.I:A.B.)
,
j J
j J J
'j J j J J
.. n [(11, 1, 1; 1) +1(1, I, 1) + 3(6-1) 1(11, I)]
+ n(n-s) 1(11, 1)
(vi)
E(A.r.A~)" E(EA~)2
j J
j J
... n [(1, 1, 1. 1) + 3(s-1)1 2]' + n(n-s)I2
(vii)
[0 i,ll, I,
E(A2.EB~) ... n
j J
1)+21(11, 1, 1) + (5-1)1(11, 11)
+ 2(5-1) (11, 1)2 - (s-2)1 3]
, + n(n-s) l.'-1(11 , 11) - 1 3],
(viii)
(ix)
(::)
(xi)
E(A.I:A.B~) ... n [(11,
• J J
J
11, 1, 1) + 21(11, I, 1)
+ (s-1)1(11, 11) + 2(s-1) (11,'1)2 -
1
(s-2)1 3
E(AEA?) .. n [(1, I, I, 1) + 3(5-1) 1 2]
j J
E(AI:A.2B.) ..
j J J
T1
[(11, 1, 1, 1) + 1(1, 1. 1) + 3(8-1) 1(11, 1)1
E(A 2 rA.C.) ... E(AC r.A~)
j J J
j J
... n [(111. I, 1. 1) - J(1. 1. 1) + 3(s-1) 1(111, 1)J
+ n(n-s) 1(111. 1)
(xii)
E(A E A. 2C.) ... n [(111. 1,1, 1) - JO.1. 1)+ 3(s-1) 1(111,
j J J
1)]
Again. only simple algebra is needed to evaluate these moments.
E(ABEA.2)
j J
= rE(A. 3B.)
J
J,'
For instance
+ dr-1) E(A.2)E(A. B.)J
J J
and terms like E(A.3B.) are available from Appendix 1 wi.th n replaced by s.
J J
e
e
e
~
Table 1
Some Numerical Comparisons of the Series Representations for the First Two Moments
Moment
A
E (a -a) to
se
-1
order n
Sample
Value
of e size (n)
A
A
Ese(a - e) to
-2
order n
E (a - a) to
se p
-2
order n
Var
se
& Var
A
(a)
(6 -
A
A
(a p )
se
-1
to order n
se
_?
order n -
E
aho
se
-2
order n
E (a - a)2 to
se p
-2
order n
Var
(8) to
0
2
6
10
20
0.1200
0.0720
0.0360
0.1298(0.1247) -0.0117(-0.0207)
0.0755(0.0745) -0.0039 (-0.0048)
0.0369(0.0377) -0.0009(+0.0001)
0.1333
0.0800
0.0400
0.1867(0.1997)
0.0992(0.1022)
0.0448(0.0449)
0.2011(0.2153)
0.1044 (0 .1077)
0.0461(0.04.63)
0.1679 (0.1822)
0.0915(0.0938)
0.0427(0.0427)
50
0.0144
0.0145 (0 .OUa) -0.0001(+0.0002)
0.0160
0,0168(0.0166) O.Ol?(j (0.0168)
0,0164(1),0163)
6
10
20
50
0.2912
0.1747
0.0874
0.0349
0.3150(0.3248)
0.1833(0.1884)
0.0895(0.0869)
0.0353(0.0386)
-0.0286(-o.f267)
-0.0095(-0.0097)
-0.0023(-0.0052)
-0.0004(+0.0003)
0.9600
0.5760
0.2880
0.1152
1. 2903(1. 3993)
0.6949(0.7357)
0.3177(0.3261)
0.1200(0.1219)
1. 3751 (1.5048)
0.7255(0.7712)
0.3254 (0.3337)
0.1212(0.1234)
1.1635(1.2515)
0.6438(0.6750)
0.3041(0.3099)
0.1177(0.1195)
0.4961(0.4918)
0.2886(0.2737)
0.1409 (0.1472)
0.0555(0.0475)
-0.0455(-0.0611)
-0.0152(-0.0347)
-0.0036(+0.0032)
-0.0006(-0.0008)
2.4590
1.4754
0.7377
0.2951
3.2891(3.5421)
1. 7742(1.8231)
0.8124(0.8291)
0.3070 (0. 3073)
3.4990(3.7839)
1.8498(1.8981)
0.8313 (0.8508)
0.3101(0.3095)
2.9629(3.1597)
1.6434(L6741)
0.7775(0.7899)
0.3013(0.3016)
=roo
4
6
10
20
50
0.4582
0.2749
0.1375
0.0550
\
Note:
The figures in parentheses are the simulated values.
r is fixed and set equal to n in all cases.
As far as a
p
is concerned
Table 2
.
Values of the Third Moment and Measures of
Value
of 8
-
..-
Sample
Size
..
~
E(e
[E(6 - 8)2J%
lE(8
- 8)3
P
A
p
--_._----_ ........-.....
--J
'''''''-'-
Moment or Measure
E(e - 8)3
E(e - 8)3
E(6 - 8)3
(n)
-
p
E(6 - E(6))3
[}:(a - E(8))2J X
2
3
- 8)2} I~
•
A
E(8
- E(6 ))3
.p
.p
[E(8
3
- E(8 ))21%
p
p
-
0
6
0.1120
0.2194
0.0640
0.0967
2.1966
1.2434
10
0.0403
0.0580
0.0230
0.0292
1.6410
1.0151
0.0101
0.0124
0.0058
0.0068
1.2458
0.0016
0.0017
0.0009
0.0010
0.7847
6
1.9569
3.6324
1.1182
1.6674
10
0.7045
1.0983
0.4026
0.5663
20
0.1761
0.2249
0.1006
0.1231
20
I
50
I
II
2
50
0.0282
0.0331
. -.
7.8872
14.7205
I
!
6
I
0.4613
1.0629
0.7660
0.7701
0.4533
0.4571
-
I
1.9678
1.1910
1.3500
1.2636
1.6215
1.0212
1.0706
1.0570
1.1666
0.7135
0.7473
0.7418
0.7641
..
0.4711
..._._.
0.4454
0.4442
1.1904
1.4065
1.2958
0.9741
1.0480
1.0558
__ ._M .__ .
-_.
-._.~
4.5');0
6.6860
1.9999
-_...........
•
_-~-.-._.-
_ _ _ 0.
"--
--
0.7099
0.9355
0.4056
0.5393
1.1921
0.7682
0.7500
0.7577
50
0.1136
0.1164
0.0649
0.01';'\1
0.6760
0.381.0
0.4259
0.4283
T.~ble
3.
I
6)3 is the serie::, result to order n"
-
2
and the second gives the simulated value:
The quanti ti es in the other fou:: columns an, calculated from Lhe simulations.
In both of th(>SI'
I
..
20
-6) 3.
II
I
1.5670
(6p
I
I
1.6225
2.1100
simi-larly for E
e
0.7721
I
1.0644
2.8394
4.0975
The first entry for E(8 -
rc,m:lrks apply to
0.0161
0.0195
,
1.3937
10
4
Note:
I
I
1.5994
~lll)lE'S
r
i~ fiXNl
<:lnci s('t "quill to thC' s.,nm1C' size in (':\,.."
e
Simi la~
and
'IlSC.
e
e
e
e
Table 3
Valt!el> of the Fourth !loment and Measures of Kurtosis
iI Value
of
6
II Sa~p1e
I Sl.ze
I
.
i
!I
l
,
'
I
6
I
I,
0
10
!
20
I
I
i
;
I
~
,
!
!
1
-0
I
i
I
!
10
20
i
I
~I
,
50
;
=-==4--.
I
II
Ii
1
I
:;
:\'
15.9551
I
!
2
I
0.0048
0.0098
0.0008
0.0010
-__.._
2-• ~648
I
!
~
1
II
0.0192
0.0671
E(e p -6)
!!
--:
II 0.0533
0.2602
I 9.1721
I: 0.0192
0.0452
! 0.0048
0.0075
j
4
20
50
I
I
_
I
0.9953
3.5496
0.2488
0.5072
0.0398
0.0008
\
3.6482
-I 0.0009
'
~":.-=-:-----J
! . ~.
, 7-l8
~
I
I
!
(-
,9.1698
II
0.9953
2.3980
!
0.2683
I 0.3966
i 0.0398
:\-, l~~~~ :1:: : :
10
II 5.7844
I 4.5612
!
I
107.977(
163.5952
6.5305! 6.5305
21.4185
\ 14.9475
1.6326
3.3721
0.2612
0.3317
I 1.6326
I_ 2.6946
II 0.2612
0.3004
II
Moment or Measure
_"~__
I
~
-
[E(e-e)2J2
I i !
6
I
0.0533
0.4250
jl
I
I
I
. 'J~-eL
J
E(e-e)4
(n)
!
----
- --,
F( 8 -A.)4
P
E(~
E(e~»4
-
I
i
[E( e -9)2"JI2
p
I
rJ-
I
f
I
I
I
I
,
!
I
I
I
i
I
4.1246
-r(9 P»212
.
I
1
8.3975
8.1311
5.1085
5.2095
[
,!
i
4.1185
4.1225
i
3.4664
I
4.1292
I
3.4465
I
-ri
!
6.3697
5.3335
I
5.3246
!
/~.1740
i
I
I
1
II
,
I
3 3029
I,
I
!
I
I
5.9784
5.3144
I
I
!
II _
3.4305
I
_
4.1572
!
T----·~------j
6.7220
I
6.5551
5.3458
I
5.4503
I
4.3190
I
i
3.4335
-_....:...
i
i
!
I
I
6.2079
I
I,
3.4639
J
I
5.2636
i
i
3.4570
-r-5.8.546
I
I
p
!
I
II
'-
.
5.1412
;
i
rE(~
-
I
I
P »4
i,
;
t
P
rE(e-l':{e-;-)2]2
.
7.8373
I E(e~ -E(8~
4.2632
3.3233
II
!
I
4.3085
3.3300
I
Table 4
Expected
Vdues ..----_.
and Standard
Errors of __
Estimates
_
---------_
__ .. --._--_
_.•.
....
...
Selected Values of
e
of Vad.ance for
...-.._----~--~---~._"._.---
and n.
---~_.-.-_._--_._---------
I----------~.
Value
Sample
of
0
.--_._---- Size(n)
... ..
_
--- ---------t_
..-
o
2
0.0948
0.0971(0.0553)
0.0844
0.0849(0.0502)
0.1116
0.1173(0.1300)
20
0.0437
0.0439(0.0165)
0.0411
0.0411(0.0157)
0.0476
0.0479(0.0303) I
0.0209
0.0209(0.0055)
0.0203
0.0203(0.0053)
0.0219
0.0220(0.0096)
=::"..:;..~--=-- ~....~r--- ...::.~.:.--=-~ __.. ~:--'":.~:.%.=.;..'T""._.:: . ::~::::1~...:-.:
0.6660
0.6728(0.3500)
0.6049
0.6009(0.3180)
0.7684
0.8074(0.8280)
20
0.3105
0.3116 (0 .1070)
0.2952
0.2949 (0.1010)
0.3345
0.3396(0.2090)
40
0.1496
o •1500 (0 .0354 )
0.1458
0.1460(0.0345)
0.1554
0.1561(0.0599)
10
1.7004
1.5492
1.9567
20
0.7940
0.7562
40
0.3041
0.29,)u
J
---------------_._._---_._--Note:
0.8540
0.3133
._--------
e = 0, 2 the first entry in each cell gives the series
re_sul t to or.der n-2.
The ~econd is the simulated value
and the standard error of the estimate of vad Rn~e is eiven
in parenthesis. Only the sed es results ar.e eiven when
o = 4. We set r = n in all cases.
For
II
'r';':;::':;;'=-=---=:':"=:_-_"':-'::;:::"::':':':';"::::1
10
.
4
E(S~)
10
40
:'"-=-"==':=--
Moment
-----.-----
Table 5
Method I
Value Sample.
of e Size
(n)
~
..==.-
0
90%
95%
80%
90%
95%
80%
90%
95%
level
level
level
level
level
level
level
level
level
86.99
91.93
81.38
90.28
94.65
88.35
93.82
80.42
89.99
94.74
(15.11)
(8.93)
(5.24) (12.84)
(7.09)
(4.18)
79.47
89.26
94.16
90.04
94.83
..
_...at
91.05
95.33
20
80.35
90.32
95.05
00.47)
(5.74 )
(3.48)
90.16
95.21
20
40
Note:
;.;2;;;J2.
81.17
80.52
.•
-
..... ~~
10
lD
Nethod III
.
"80%
=-
40
2
Method II
-
-
--
. u ..~
76.53
I 78.09
- - .-..
-
.-..•
80.89
-.-..
.-
80.97
90.,)7
95.25
77.43
87.04
92.14
81.28
90.19
94.65
79.40
89.77
94.82.
78.14
87.87
93.49
80.06
89.52
94.43
(11.39)
(6.35 )
(J.68)
(15.10)
(9.38)
(5.44) (13.57)
(7.66)
(4.34 )
80.63
90.21
95.12
89.62
94.45
90.0l1
94.95
\80.15
80.78
I
The first entry in each cell gives the percentage of intervals covering
the true parameter value. The second figure given only for n = 20 is
the percentage. of intervals that fell totally below that value and so
is a guide to the sYmmetry of the technique.
Ta::>le 6
Bias in the
·-rr----
r-
,~
I
I
Value
i
of
!
4>
0.5
It
I
F.
.xac~
I
II 10
1.10.1123
I
Series to
-2
order n
0.0939
II
.J_~_,_t~~~lS6
I
I
of nand ¢
Bias in
I "'t.xac
. t
~
0.0750
I
. -0.1782
0.0900
-0.041':1
0.0375
II
0.0412
-0.0055
II
~.015~_~.0156, -0.0007
I!
I
r~-r~~~;;-~-~3;~ 0.5000 r -0.5959
I
1.0
10
\
. 20
~
0.2150
0.2000
0.2600
-0.2526
if 0.1195
0.1000
0.1150
-0.0299
Ij
I;
11 __. _ _ _ _ _ . _
'! -0.5784
2.0
10
II'
I
I
I
I
Exact Bias
.
to
l.n
-2
¢
*
I
I
-0.0500
I
-0.0419
-0.0167
I
-0.0015
-0.0039
I
-0.0000.
-0.0006
I
-0.2000
-0.0667
", I
-o.oo~
-0.2632
I
I
-0.0158
-0.0444
-0.0002
I
I
I
I
I
_._
'1\-----
I
ord~_ rL
! 0.1667
I.
¢p
Series
0.1250
'j
II
I20 "!i 0.0419
.--1
e
I
t-~n) _,;~_____ ord~~_
6
I
¢
I
Series to
-1
v~lues
for selected
-
Bias in
'
'I'
r
I
I!
s:?ple
I
u1ze II
i
·
II
estim~te~ o~ ~
I
\
II 0.1329
20
11
0 . 36 8 8
50
II
0.1343
1.0000
I
I_~~~~
0.6000
0.3000
1.8333
I
-1.5837
_
I
-1.0000
-1.0148
t
~3~~_. ~O~0176 i~O.0122
-O,~.OOOO
0.9000
-1.1133
0.3750
-0.2770
_..
e
I
I
-0.3333
-0.4355
-0.0789
-0.0287
I
__ .
e
e
e
e
Table 7
•
Variance and mean squared error of $, ¢
-r
I
Value
of
pr
Sample
Size
~
*
for selected values of $ and n.
~
Second Order Moments of
Exact
(n)
r
~
4>
i
Variance & M.S.E. of
.
4>
p- Variance and M.S.E.
Series to
-1
order n
Series to
-2
order n
I
Exact
0.1875
0.3438
0.3594
!
0.0322
0.0640
0.2~50
0.1259
0.1276
0.1688
0.1744
0.0749
0.0767
0.1250
0.1213
0.1213
0.0703
0.0717
I 0.0564
0.0592
0.0564
-
0.0595
0.0595
0.0248
0.ll250
0.0228
0.0228
0.0230
0.0230
0.0230
0.6667
1.5556
1.6667
0.0223
0.3774
0.9333
0.1673
0.2366
Series to
of
4>
order n -2
*
6
0.2783
10.2909
10
0.2116
0.2204
20
0.0752
10.0769
50
. 10.0250
0.0252
6
0.4139
r~'4130
10
0.7115
10 • 7577
0.4000
0.7200
0.7600
0.1015
0.1653
0.4889
0.3235
0.3254
20
10.3362
,0.3505
0.2000
0.2800
0.2900
0.1800
0.1809
0.2211
0.2241
0.2241
50
10.0953
0.0971
-:--
0.0800
0.0928
0.0944
0.0823
0.0823
0.0833
0.0835
0.0835
6
0.3797
0.7143
3.0000
10.0000
11.0000
0.0154
2.5235
5.4000
0.1427
1.1725
10
1.2711
1.2887
1.8000
4 • .>200
4.(;800
0.1200
1.3595
2.6000
0.4850
0.6747
20
1.8908
2.0268
0.9000
1.5300
1.6200
0.3872
0.:'639
1.0895
0.9161
0.9169
50
0.5013
0.5193
0.3600
0.4608
0.4752
0.3708
0.3711
0.3894
0.3939
0.3939
0.5
1.0
~-
2.0
Note:
and
.~
0.1125
0.0562
0.0225
-
-
-
-
-
-
I
I
-
-
-
-
-
...,
-
-
-
-
The first entry in each cell is the variance and the second the mean
squared error.
I
Table 8
Third and fourth moments of ¢ and
¢ and
___________________.__
__.--..P_
.. measures
.
.of
. . skewness ana
. kurtosis
lva1ue
san~Ple
II
<.
I
CAo
~_.
.'
If: S~~e
I
J
_<n!
I
6
I I
I
0.5
!
I
I
10
I
I
I
I
20
I
50
. 1.0
!
i
,
f=
i
I
II
20
50.
I
10
t
e
0.0396
0.0073
l
I
I
i
1.8633
2.8569
0.0150
I
1.8565
I "
0.0042
I
{E(¢ _¢)2};2
\
;
i
I
1.0445
II
-1.4223
I
1.1225
0.7215
!
I
-1.0881
0.2923
.
l
'I
.I
0.1588
!
:
1
-0.2523 i
0.5965
1.4497
6
I
I
0.5706 i
I
I
i l
I
!
i
-0.0889
I
0.~4~4
-0.9313
1.3267
0.0710
I
0.0028'
7.7239
50
0.7520
i
2.1979
i
I
; 0.3403
I
I
-1.3235
4.0220
1
\ 2.0473
I~
1
2.7497
!
I•
O.OC~~
i 0.0182
I
0.9225
;
!
;
0.0021
•.•
..__ . -.-
0.1768
.
'\
I
I
.
-1.7931 ,.
-0.2805
_L
I
0.2563
I
I
-1.5425
I
0.9068
I
1
I
2.6767
2.0093
e
i
.
!
II 8.7001
\ 4.4301
...:.--
I\
I'
1
,
0.1493
1.9866
\
!
!
I
I'
II
1.2411
"
I
2.1742
I
1.0236
,43.~:::
I
110.7080
~.8147 j 9.8284
I
I,
1.3562
I
I
I
4.5605
4.7259
!
I
4.05'37
I
2.4109
:.4630
I
5.7145
!
2.5066
I
I
5.5321
I
-0.8876.
2.8519
"
1
!
_
I'
I
116.6637
I 4.0043
"
JI
2.3053
'
I
2.5096
-1.1.312
!
!
I
i
7.0053
6.5184
\1.2806
. ..-!.
I
II
I
-1.0089
1.1337
y
I 5.2702 I
!14.6330 I
I 0.0594
II
E(;p-O>"
,'-'-A---
'I
, {E(,;.-¢)2}21 {E(¢ _¢)2}2
1
'
.
l
L~~_~~~:-82_:~L9_:.O!2~ i--~.~320
! -4.0446
---;;-----
----J
' II
:
II
.
20
i
I
P
I
0.0515
II
E(;-¢)4
II
E(q> _q»Lt
i 0.7108 ! 0.0168
0.3454
I
I
I E(q,_¢)4
I..
!
:
0.0025!
I
+-_!
I
!
:
L
2.0 I
I
-0.0230
;
I
+-I
I
I{E(<fi-¢)2}12
.
I
i I
"I
10
11
P
!---;;:------:{I
- - ' . t--==~="'---:::-::::':-"-_'::::-:::-=-';-'-=:;:'~C::::
:F==:-::---;---::-=::-;-·=-:-:t;:-=-=-7.'-~::::-:,::::·::·:::
.....
-·;·=----:-:=-::;::.:::::::::-:;--=~t
I
_!
lib
\
I•
j
L
.L
II 0.2923 I
I 0.2956 !
i
I
.
!
-A--~-
!
!
I·
Ii ElO-o>'
.~ ~---~-II--:(;-$~I-~~-~p~¢)3
-l-~-~--·-··--A-.-,I E(O -0>'
2.1508
l5.9143
e
I
Table 9
Estimates of Variance and Bean Squared Error of ¢ and
=.:..-~=-
Value
of ¢
-..--.----...----- .-.-.-.-.~:~..::==~.:::~-::=--:=======--==--=-.:;:--
If>
•
-----'--P..----A;:------ -
Noment
V(¢)
Sample I~--·---r-------___,-----__i and
Size
H.S.£. (4))
V(4'p)
and
N.S.E. (4) )
A
A
A
A
E(Vl)
E(V2)
E(Si)
10
0.2544
(0.6569)
0.1244
(0.1279)
0.3647
(1.6963)
0.2116
0.2204
0.0749
0.0767
20
0.0795
(0.0975)
0.0658
(0.0642)
0.8281
(0.1372)
0.0752
0.0769
0.056/.
0.0255
0.0241
(0.0143)
(0.0131)
:-=---:------- =--=----. ""'.'"===-"::==' 1=.:.__....
0.0256
(0.0146)
0.0250
0.0252
0.0228
0.0228
(n)
0.5
50
1.0
~
I
I
I
10
20
..__ ... 50
10
2.0
20
..
--~
•.
..
1.0958
(2.0659)
0.2717
(0.1971)
2.2167
(5.9471)
0.7115
0.7577
0.3639
0.2368
0.4552
(2.3209)
0.3362
0.3505
0.1005
(0.0731)
0.0953
0.0971
1::::::1 ~:::::)
==~~~.~_~~:~. . _~l __~~~_~:.~~~ -=J.
-
- - --r--
0.0564
I
0.1015
0.1653
I
0.1800
0.1809
0.0823
I 0.0823
3.2428
(3.6689)
0.3703
(0.2322)
7.9764
(11.2779)
2.4842
(5.7664)
0.7807
(0.6078)
5.0032
(18.6861)
1.8908
2.0268
0.3872
0.4639
0.5145
(0.6498)
0.4171
(0.3822)
0.,5701
(1.1518)
0.5013
0.5193
0.3711
____J_5.~ _
Note:
p
--------+----.------/-------.-.;:..---+-----+------t
•
.._
•..•
I
1.2711
1.2887
r,.3708
_ __ .________ _
The first entry for the last two columns is the variance of the
estimat.:.. and the second the mean squared error.
The figure in
A
parenthesis for the E(V ) and E(Si) columns is the standard error
i
of the estimates of variance.
I
.__ .J.
Table 10
~---
Value
of
,...._---.,----_._-------,-- ._--Sam le
Method I
p
_._---_.
-Size
80%
90%
957(n)
level level level
!----+---+------..,-------- .
0.5
Nethod III
-9'5'%"'-r3"6% -90"%80%
90%
-----
----~._._._.
1'letho dII
level
level
.
level
95% -level
7 91.17
71. 31
28.69
3
8.83
5---_._1.25
1.21
91.17
8.83
1.60
91.17
8.83
1.98
84.89
93'9~
leve 1
level
_w...-...._ .... .._.
3
84.87
15.13
0.93
-"~-'-----'.~--
1.0
20
50
--
84.87
15.13
15.13
._-
6.02
I
78.61
17.15
0.40
89.16
10.35
0.52
94.27
5.7LJ
0.62
78.26
21. 74
.• 1.90
82.71
17.29
2.82
82.71
17.29
3.73
94.57
5.43
4.60
86.82
13.18
1. 74
86.84
13.16
1.46
86.84
13.16
1.90
94.23 I
5.77 II
2.30 Ii
94.04
5.95
1.12
82.25
16.11
0.79
.;,-
89.85
10.13
1.01
94.05
5.95
1. 22
91.46
8.54
5.04
91.46
8.54
6.01
54.44
45.56
3
2.24
76.21
23.79
6.06
91.46
8.54
8.04
91.46
8.54
80.88
19.12
3.18
90.78
9.22
I•• 08
90.78
9.22
4.86
89.35
10.65
3.21
80.88
19.12
4.01
90.78
9.22
5.22
9l;.78
9.22
6.32
84.56
12.59
1. 70
92.27
7.56
2.] 8
92.44
7.56
2.60
92.44
7.56
2.37
80.36
19.64
1. 79
87.41
12.59
2.31
92.44
7.56
2.77
----
J
_Q.:~~_~9
__
1.09
76.21
23.79
3.93
---Note:
__
94.21
5.70
0.59
==~~.:....-=.:-~
10
-
._..... -.-
I
I
II
9~
_.-
The first entry in each cell is the actual probability of the interval
covering the true. value of <t> anel the second is the probahi li ty that it
will fall completely below ¢. The third entry is the expected length
of the interval.
Tabl e 11
Bias in A'and A with
P
_.~.•-._---_._-
-_·_···--I--~---~--_·_--"---------
Value
A
of
ir'--
To order n-
~-+!
I
r
Bi_~-rS
i~~.
1
uu
To order n-
2
_.. .--.,.....---- .•--~
-;'-·-~f
"-
of bias in .l..
------r-.... _ - - - - _
...~ _ _-
0.5
-0.0239
-0.0223
-0.0020
9.1
1.0
-0.0345
-0.0336
-0.0012
3.5
1.5
-0.0374
-0.0373
-0.0001
0.2
2.0
-0.0359
-0.0365
0.0007
2.0
2.5
-0.0322
-0.0332
0.0012
3.6
3.0
-0.0276
-0.028R
0.0014
4.9
3.5
-0.0229
-0.0241
0.0015
6.2
4.0
-0.0184
-0.0196
0.0015
--.------_._.-
m
I
n ~ 5
..---.._'
. ---- ..-.-.-..... ..._ . Bias in A as'
Bias in A
p
p
a percentage
to order p-2
un
on
•
------_.-.------
J
7.5 __
I
Table
- -12
-
.
Second Order Moments of A and Ap for Selected Values of A with n - 5
A
Value
of A
-
A
V (A) and E (A-A)2 to various orders
se
se
A
Vse(A) and
-2
2
V (~)=E(~ -A)2
se p
p
to order n-2
A
E
V~e(\)
~e
Vse(Aj
(A -A)2
p
1
.
~
• se 6-A)2
E (A-A)2
se
-2
to n
E (A-A)2 to n-1
se
V (A) to n
se.
I 0.1
0.0387
0.0343
0.0343
0.0388
1.132
1.131
1.128
0.3
0.1091
0.1001
0.1004
0.1098
1.097
1.094
1.087
0.5
0.1716
0.1617
0.1623
0.1731
1.034
1.032
1.057
1.0
0.3024
0.2976
0.2988
0.3054
1.013
1.011
1.5
0.4095
0.4118
0.4132
0.4130
1.001
1.000
0.991
2.0
0.5035
0.5117
0.5130
0.5067
0.995
0.994
0.981
2.5
0.5911
0.6034
0.6044
0.5937
0.992
0.991
0.978
3.0
0.6765
0.6909
0.6917
0.6784
0.9:)1
0.990
0.978
3.5
0.7619
0.7772
0.7777
0.7632
0.991
0.991
0.980
4.0
0.8487
0.8638
0.8641
0.8495
0.992
0.991
0.982
A
A
E (A-A)2 to nse
to n
-2
-
I
e
.
e
,
1.012
e
1
I
I
Figure 1
Bias in the three estimates ~t ¢p and ~*
Bias
0.8
Series to
order n.- 1 for ~
0.6
0.4
0.2
10
20
0.0 t---t+-;......-~~::==-
-0.2
-0.4
-0.6
-0.8
-1.0
30
40
50
...._--=~===-~
Sample
Size
Figure 2
The variance of¢, ¢p and ¢*
Variance
2.0
1.6
1.2
..
0.8
A
¢
<p*
0.4
10
20
30
40
50
Sample
Size
Figure 3
Mean squared error of the three estimates
Mean
squared
error
2.4
2.0
1.6
1.2
0.8
0.4
Sampl e
Size
© Copyright 2026 Paperzz