1 111e work of this author was supported by the Air Force Office of Scientific
Research under grant number AFOSR-75-2840. Tile developnent of many of the
computer routines Has also supported by AFOSR-7S-2840.
AMONTE CARLO STUDY OF ROBUST ESTIMATORS OF LOCATION
by
Hayr'1ond J. Carro 11
and
. wardJ . '.'f
t ,egman1
Eel:
Department of Statistics
Univers.ity of North Carolina at Chapel Hill
Instivlte of Statistics ~tiY,~o Series #1040
November 197.5
j
ABSTRACT
Andrews et al (1972) carried out an extensive Monte Carlo study of robust
estiLlators of location.
esti~tes,
Their conclusions were that the hampel and the skipped
as classes, seemed to be preferable to some of the other currently
fashionable estimators.
The present study extends this work to include estima-
tors not previously examined.
1ne estimators are compared over short-tailed as
well as long-tailed alternatives and also over some dependent data generated by
first-order autoregressive schemes.
threefold.
The conclusions of the present study are
First, from our limited study, none of the so-called robust esti-
mators are very robust over short-tailed situations.
necessary in this situation.
Second~
tbre work seems to be
none of the estimators perfonn very well
in dependent data situations, particularly when the correlation is large and
positive.
This seems to be a rather pressing problem.
Finally, for long-
tailed alternatives, the hampel estimators and Hogg-type adaptive versions of
the hampels are the strongest classes.
The adaptive hampels neither tmifonnIy
outperform nor are they outperformed by the hampels.
However, the superiority
in tenns of maximum relative efficiency goes to the adaptive hampels.
the adaptive hampels, tmder their worst
performance~
That is
are superior to the usual
hampels J tmder their lvorst perfonnance.
Key words and phrases:
M-estimators ~ hampel estimators, trirrnned means, onestep I~-estinmtors, adaptive estimation, rallie estimators;
robustness. hbnte Carlo simulation, sensitivity curves,
influence curves, relative efficiency, small sample,
long-tails, short-tails, time series, first-order autoregressive, dependent data.
A.M.S. Classification Numbers:
Primary 62G3S; Secondary 62COS, 62FlO, 62GOS.
1.
Introduction.
In 1972, Andrews, Bickel, Hanpel, I-fuber 1 Rogers and Tukey
published a very extensive and infomative r',bnte Carlo study of robust estirr;ation of location in a sylIunetric probability density,
This study involved some
68 estimators of location as well as 14 distinct smnpling distributions.
sizes used were
51 10~ 20
and 40
tions only the sample size of
althouVl for nost of the sampling situa-
~ms
20
Sample
investigated,
For the estiTIlators and
sampling situ..1.tions exarnined, Andrews et a1 (1972) provide a very complete and
satisfactory picture.
We are motivated 1 hO\'Jever, to supplement this study for three basic
reasons.
In A"'ldrews et <11 (1972)
Robustness Study),
~
l1ereafter referred to as PRS (Prinoeton
short-tailed sampling situations are ruled out of considera-
tion with the statement, HRobustness for short-tailed distributions was thoueht
to be a rather special case, arising in situations that are usually rather
easily recor:nized in practice, n l
Since some of the location estimators "lere
designed to protect against short-tailed alternatives as well as long-tailed
possibilities~
'ie feel that examination of long-tailed alternatives alone
faults such estinators unfairly, rather akin to discovering a two-sided test is
not most powerful against one-sided alternatives.
Even ignoring this
aspect~
the question rer:ains 1 h1'1hat are desirable estimators given short - tailed alternatives?\;
A second rlotivation 'ITas provided by the rather disna.l situation in the
time series
context.
The
sampJ.~
ncan is routinely shovm to be a consistent
estimator of the i'icenterl i of a stcti_onary ti':1e series with the clear implication
1
PRS, p. 67.
2
that a time series is centered by subtracting the sample mea.'1.
mean is unsatisfactory for independent, but non-nonilal
data~
If the sample
how much worse
must it be for correlated data?
Our third notivation arises from a personal conviction that adaptive estimators, properly formulated} ought to be very successful.
In particular, we
observe that because of the contributions of Professors Hampel and I-fuber to the
PRS <;
the so-called hampel and closely related t'f..·estir.1ators were extensively
studied.
At least 2S of the 68 estii1J.ators studied involve either a hampel or
a M-estioator.
The !i-estimators and particularly the harrrpels
perform very
well both because they are innately good estimators and because they lV'ere finettmed..
111e adaptive estimators? however, were not similarly fine tuned and do
not fare as well in the final analysis.
To state our conviction succintly, if
harfpels are good, adaptive harapels should be better.
The paper is divided into six parts.
Section 2 lists the estirnators
studied in this paper and gives a short discussion of each.
Section 3 discusse:::
the details of the Ubnte Carlo aspects of this work while section 4 contains
the tables of results.
selection of the
Section 5 presents stylized se:nsitivity curves for a
estli~tors
and finally sectiml 6 contains our reactions to and
conclusions about the results.
2. The Estimators.
In choosinG which esti..nators to examine in this study? we
were guided by the results of the PRS.
As sta....1.dards of corIparison J we chose
l',1-estimators and the related hampels and the trimr.'led meai'iS.
Against these
standards, we measured various fonns of adaptive estimators, one-step rank
3
estinators and several miscellaneous estimators - including the Hodges-Lehmann,
Johns'
>
Gastwirth' s > Andrews i and an estimator based on skipping.
We follow
the routine established in the PRS by listine the estimators together Witll
their rmemonic codes in the
mators follm'ls the table.
follm~ing
table.
A short description of the esti-
He note here that the codes described beloH agree
with those in the PRS when an estimator is CODfilon to both works.
NUlnber
1
2
'Z.
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
22
23
24
25
26
27
28
29
30
Code
5%
10%
19%
25%
38%
50%
Short Descriptio~
5% symmetrically trimmed mellil
10% symmetrically trirra:J.ed mean
18.75% symmetrically trlimned mean
25% syrllnetrically trir.!raed mean (midr,lean)
37.5% synnnetrically triE1I!led mean
50% symnetrica11y trimmed Llean (median)
M meal}
OM Outer mean, mean of trimr:1ines after 25% symmetrical trirmling
DI0
PIS
One-step Hubcr, k=1.0, start = median
One-step Huber, k=1.5~ start = median
D20 One-step Huber, k=2.0, start = nedian
12A IJ-estimate, I/J bends at 1.2, 3.5, 8.0
17A II-estimate, I/J bends at 1. 7, 3.4, 8.5
21A I.i-estlinate, I/J bends at 2.1;. 4.0> G. 2
22A hI-estimate, 1V bends at 2.2, 3,7, 5.9
25A l',I-estinate)l I/J bends at 2.5, 4,5, 9.5
pnA Adaptive M-estiwate, ¢ bends at ADA, 4.5, 8.0
HGI Hogg-type adaptor using triImned means 38%, 19%,
HG2 Rogg-type adaptor using tri!~ed means 38%, 25%~
HG3 Hogg-type adaptor using trimmed nearlS 38% ~ 10% ~
0
T.J
'
•
•
1 means 3'1!);,
25 '0,
aaal)tor
uSlng
tn.ITnTlcG.
u
HG4 .,ogg-type
HGS Hogg-type adaptor usinB trbili~ed ncans 38%, 25%
HG6 Hom.~-type aeapeo!' using trirnrned means 33%, 19%
1.311\ Hogg-type ad.aptor usi!l~~; har1pels Pw'"'A, 25A, 17A
L ·31?cf.. Iiogg-type adaptor usin::;: hanlpels 25A, 21A; 121\
v
1.90A lIogg-type adaptor using
1.95.1\ Hogg-type adalJtor using
2.00.1\ Hogg-type adaptor using
1.81J Hogg-type adaptor usL'1g
1. 90D Hogg-type adaptor usinZ
har!I'els ADA,
h2..lTI'}cIs !\DA,
hanrj,)els 21A,
[JI-estimators
N-estimators
25A, 17A
25A, 17A
1ZA
DI0, DIS;
DID, DIS,
II" OI'iI
10%
5%
19%
D20
D20
(table con 1 t. next page)
4
(conit.)
Number
Code
31
32
1.60D
§hort Description
Hogg-type adaptor using Ii-estimators DlO ~ D16 t D20
Hogg-type adaptor using l'1-estmators D10; DlS t D20
Hogg-type adaptor USD"lg ill-estimators DI0) DIS, D20
Gistwirth's estll1ator
Andrew's estimator
Hodges-Lehmrom1 estb~ator
Nornml-scores rank estiQator
One-step, norrnal-scores, raNC estll~~to~ 0=50%
One-step, normal-scores, rank estirnator, e=the adaptive estiILatc . .·
One-step~ unifonn-scores r~~~ estL~ator~ 8=50%
One-step, unifonn-scores rank estinator, 8 =the adaptive estirr.a'
Johnis adaptive estimator
I'~..lltiply-skipped. mean, max(5k,2)~.6N deleted
1.95D
33
34
35
1.75D
36
37
38
39
H/L
GAS
AMf
RN
. RNSO
RNA
41
RUSO
RUA
42
Jf)A
43
5T4
40
TABL.E 2.1.
A brief description of the 43
l1'inemonic code for each.
estL~tors
of location together with a short
Trirmned 11eans
It is well-l<nmffi that the usual arithmetic me'OL"l is non-robust in the presence of extrene outliers.
A simple scheme for robustif'ling the mean is to
eliainate "extreme" observations.
cards the
[(N+l)o.]
largest and
The 0.(100)%
[(l~+l)o.]
symmetric trimmed mean dis-
smallest observations from the
sample al1d computes the arithnetic average of the Terr:aining observations.
Here
is the greatest integer ftmction.
[0]
outer mean is sometimes calculated.
observations and L(a)
thOll 01,.1
=
1096~
19%,
01'ffi
25%~
50%
f:10ml
[(H+l)a]
of the
[(H+l)a]
smallest
largest
observations~
In this study 1 the OIr is used as a robust esti-
right as well as in HGI.
38%~
is the
is the moan of the
(U(.25)+L(.25))/2.
mator in its
The outer mean is the mean of the tril':mdn.c"
I'jDre precisely ~ if D(a)
with 0.=.25.
For short-tailed situations, the
TIle
and, of course, with
trim~ed-~eful
0%
triEmlL.'l21
est~tors
II.
are
Adaptive
S9<o >
5
estirl8.tors based on trimmed means include HGI ~ HG2) HG3, HG4, RGS
a.'ld HG6.
Huber IS r:-estimators
h:1-estinators of location are solutions,
where
\jJ is an odd function and s
mated independently
01'
T) of a.."'1 equation of the fonn
is a scale estimate which is either esti-
fron an equation of the fonn
(X. -T)
n
.L X-+-= O.
)=1
Here, of course,
xl? ... ,~ are the observations.
Huber (1964) proposed
\jJ
of the form
\jJ(x;k) = {-~
Ie
Hm~el
(1974) pro90sed using
x<k
-k<x<k
x>k
\jJ as above but with s
estll~ted
by
med Ixi - 50 961/ • 67 S4.
i'Ie refer to this latter quantity as the harrrpel scale estimator.
1118
Ht1ber n-estiI!latoTs exar'lined in this work are one-step estirlators (cf.
Bickel (1975)'. That is, rather than selectLLg T as the exact root of the
equation
n1 (x.-~----'1')
\jJ
i=l
we perfonn one iteration of the
as the starting value.
:: 0)
S
Ne~r,on-:~phson method
UShTJ.g 50% (the median)
Thus the estimator T is given by
6
. -.l~l1
:te. - (50%~
'I) ._1_
_
n-l~~
(X ~t _i
•
n
T = (50%)
+
s
0
e
[.i=l' ,-
s
s
[.1=1
The scale
fI&i~el
s was chosen according to
scale and also the scale
s
07a--J
tVIO
alGorithms.
= 1QR/1.180S.
Res~lts
IIe investigated the
based on
rfurr~)el
scale 'Here similar and generally sonc\'lhat superior to those based on the jJlterquartile range.
section 4.
have
12A~
a~d
2.0
17A"
21A~
respectively.
22A~
25A
ill1d
Closely relatea are the
2.20A are adaptive har.'lpe1s.
is an II-estinate with I/J
D20
which
Han~el-type
ADAe. Also I.BID, 1.90D, 1.95D
1.o0D and 1. 75D are adaptive ll-estinators and
1.95A and
DID ~ DIS ~ and
The Huber estiL1ators discussed inclt1rle
k=l.O, 1.5
estL~tors
lYe Hill rCliort o111y the results based on Hal:lpel scale in
2.00A~. 1.81J-\~
1
1. 81*A, 1.90A,
Finally ANT, the Andrews estit'.1atoT,
chosen by
tP(x) = {~:in(X/2.1)
Ixl<2.17r
othen'lise .
H
1s
.1ampe
TIle har'I;?e1 estm.ators are Huber-type II-estirilators with tP
~iven
by
O~lxl<a
t/J(x)
= sgn
a:-::;Ixl<b
x
0
b~lxl<c
Ixl~c
The parameters,
a, b al1d car;;: riven below together with the code symbol.
7
code
a
l2A
l7A
21A
22A
25A
ADA
1.2
1.7
2.1
2.2
2.5
b
3.5
3.4
4.0
3.7
4.5
AlJA 4.5
c
8.0
8.5
8.2
5.9
9.5
300
The parameter, ADA, is chosen as a function of the estinated tail length of
the distribution which is estimated by
L
=
ave
j {-~:m:n-}
Ix._(50%)I>sllxi -\.5v"6 J !
•
1
Here s is the Hampel scale estimator. The parameter ADA is defined by
l.a
L~.44
ADA = (75L-25)/8 .44<L~.6
{
2.5
.6~L
All of the hampel estimators are one-step IvI-estiraators computed by the algorithm discussed above.
Hampel estimators include
12A~
17A, 21A~ 22A, 25A a.."'1:J
ADA and adaptive Hampel-type estimators I.alA, 1.8l*A, 1.90A, I.9SA and
2000A.
Related are the Huber hi-estimators and the Andrew's estimator already
mentioned.
Hogg-Type Adaptors
Hogg (1974) has suggested an adaptation procedure which chooses among 2
or more estirJators of the center depending on the value of some statistic which
measures the length of the tail.
of tail length.
Hogg first suggested the Kurtosis as a measure
fibre recent suggestions include
_ U(.OS)-L(.OS)
Q1 - U(.50)-L(.50)
8
where
U and L are defined as in the section on trimmed means.
The exact
formulation of the various adaptors is given below.
Code
HG1
Formulation
38%
T = 19%
Iv!
OH
HGZ
HG3
ra%
T = 25%
10%
T
= r%
10%
5%
HG4
HG5
HG6
T
= ra%
25%
Ql>3.2
Z.6<Ql::;3.2
2.6<Q1::;2.6
Q1::;Z.0
Q2>1.87
1. 81<QZ::;1.87
Q2~1.81
Q2>1. 87
1. 81<Q2::;1.87
Q2~1.81
QZ>l. 80
1. 55<Qi;l. 80
19%
Q2~1.55
T = {38%
25%
QZ>Z.20
T = {38%
19%
QZ>2.20
QZ::;2.20
QZ~Z,ZO
•
Using Q as a measure of tail length, several adaptive Hampel estimators were
Z
also constructed.
o
1.81A
T
=
{
tillJ.\
1.81<QZ~lo87
Z5A
17A
Q2::;1. 81
QZ>1.87
9
1.81*A
fM
1.81<QZ:S;1.87
QZ:s;I.81
QZ>l. 37
r={:
1.90<QZ:s;2.0S
QZS1.90
QZ>Z,OS
r={:
1. 95<QZ:S;2 .10
Q2:S;1. 95
QZ>Z.10
T
= ZSA
lZA
1.90A
17A
1.95A
171.
2.00A
T
= J21A
llZA
Q,,:s;Z.OO
'"
QZ>Z.OO •
Also included in this study are Hogg-type adaptors using Huber M-estimators.
1.81D
DI0
T = DIS
{
DZO
Qz>1. 87
1. 81 <QZ:S;1. 87
QZ:S;I.81
DIO
1.90D
T
DIS
D20
QZ>2.0S
1.90<QZsZ,OS
QZ:S;1.90
1.9SD
DIO
T = DIS
{
D20
QZ>Z,lO
1. 9S<QZ:S;Z, 10
QZ:s;1.95
l,60D
DIO
T = DIS
{D20
QZ>l. 70
1.60<QZSl.70
Q2:S;1.60
1.7SD
DIO
T = D15
{
QZ>1. 75
1. 70 <QZ:S;1. 75
QZ~a, 70 •
=
{
D20
10
Rank Estimators
In what follows, let Ri(e)
denote the signed
r~~ks
of xi -8. That is,
according to magnitude (but not sign) and
we rank xl -8, xZ-8, ... ,~-8
Ri(e) is the product of sgn(X i -8) and raruz of (xi-B).
J(u) = q,-l(l;U), where lI> is some distribution function.
taken to be normal or uniform.
as the root,
Further let us define
Typically,
~ is
The rank statistic with normal scores is taken
T, of the equation
J*(T)-3
=0
where
and
n
3
= n- l L
J(i/n+l).
i=l
The symbol,
L*,
indicates the summation over all positive ranks 0 Roots were
fmmd by the method of bisection using a maximum of twelve iterations 0 If
~
is taken as uniform, the resulting rank statistic is asymptotically equivalent
to the Hodges- Lelullann estimator - namely the median of all pairwise means of
the sample (cof. Hodges-Lehmarn (1963)).
This proved to be a very close approx-
imation so that in this study we report only the Hodges-Lehmann estimator and
not the u11iform-scores rank estimator.
Just as with the Huber-ty:::,c I+estimators, one may calculate one-step rank
estimators based on one iteration of a Newton-Raphson method (Kraft and
Eeden(1969))0
becomes
Va~
In our previous notation, the one-step rank estimator, T,
11
e
is chosen as an initial robust estimator of location and B* is an estimator of B = J'(F(x))f 2(x)dx, where, of course F and f are respectively
f
the distribution and density of the sample.
In this study, we employed a nonparaJl1etric est:hnate of B proposed by
Schweder (1974)
B* =
where
x(l)~
n
I
. . 1
1,J=
. -x (J). )
J'(i/n)£-lK (X (1)
n
£
n
.•• ~x(n) are the ordered observations.
This
estin~te,
B*, is
based on the standard kernel estimator of a probability density function (c.f.
Parzen, (1962)) with uniform kernel,
K.
lve chose £20 = 1.60, although T seems relatively insensitive to the
choice of £.
tmifom on
In the study of one-step rank estimators, we chose
(-1,1), nonnal, logistic, exponential 8.'id triangular.
<p
to be
The logistic-
scores, exponential-scores and the triangular-scores estbnators are not reported
since they were typically bettered by one of the others.
e
was chosen according to two schemes.
median.
The initial estimator
The simpler choice was 8=50%, the
The more complex choice was an adaptive estimator given by
e = {21A
50%
Q2~2.25
Q2>2.25.
EstiJ11.ators based on ranks include E/L, Rl'J, RHSO, RNA, RUSO and RUA.
Other
est~tors
Johns (1974) provides an adaptive estimator which can be NTitten as a
linear combination of two trinnned means
12
where
1 r+s
T1 = IS . I 1 [x(j) +x Cn - j +l )]
J=r+
and
T2 --
with cl
and
C
1
n-2r-2s
n!z
[
j=r+s+l
x(j)+x(n_j+l) ]
detennined from the sample by
z
and
cI
ex:
(zs(n+"(+zs)))
1
if
1
(2r+s) 'n-2r
2 n-2r-2s
- DIDZ n-2r
e
(n-2r-Zs))
4 (S)
n-2r
- DIDZ n-2r .
2
are proportional to the lengths of the segments of the sample entering
c
z
ex:
1
11
TIle Di
in the Ti . Values of r and s for n==20 were r==l and s=4. The program to compute this estimator was adapted directly from PRS. For further
details sec Johns (1974).
Jolms i estimator is coded as
J~H.
A skipping procedure is used to iacntify and delete extreme points.
For
n=20, let
t l = hl-I.S(hZ-hl ) and t z = h2+l.5(hz-hl ). A singly skipped estimate deletes
all observatioIls smaller than t l and larger than t Z . If a single skipping
deletes a total of k points, the estLmator 5T4 deletes a further
~
points
13
from each end where
II •
(
(12k) , . 6n~k)",
iV-mlnmax,
2 .,
and the mean of the remaining sample is used as the estimator.
was fomulated in the
This estimator
PRS.
Finally, we included
all
estimator proposed by Gastwirth (1966), defined by
T = .3x
+
.4(50%)
+
([rID
3.
Monte
Ca~lo
Details.
investigated in the
PRS,
In addition to the long-tailed sampling situations
we also investigated short-tailed situations and
situations involving dependent data.
Pseudo-random numbers were generated by
the familiar multiplicative congruential generator.
tion and cautions in the use of these generators).
o
(See the
The
PRS
output~
for descrip-
scaled between
and 1, fonn the basis for all subsequent calculations siIlCe they approxi-
Inate a uniformly distributed sample.
Nonnal pseudo-random deviates were gener2
ated according to the well-known Box-Muller transfonn and Cauchy pseudo-raTldom
deviates were generated by the arctar.gent transform.
These basic three dis-
tributions, or mixtures thereof, form the basis of all results reported in
this work.
The basic sampling situations m:e summarized in Table 3.1.
2 This transfonn is widely attributed to Box fuld Kuller \<,ho published their
work in 1959. The result is ~ however, pointed out by Paley and Wiener in
their 1934 book (pages 146-147).
14
Table 3.1.
Sail1pling situations for independent observations. All had sample size 20.
H/U represents a variate fanned by dividing a Normal
(O~l)
variate by a
variate unifonn on the interval
Distributions
Unifom, mean 0, variance 1
97% Unifonn (0~1) + 3% normal (0,1) contamination
90% Unifona (0,1) + 10% Nonnal (0,1) contamination
50% Unifonn (0,1) + 50% Normal (0,1) contamination
Normal, mean 0, variance 1
99% Nonnal (0,1) + 1% Nonna1 (0,9) contamination
97.5% Normal (0,1) + 2.5% Normal (0,9) contamination
95% Normal (0,1) + 5% Normal (0,9) contamination
90% Normal (0,1) + 10% Normal (0,9) contamination
85% Normal (0,1) + 15% Normal (0,9) contamination
75% Norra1 (0,1) + 25% Normal (0,9) contamination
95% Norrr~l (0,1) + 5% Normal (0,100) contamination
90% No~~l (0,1) + 10% Normal (0,100) contamination
75% Normal (0~1) + 25% ~Jorrr~l (0,100) contamination
97% Nonna1 (0,1) + 3% Cauchy contamination
90% Normal (0,1) + 10% Cauchy contard.nation
50% Nom.al (0,1) + 50% Cauchy contamination
Cauchy~ median =
90% Normal (0,1) + 10% N/U
75% Normal (0,1) + 25% N/U
°
Tao
~
u
U + .03N(0)1)
U + .10N(0,l)
U + .50N(0,1)
N(O,l) = N
N + .01N(0,9)
N + .025H(0,9)
N + .05N(0,9)
N + .10N(O,9)
N + .15N(0,9)
N + .2SN(O,9)
N + .0SN(0~100)
N + .10N(0,100)
N + .2SN(0,100)
N + .03C
N + .10C
N + .SOC
C
N + .10N/U
N + .25N/U
In addition to the sets of independent observations discussed above, \ve
generated observations according to a first order autoregressive scheme,
x· =
J
px.
J-
1
+
e.,
J
j
= 1,2, ..• ,20. The shock, e., was cl10sen according to
J
either N, U or C and the correlation coefficient,
or
.9.
The ii1.itia1 value,
p, as eitller
.2, .5
xo' \Vas chosen as O. All of the 43 estimators
discussed in section 2 wer:: investigated in the 20 independent sampling situations.
Only the non-adaptive estimators were studied in the 9 time-series
sampling schemes.
15
The results we report a.re based on 1000
~.'bnte
Carlo replications.
The
sample variances of the estimators were calculated and then scaled by a factor
of 20 (the sample size) to make the results comparable to those in the PRS.
Asstmling approximate nonnality for the estimators ~ the variance of 20x sa.9ple
variance would be about
mator ~
T.
.79920r4 where aT2 is the true variance of the esti-
For exa,'nple if T = X', then
tines the sample variance is
.001998.
ai
is
in-
and the variance of 20
Thus one may expect about one decimal
place of accuracy with correspondingly less significance as the value
climbs.
ai
We report two decimal places, in most cases because all estimators
j
were calculated over the same samples. Without attempting inferences outside
these
samples, the extra deciITal place(s) are meaningful.
lye also report pseudo-variances as discussed in the PRS.
distributions~
the variance is essentially determined by a few observations in
the extreme tails.
long-tailed~
For long-tailed
Thus if the distribution of the estirr.ator ~
T? is itself
the sample variance of T would itself not be a robust estimate
of the scale of T.
As in the PRS ~ we compute the 2.5%- He point ~ square this
quantity and divide by the normalizing constant
(1.96) 2 . This pseudo-varianc(
is further rescaled by multiplying by 20 (the sample size) to yield qUffiltities
comparable to those in the PRS.
4.
The Results:
3 tables.
The main results of the Monte Carlo study are su.rm1arized in
Table 4.1 is a table of sample variances (over the 1000
~bnte
Carlo
replications) for the 20 independent samplil1g schemes and 43 esti.'11ators of
location.
Table 4.2 is a table of pseudo-variances (based on the 2.5%-ile
16
point) for the same 20 independent saIl:?ling schemes and 43 estimators.
Table 4.3 is a table of biases and variances for the 9 sampling schemes
involving first order autoregressive variates.
None of the Hogg-type adaptive
estimators are listed for the two-fold reason that the choice of Q and QZ
1
is predicated on independent observations and that the basic estimators tr:L'11fi1ed means, haupels? etc. - perform poorly. Some preliminary calculations l~"'
dicate that
Hom~-typeadaptorsperform
similarly illlder the time series alterna-
tives.
We withhold discussion of these results to section 6.
17
.--.
~
0
0::
w
~
z:
1
Z
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
22
23
24
25
26
21
·28
29
30
31
32
33
:54
3S
36
37
:58
39
40
41
42
43
~
.....
z
M
S
::::I
.....
.
~
0
0
~
z
+0
~
z
+0
+0
::::I ....
::::I'"
::::I
.....
.-.
~
.....
Z
0
-
~
0
~
0-
a
~
Z
l.Cl
+0
Z
z
N
-
-;;:;-.
.
a
z:
LO
.--.
0-
0-
a
0
z
z:
0
+ ....
l.Cl
+ ....
+0
Z
+0
:z
z:
z
1.03
1.05
1.09
1.18
1. 32
1.11
1.13
1.16
1. 23
1.36
1. 25
1.24
1.26
1.35
1.50
1.42
1.32
1.31
1.43
1.56
1.63
1. 51
1.49
1. 56
1.66
.84
1.47
1.27
1. 08
1.11
1.14
1.23
1 1 . 36
1.55
1.05
1. 25
1.16
1.07
1.49
1.04
1.31
1.15
1.06
1. 54
1. 24
1. 87
1.25
1.19
1.67
1.45
2.27
1.38
1.. 30
1.72
1.82
3.37
1.46
1.40
1.84
2.09
4.03
1.60
1. 51
1.29
1.85
1.46
1.26
1.25
1.11
1.42
1. 22
1.11
1.13
1.04
1.14
1.06
1.05
1.08
1.01
1.13
1.05
1.01
1.03
1.15
1.26
1.18
1.16
1.20
1.25
1.36
1.27
1.24
1.26
1.38
1.46
1.39
1.38
1. 39
1.49
1.59
1. 50
1.49
1.52
1.17
1.31
.80
1.37
1.Z0
1.14
1.28
.88
1.41
1.t5
1.04
1.11
.96
1.19
1.08
1.03
1.06
1.11
1.16
1.12
.98
1.02
1.06
LIZ
1.09
1.13
1.17
1.19
1. 21
1.17
1.23
1.28
1.28
1.31
1.28
1.38
1.43
1.41
1.37
1.35
1.51
1. 55
1.60
1.58
1.57
1. alA
1.81*1'.
1.77
1.82
1.50
1.10
1.11
1.72
1.80
1.49
1.20
1.21
1.78
1.83
1. 53
1.17
1.17
1.25
1.23
1.14
1.05
1.05
1.22
1.18
1.09
1.00
1.01
1.28
1.23
1.16
1.16
1.18
1.40
1. 35
1.26
1.26
1.26
1.47
1.43
1.31'
lAG
1.41
1.60
1.56
1.49
1.52
1.54
l.90A
I.9SA
2.00A
1.810
1.900
1.09
1.08
1.20
1.25
1.24
1.18
1.18
1.30
1.34
1.32
1.15
1.14
1.26
1.31
1.29
1.40
1.41
1.22
1.07
.1.08
1.05
1.04
1.11
1.14
1.12
1.03
1.03
LOS
1.04
1.04
.99
.98
1.02
1.04
1.02
1.15
1.14
1.16
1.18
1.15
1.25
1.24
1.24
1.27
1.25
1.40
1. 39
1.39
1.40
1.38
1.51
l.52
1.50
1.53
1.49
1.95D
1.60D
1. 75D
GAS
AMI'
1.23
1.49
1.34
1.90
1.22
1.04
1.10
1.08
1.28
1.12
1.02
1.12
1.08
1.19
1.04
LIS
1.24
1.22
1.25
1.16
1.25
1.34
1.32
1.37
1..25
1.39
1.46
1.45
1.40
1.32
1.50
1.56
1.56
1. 59
1.50
1.21,.
.89
1.41
1.01
2.47
1.29
1.54
1.41
1.93
1.Z7
·1.25
.97
l.4S
1.08
2.50
1.11
1.30
1.22
1.44
1.12
HlL
1.32
1. 58
1.44
1.96
1.31
-[;"28
.94
1.09
.94
1.18
1.01"
1.82 .
1.03
1.01
1.07
1.02
1.40
1.01
1.15
.9&,
1.49
1.14
1.14
1.22
1.13
1.54
1.26
1.26
1.34
1.23
1.67
1.38
l.4S
1.46
1.38
1.70
1.51
1.62
1.58
1.50
1.84
1.26
.93
1.27
1.11
.95
1.10
LOS
1.06
1.08
1.01
1.02
LOS
1.i4
1.16
1.25
1.20
1. 20-,'" 1.29
1.38
1.47
L47
1.49
1.59
1.57
5\
10\
19%
25\
38\
1.18
1.34
1.50
'1.82
2.30
1.15
1.32
1.49
1.80
2.08
1. 21
1.37
1.53
1.83
2.41
1. 02
1.13
1. 22
1.41
1.61
50\
M
Ofol
010
DI5
2.48
1.01
.61
1.95
1.55
2.43
.98
.60
2.10
1.66
2.51
LOS
.67
1.97
1.58
1.82
DZO
12A
17A
2lA
22A
1.23
1.82
1.41
1.20
1.17
1.32
1.97
1.52
1.30
1. 29
25A
1.08
1.19
.80
1.39
1.Z1
ADA
HG1
HG2
HG3
HG4
HG5
HG6
RN
RN50
RNA
RU5G
RlJA
JtlH
5T4
1.19
.88
1.22
1.4~
1.09
2.60
1.30
.88
1.28
.94
.98
VARIANCES
Table 4.1:
Variances of 43 estimators of location besed on 1000 ~'bnte <:ado replications.
,
".\
]8
-
cr::
w
co
~
CI
~
~
-
0
Z
U
Z
-z
z
+:g
z:
0
0
0
z
Lt"l
+N
.....
0
0
0
LU
-8
o
o
O'l
0
+ .....
z:
u
<"'l
+0
z
Lt"l
+N
z
z
u
+ .....
0
z:
.
0
+Lt"l
z
u
1.18
1.24
1.30
1.30
1.28
1.89
1.74
1.88
1.84
1.87
3.93
3.01
3.30
2.83
2.91
1.18
1.23
1.31
1.30
1.30
1.53
1.57
1.69
1. 66
1.66
1.28
1.26
1.18
1.13
1.15
1.37
1.31
1.24
1.21
1.24
1.83
1.79
1.89
1.73
1.69
2.78
3.01
3.68
3.26
2.84
1.35
1.31
1.25
1.19
1.22
1. 72
1.69
1.65
1.52
1.54
2.86
3.01
.2.77
2.88
3.06
1.12
1.12
1.15
1.17
LIS
1.21
1.20
1.23
1.26
1.24
1. 75
1.74
1.74
1. 78
1.80
3.30
3.31
2.98
3.16
3.31
1.20
1.20
1.21
1.26
1.24
1.52
1.!i2
1.49
1.63
1.62
1.56
1.58
1.57
1.60
1.36
3.21
2.77
2.83
2.67
2.99
1.14
1.24
1.21
1.28
1.14
1.24
1.31
1.29
1.34
1.21
1.82
1.76
1.77
1.78
1. 73
3.44
3.11
3.11
3.19
3.44
1. 24
1.31
1.28
1.35
1.19
1.62
1.67
1.65
1.72
1.52
1.22
1.28
1.31
1.17
1.57
1.65
1.95
1.76
1.48
1.94
4.04
5.93
2.90
3.33
2.56
1.14
1.12
1.21
1.13
1.55
1.24
1.28
1.32
1.23
1.59
2.01
2.46
2.05
1.96
2.06
4.57
6.76
3.41
3.84
2.83
1.25
1.31
1.32
1.20
1.61
1. 73
1.93
1.81
1.67
2.00
1.17
1.24
1.24
1. 39
1.52
1.42
3.13
4.38
3.73
1.16
1.16
1.20
1.23
1.27
1.25
1.91
1.73
1.66
1.41
3.25
3.00
1.20
1.28
1.26
1. 58
1.62
1.54
1.10
1.13
1.17
1.25
1.37
1.40
1.22
1.23
1.30
1.43
8.53
2.70
1. 96
1. 74
1.84
27.2
8.34
4.99
3.24
2.77
50%
DI5
2.08
2.96
5.80
1.86
1.87
1.58
5.45
17.3
1.32
1.23
1.94
11.5
38.1
1.59
1.54
2.56
26.0
85.3
2.68
3.12
1.55
72.9
289.
1.27
1.19
1.59
381.
.***
1.33
1.25
2.06
***
***
1.76
1.82
2.83
***
. ***
3.10
3.63
D20
12A
17A
21A
22A
1.95
1.87
1.89
1.96
2.01
1.20
1.27
1.18
1.16
1.18
1.59
1.44
1.37
1.36
1;36
3.80
2.48
2.82
3.11
2.97
1.14
1.26
1.17
1.14
1.17
1.23
1.:lJ
1.22
1.21
1.24
16
17
18
19
20
25A
ADA
HG1
HGZ
2.07
1.98
2.14
1.93
2.02
1.15
1.19
1.28
1.29
1.28
1.39
1.42
1. 74
1.66
1.66
3.60
2.67
4.26
2.80
3.00
1.11
1.15
1.16
1.20
1.16
21
22
23
24
25
HG4
HG5
HG6
1.81A
1. 81*A
1.88
1.84
1.89
1. 99
1.97
1.34
1.29
1.21
1.16
1.19
1.69
1.59
1.60
1.37
1.42
2.60
2.78
4.73
2.90
2.59
26
27
28
29
30
1.90A
1.95A
2.00A
1.81D
1.90D
Z.Ol
2.01
1.94
1.91
1.94
1.17
1.16
1.18
1.24
1.23
1.38
1.38
1.42
1.56
1.56
31
32
33
1.95D
1.600
1.75D
G4.S
AMI'
1.95
1.87
1.89
1.87
1.97
1.21
1.30
1.28
1.34
1.16
HlL
1.97
2.24
1.97
2.01
2.08
1.95
2.19
1.98
11
12
13
14
15
34
35
36
37
38
39
40
41
42
, 43
HG3
RN
RN50
RL'lA
RUSO
RUA
Jj'lH
5T4
1.67
1.58
1.51
1.50
1.51
VARIANCES
Table 4.1. (continued):
.
1.96
1.68
1.72
1.73
1.78
16.0
9.12
5.32
2.78
2.49
DlO
N
2.00
***
***
1. 70
1.64
4.35
2.16
1.61
1. 57
1.72
(J>f
+
:z:
1.61
121.
475.
1.34
1.26
·1.23
4.37
1.29
2.76
1.21
3.21
1.19
3.58.
1.21
3.46
1.68
1.22
1. 21
1.29
1.41
M
+ .....
z:
......
z:
I.t'I
3.67
1.80
1.64
1.67
1. 79
2.34
2.02
1.89
1.84
1. 90
6
7
8
9
10
0
1.60
1. 23
1.24
1. 31
1.43
5%
10\
19\
25\
38%
1
2
3
4
5
::>
::>
......
z:
u
Variances of 43 estimators of location based on 1000 Monte Carlo
replications.
19
.
~
~
0
0:::
LLJ
CO
0
z
z:
~
::E:
:::l
Z
~
0
M
0
:::>
U
+0
:::l
+~
:::l
0
z
0
+o.n
:::>
.
-0
z:
0
0
0
~
~
..z:
+0
Z
.
m
0)1
0
0
~
0-
~
Z
Z
..-
~
z
z:
U")
N
+0
0
U"")
+0
Z
+~
Z
~
0\
-0
z:
Lt')
+-
~
1.38
1.69
2.05
1.19
1.31
1.48
1.89
2.25
1.18
1.25
1.36
1.40
1.54
.99
1.02
1.00
1. 09
1.19
1.06
1.07
1.13
1. 25
1.41
1.20
1. 20
1.31
1.33
1. 53
1.19
1.12
1.16
1.32
1.30
1.5]
1.46
1.40
1.45
1.42
2.40
.94
.63
1. 94
1.56
2.36
.96
.65
2.10
1.63
2.67
1.02
.73
2.07
1.68
1.86
1. 06
.89
1.43
1.27
1.52
.98
1. 22
1.12
1.09
1.58
1.07
1.23
.92
.96
1.51
1.24
2.09
1.34
1.27
1.63
1.31
2.19
1.27
1.17
1.73
2.13
4.25
1.43
1.45
3.85
1.60
1.46
1.26
1.82
1.41
1.26
1. 22
1.36
1.82
1.51
1.28
1.28
1. 28
1.96
1.5S
1.25
1.22
1.20
1.42
1.28
1.21
1.21
1.07
1.17
1.07
1.06
1.07
.94
.94
1.01
1.2Q
1.33
1.27
1.24
1.32
1.17
·1.24
1.18
1.17
1.35
1.50
1.42
1.37
1.37
1.48
1.66
1.43
1.38
1.41
HGl
HG2
HG3
1.15
1.20
.83
1. 33
1.23
1.18
1.,30
.84
1. 35
1.12
1.12
1.46
.90
1.47
1.24
1.20
1.19
1.06
1.36
1.20
1. 04
1.05
1.00
1.04
.98
.95
.97
1.05
1.15
1.18
1.19
1.28
1.25
1.24
1.18
1.42
1.46
1.49
1.40
1.41
1.44
1.5S
J. 81
1.67
1.67
22
23
24
25
HG4
HGS
HG6
1.8lA
1. 81*A
1.82
1.83
1.55
1.18
1.15
1.67
1.68
1.38
1.20
1.20
J....84
1.89
1.48
1.12
1.12
1.40
1.40
1.36
1.19
1.20
1.14
1.09
1.00
1.06
1.07
1.35
1.Z5
1.13
.96
.96
1.17
1.25
1.26
1.32
1. 25
1.36
1.33
1.31
1.25
1.25
1.35
1. 32
1.16
1.18
1.19
1.30
1.45
1.39
1.44
1.48
1. 73
1.49
1. 38
26
27
28
29
30
1.90A
1. 95A
2.00A
1.81D
1.90D
1.15
1.15
1.26
1. 52
1.26
1.19
1.18
1.28
1. 57
1.36
1:12
1.12
1.25
1.61
1.28
1.20
1.20
1.21
1.39
1.20
1.04
1.04
1. 06
1.11
1.07
.95
.96
.94
.97
.94
1.25
1.20
1.24
1.35
1.20
1.22
1.19
1.18
1.22
1.18
1.44
1.43
1.37
1.41
1.40
1.51
1.47
1.41
1. 55
1.48
31
32
33
34
35
1.95D
1.60D
1. 75D
1.26
1.52
1.41
1.97
1.25
1.36
1.57
1.42
1.98
1.35
1.28
1.61
1.46
2.03
1.27
1.20
1.39
1.26
1.40
1.21
1.07
1.11
1.10
1.02
.93
.94
.97
.92
1.13
.99
1.20
1.35
1.29
1.33
1.24
1.17
1.22
1.19
1.35
1.20
1.43
1.41
1.40
1.48
1.41
1.49
1.55.
1.59
1.62
1.35
1.31
1.00
1.41
1.14
2.39
1. 36
1.09
1.62
1.14
2.90
1.24
1.02
1. 57
1.13
2.67
1.19
1.04
1.23
1.13
1.86
1.00
1.00
1.05
1.05
1.20
.90
.92
.91
.93
1.16
1.18
1.20
1.35
1.22
1.51
1.11
1.20
1.30
1.17
1.63
1. 37
1.49
1.47
1.36
1. 75
1. S3
1.59
1.72
1.48
2.04
1. 25
.91
1.21
1.28
.89
1.40
1.25
1. 02
1.24
1.21
1.08
1.20
1. 06
.99
1.10
.94
1.03
.98
1.24
1.34
1.23
1.18
1.14
1.19
1. 37
1.55
1.47
1.73
1
2
3
4
5
5\
10%
19%
25%
38%
1.13
1.33
1.55
1.83
2.23
1.09
6
7
8
9
10
50%
M
11
12
13
14
15
16
17
18
19
20
21
36
37
38
39
40
41
42
43
a-t
010
DIS
D20
lZA
17A
2lA
22A
25A
ADA
GAS
AMI
H/L
R~
R"l50
RNA
RU50
RUA
J0H
5T4
~
1.27
.94
.98
~.13
1.00
1.43
1.39
1.49
1. 75
2.04
2.26
1.~l
1.53
L38
1.62
PSEUDO-VARIANCES
Table 4.2.:
Pseudo-variances of 43 est~tors of location based on the 2.S%-ile
1000 Monte Carlo replications.
mld
20
-
i·
0
0
0'1
-
W
-
-
-
-~
0
0
u
M
+0
:z
u
+ .-
z
u
0
-~
:z:
z
+0
:z:
0
+ .:z:
5%
10%
19%
25%
38%
2.S5
2.28
1. 98
1.88
1.89
1.82
1.35
].31
1.36
1.48
4.62
2.34
1.91
1.59
1.82
16.2
10.5
5.60
2.35
2.09
1.24
1.33
1.41
1.36
1.45
1.59
1.25
1.31
1. 31
1.39
7.28
2.97
2.28
1. 92
1.94
27.8
9.14
5.02
3.49
2.66
1.28
1.18
1.22
1.28
1.33
3.56
1.95
1.72
1. 83
1.94
50%
1.61
6.85
24.7
1.38
1.29
2.08
]2.5
41.6
1.63
1.73
2.29
29.0
93.9
2.12
2.63
1. 56
1. 85
3.06
1.42
1.44
1.60
9.01
29.5
1.34
1.35
2.16
159.
628.
1. 98
2.28
2.59
680.
***
3.28
3.74
1.42
22.0
81.0
1.27
1.21
1.72
65.0
264.
1.90
1.84
~
W
0
0
~
z:
1
2
3
4
5
z
....
'.
.-
0
0
0::
0
0
0
0
.-
u
:z:
Ll"l
Ll"l
+N
Ln
+N
:z
0
z
+Ll"l
z
u
2;
0
+.z:
+11'
z;~
6
7
8
9
10
ON
D15
2.12
3.15
6.06
1.86
1.92
11
12
13
14
15
D20
lZA
17A
21A
2ZA
1.96
1.96
1.96
1. 92
2.11
1.31
1.31
1.30
1.27
1.30
1.66
1.48
1.46
1.39
1.44
3.43
1.82
2.22
2.47
2.32
1.30
1.38
1.33
1.21
1.17
1.30
1. 36
1.26
1.25
1.26
2.48
1.59
1.72
1.77
1.68
4.42
2.54
3.11
3.36
3.06
1.21
1.32
1.19
1.1Z
1.16
1. 78
1.85
1.80
1.89
1.80
16
17
18
19
20
2.09
1. 98
2.12
1.93
2.25
3.03
2.00
2.98
2.12
2.12
1.26
1.26
1.32
1.31
1.31
1.20
1.23
1.27
1. 25
1.26
1.85
1.77
1.99
1. 94
1. 98
3.53
2.57
3.02
2.59
2.77
1.09
1.17
1. 25
1.33
1.33
1.86
1. 87
1.88
1.98
1. 95
1. 96
1.31
1. 26
1.52
1.43
1.48
.1.48
1.36
1.31
1. 27
1.31
1.47
1.43
1.86
1.83
1.84
21
22
23
24
25
2SA
ADA
HG1
HG2
HG3
HG4
HG5
HG6
1.81A
1. 81 *A
1.82
1.66
1.82
1. 50
1.49
2.09
2.33
4.86
2.21
1.83
1.42
1.36
1.41
1.20
1.22
1.94
2.07
2.32
1.72
1.59
2.62
3.08
3.66
3.11
2.69
1. 32
1.29
1.22
1.12
1.24
1.90
1. 83.
1. 76
1.76
1.82 .
26
27
28
29
30
1.90A
1.95A
2.00A
1.81D
1.90D
2.01
1.98
'1. 94
1.82
1.97
1.31
1.31
1.22
1.38
1.34
1.46
1.41
1.48
1.72
1.72
2.07
2.14
2.00
2.13
2.48
1. 28
1.28
1.21
1.36
1.37
1. 35
1.28
1. 27
1.25
1.34
1.22
1.22
1.25
1.35
1.30
1.75
1.71
1.83
1.98
2.03
3.27
3.27
2.82
3.28
3.38
1.17
1.14
1.18
1.22
1.21
1.77
1.75
1.82
1.86
31
32
33
34
35
1.95D
1.60D
1. 75D
GAS
A.\fi'
1. 96
1.82
1.82
2.01
2.01
1.31
1.38
1.36
1.42
1.27
1.72
1.72
1.72
1.69
1.41
2.56
2.13
2.13
2.23
2.43
1.37
1.36
1.35
1.42
1.22
1. 31
1.35
1.30
1. 32
1. 24
2.28
1.98
1.98
1. 95
1.88
3.51
3.28
3.28
3.31
2.96
1.21
1.22
1.22
1.25
1.12
1.7
1. 78
1.86
36
37
38
39
40
H/L
2.05
2.40
2.17
2.20
2.12
1. 31
1. 38
1.45
1.30·
1.61
1.66
2.12
1.67
1.55
2.07
3.51
5.50
2.53
2.84
2.29
1.28
1.24
1.36
1.18
1.56
1. 28
1.28
1.25
1.24
1.60
2.41
3.06
2.21
2.31
2.16
4.64
7.07
3.38
3.79
2.59
1.19
1.27
1.23
1.15
1.42
1.89
2.01
1.96
1. 78
2.27
1.99
2.27
1. 92
1. 29
1.43
1.37
1. 38
1.57
1.48
2.47
2.46
2.59
1.21
1.31
1.22
1.28
1.24
1.25
1.99
2.03
1.92
3.36
3.15
. 2.70
1.14
1.21
1.28
1.80
1.82
"
41
42
43
~f
mo
R"l
RN50
RNA.
RUSO
RUA
J~
5T4
1.g~
1. 81
1.87
1.81
8
1.~
1.94
1.91
1.9~
PSEUDO-VARIANCES
Table 4.2. (conti.nued):
Pseudo-variances of 43 estimators of location based on the 2. 5~oi-ile
and 1000
~bnte
Carlo replications.
1
._""-~_"""'--:.-C-:-"'
..
:-~"'T---
--..,-...,--...
_ ...•
._
._---..-----
..
--- --"
''\
DISTRIBUTION
NORI1Al
NORto\AL
NORI-1AL
CAUCHY
CAUCHY
CAUCHY
UNIFORM
UNIFORM
UHIFORM
p
.2
.5
.9
.2
.5
.9
.2
.5
.9
w
z:
\..LJ
U
NUMi3ER
CODE
V1
<C
.....
co
.~8\
-.01
-.01
-.01
-.00
.00
-.no
5%
10%
19%
25%
1
2
3
"5
6
50%
7
~I
B
(1)l
9
10
DID
DIS
11
D20
12A
17A
21A
22A
2SA
12
13
14
15
16
17
35
ADA
GA..t;
AMr
36
H/L
34
RNA
39
41
42
43
Table ~:
RUA
...
JV1H
ST4
- .01
-.01
.00
.00
-.01
.00
·.01
-.01
-.01
-.01
.00
.00
-.01
-.00
-. 01
-.01
- .00
-. 01
<C
.....
a::
<C
:::-
1. 58
W
U
U
z:
~
.....
co
<C
.....
a::
<C
:::-
.02
3.61
1.63
.02
3.62
.02 3.66
1.66
1.74
.02 3.77
.02 3.92
1. 85
2.m
.02 4.18
.02 3.59
1. 57
.02 3.91
1. 79
1. 78
.02
3.82
.02 3.70
1.68
1.62
.02
3.64
1.75 " .02 3.81
.02 3.68
1.66
1.62
.02
3.64
1.64
.02 3.68
1. 59
.02 ' 3.62
1.64
.02 3.68
.01 3.80
1.80
1.62
.02 3.64
1. 61
.01
3.64
1. 59
.02 3.61
1.62
3.64
.02
1.63
.02 3.74
1.66
.02 3.70
~
.....
a:l
.03
.02
.02
.02
.02
.02
.03
.04
.02
.02
.02
.02
.02
.02
.02
.02
.02
.02
.02
.02
.02
.02
;03
.02
....a::
e:t
~
.....
~
61.5
62.0
62.6
63.6
64.3
65.3
61.0
60.1
64.0
63.0
62.2
63.9
62.9
62.4
62.6
61.9
62.4
64.1
62.5
62.2
61.9
62.4
61.1
62.3
IX)
.05
.02
.02
.00
.01
.00
.24
.49
-.01
-.01
.01
.00
.00
.00
.00
.00
.00
-.01
.00
-. 02
-.01
.00
-.00
- .01
~
~
.....
...
~
>
a:l
241.
50.0
24.0
11. 6
8.94
8.19
3742.
"Ie"
10.3
12.9
15.8
7.:n
8.69
9.41
9.08
10.7
7.95
11.1
9.01
18.8
11.8
9.40
10.3
8.00
.51
.25
.11
-.03
.02
.01
.98
1.98
.01 '
:l
a::
~
...~co
""*
-35.5
-34.3
-33.1
-31. 3
-29.9
-29.1
-36.2
-41. 2
-30.3
-31.6
7460.
2115.
172.
77 .6
65.7
""*
iIl""l
.02
.02
.02
.02
.02
.03
.03
.02
-.03
.02
92.5
124.
166.
50.9
61.8
68.0
62.5
79.6
55.6
161.
64.3
-.09
727'.
.02
.01
-.()2
.03
84.5
72.0
161.
59.8
\..LJ
U
U
ex:
<C
W
IQ
\..LJ
U
z:
-:n.o
-30.5
-32.0
-32.8
-37.8
-33.5
-33.3
-31.3
-32.8
-34.4
-32.8
-32.8
-38.5
-34.8.
~
.....
ex:
<C
~
.....
:::-
a:l
Ie"*
.00
.00
.00
· 00
"""
""lie
*""
"lie"
-.00
"**
-.00
.00
· 00
.00
Ie,,"
"*"
"'''lie
*,,*
*,,*
"""
"'''''
"""
""Ie
Ie,,"
Ie,,"
***
"*"
Ie**
*flle
1l1e*
""Ie
"*,,
.00
.00
.00
· 00
.00
.00
.00
.00
.00
.00
.00
.00
.00
.01
;01
z:
~
...a:
:::-
a:l
::0-
1.'69
- .01
- .01
-.01
-.02
-.02
<C
1.87
2.05
2.43
2.81
3.21
1.50
1.00
2.59
2.13
1.79
2.43
1. 99
1.76
1.75
1.62
1. 79
2.48
1.79
1. 75
1.41
1. 70
1. 32
1.74
U
Z
U
a::
~
.....
LoU
\..LJ
U
...
-.01
-.01
-.01
-.01
-.01
-.01
- .02
-.01
-.01
- .01
-.01
-.01
- .01
-.01
-. 01
-.01
-.01
-.01
-.01
e:t
<C
3.90
4.10
4.28
4.63
4. ~11
5.15
3.68
3.18
4.71
4.38
4.08
4.63
4.26
4.06
4.00
3.89
4.06
3.75
4.04
4.04
3.72
4.06
3.68
4.05
Variances for 24 estimators of location based on 1000 Monte Carlo replications with first order autore9ressi~e data. Note that
a value larger than 9999.99.
....<C
V'l
a::
c::
<C
.....
:::-
W
.12
.12
.12
.12
.12
.12
.12
.11
.12
.12
.12
.12
.12
.12
.12
.12
.12
.12
.12
.12
.12
.12
.11
.12
•
57.0
57.5
S8.0
59.1
fJl1.0
60.7
56.~
54. :1
5:1.4
9;' ·1
57.7
59.4
58.2
57.S
57.9 I
57.3
57.9
59.1
57.i
57.4
56.9
57.8
50.8
57.8 I
*** means
22
5.
Sensitivity Curves.
Tukey (1970) proposes the use of sensitivity curves
to study the finite sample behavior of estimators.
These sensitivity curves
are similar to the influence curves proposed by Hampel (1968).
In the
PRS 3
stylized sensitivity curves are computed by the following algorithm:
i.
Generate the 19 expected order statistics from a normal sample of
size 19
ii.
To these 19 values add a moving point x and compute the estiIr.ator:,
T(x).
iii.
Plot 20T(x)
as a function of x.
Stylized sensitivity curves are produced in the
estimators including
and so on.
trinID~d
PRS
for a variety of
me&iS 9 I1-estimators 9 hampels 9 skipped estimators
lVe reproduce some additional sensitivity curves.
Again we defer discussion tmtil section 6.
l"~
N
FIGURE 5.1.
Sensitivity curve for 10% symmetrically trimmed
mean, 10%.
~
o
<:)
I\)
~
o
<:)
co
<:)
<:)
I
<:)
co
<:)
<:)
I
I\)
..p
<:)
I
~
<:)
<:)
I
I
·----t---:---~-;--~_:_;;;_-~I~
o.
-5.00
I
-3.75
I
-2.50
I
-1. 25
I1. 25
i
"~.)~
~r
I
'3.75
5.00
24
OO'SOO'E>OU'1"
00'1
OO'S
OO'S
l
r
I
.
+
I
-+
1
-----.-+1------1
i
(Jl
Cl
Cl
I
w
--.I
(Jl
"
'"rn
.......
c
;0
(J1
N
Vl
I
f\)
(Jl
C)
m
='
III
......
c-+
......
<
......
c-+
~
(')
I
>-"
f\)
(J1
c
-s
<
m
--+,
0
-s
3
m
Ql
Cl
::::s
3:
f\)
(J1
f\)
(Jl
<J
U1
o
Cl
Lon
N
FIGURE 5.3.
Sensitivity curve for Mestimate, k=1.5, D15.
-+:
C)
o
f\)
-+:
o
o
co
o
o
J
o
co
o
o
I
f\)
-+=
o
1
\
:" -5. 00
o
o
I
-3.75
I
-2.50
I
-1. 25
I
o.
I
1. 25
I
2.50
I
3.75
I
5.00
\0
N
FIGURE 5.4.
Sensitivity curve for l-step hampel estimate, 25A.
-+:
o
o
l\)
..p
o
o
co
o
o
I
o
co
o
o
I
l\)
..p
o
I
..p
o
o
\
I
I
I
I
I
I
,
I
-5.00
-3.75
-2.50
-1. 25
o.
1. 25
2.50
3.75
5.00
r--N
FIGURE 5.5.
Sensitivity curve for mildly adaptive hampel estimator, ADA.
-+:
o
o
I\)
...p
o
o
00
o
o
I
o
00
o
o
I
l\)
...p
o
I
\
:+= -5.00
o
o
I
I
I
I
I
I
I
I
-3.75
-2.50
-1.25
O.
1.25
2.50
3.75
5.00
28
S9" 8-"
I
Sf. "1-
I
SL "1
I
Sg"g
I
<.Jl
o
o
1
c..u
-..J
<.Jl
"T1
......
G'>
c::
I
;:0
rrl
U1
(J1
f\j
0
'"
VI
(I)
:::::s
.-.I
1'0
(J1
....
....
<
VI
M-
......
M-
'<
(')
c:
~
0
<
(I)
-+,
0
~
M-
=s
(I)
0
r--.~
c:
M-
1'0
(J1
(I)
~
I
3
(I)
OJ
:::::s
0
f\)
U1
C>
c..u
-..J
(J1
3:
0'.
N
FIGURE 5.7.
Sensitivity curve for Hogg adapter using OM, Mand trimmed means, HG1. Notice
that because of the "regular" spacing of the quantiles, this styl ized curve does
not reflect the adaption to OM.
-p
o
C)
ED
~
o
<:)
co
o
o
1
o
co
o
o
1
ED
..p
o
1..p
o
o
-5.00
-3.75
-2.50
-1.25
o.
1.25
2.'30
I
I
3.75
5.00
C
l"':
FIGURE 5.8.
Sensitivity curve for Hogg-type adaptor based on trimmed means, HG3.
""'o "
o
f\)
..p
o
o
ro
o
o
I
0.
ro
o
o
I
f\)
.-!"
o
1. -1
.
o
o
·-5.00
I
-3.75
I
-2.50
I
-1.25
I
O.
I
1.25
I
2.50
I
3.75
I
5.1)0
......
l"'l
FIGURE 5.10.
Sensitivity curve for a Hogg-type adaptor using hampe1s, 1.81*A.
-+=
o
o
I'\)
..p
o
Cl
CO
o
C)
1
o
co
o
o
I
I'\)
...p
o
I
--t
:-P -5.00
o
o
I
I
:
I
I
I
I
I
·-3.75
-2.50
-1.25
O.
1.25
2.50
3.75
5.00
N
t"l
FIGURE 5.9.
Sensitivity curve for Hogg-type adaptor using hampels, 1.95A.
~
o
o
ID
~
Cl
~
=
co
C)
,
Cl
;l
Cl
I
ID
.~
=
j
...p
o
C>
I
I
I
I
I
-5.00
-3.75
-2.50
-1.25
O.
:~
I
1. 25
I
'2.S 0
i
3.75
.5. __
t"'l
t"'l
FIGURE 5.11.
Sensitivity for 1-step rank estimator with uniform scores,
start = adaptor, RUA .
. .p
C>
o
I'\)
.-p
o
C>
co
o
C>
I
C>
co
o
C>
I
I\,)
.-p
o
J
~
C>
C>
-5.00
-3.75
-2.50
I
I
-1.25
O.
,
'... 25
c.,
~_
3. 75
S.Q,]
~
trl
Sensitivity curve for 1-step, rank estimator with uniform scores,
start = 50%, RU50% .
FIGURE 5.12.
...p
<::)
Cl
f\.)
...p
Cl
=
co
Cl
=
J
=
co
=
Cl
I
10
-f"
Cl
I
I
~ -5.00
<::)
Cl
I
I
1
-3.75
-2.50
-1.25
I
o.
!
1.25
1
1
2.58
3.75
i
5.00
L.I')
tr.
FIGURE 5.13.
Sensitivity curve for normal scores rank estimator, RN. The estimator
was computed by a 12 step method of bisection using the median as initial
estimate. Use of other starting estimators will give slightly different
sensitivity curve.
-f"
a
Cl
f\.)
--I>
Cl
Cl
co
Cl
Cl
i
Cl
co
o
o
I
f\.)
-.j>
o
I
I
:-P -5. Q0
o
Cl
I
--3.75
I
-2.50
I
-1.25
I
0_
I
1. 25
I
2.50
I
3.75
I
5./)0
36
6.
Discussion and Conclusions. Tables 4.1 and 4.2 contain Jnany estimators
with fairly comparable variance or pseudo-variance.
In order to JJ1.ake the
better estimators more apparent, we have constnlcted an additional table,
Let Ti , i = 1, ..• ,1000 represent the 1000 observations of an
estimator of location, and let
Table 6.1.
be the sample variance. Assuming the T i 's are approxll1ately nonnal, then
looosi
---~2~ is distributed approximately as a x2 with 999 degrees of freedom.
°T
Accordingly, the variance of si is .0019980* and the variance of 20si is
.79920i as pointed out earlier.
An estimate of the standard deviation of si
can be found by calculating {.001998 s~ = .0447S~.
Table 6.1 was constructed
by calculating the estimated standard deviation for the estimator with the
smallest variance.
The variance of any estimtor, whose estimated variance
was no more than one standard deviation from the minimal variance, was replaced
with a plus sign
l~lile
the variance of any estimator, whose estimated variance
was more than 5 standard deviations from the minimal variance, was replaced
with a r.linus sign. All others were left blank.
The resultant picture, Table
6.1, leaves a clear impression of the consistently stronger estimators.
In terms of the 4 ligllt-tailed alternatives the estimator of choice appears
to be the outer mean.
From Tables 4.1 and 4.2 we observe that HGl, RN and
also perfOl1n creditably but significantly more poorly than
varaince also suggests that the mean,
Iii,
a~.
J~H
The pseudo-
is not too bad.
We feel that the uniform is a highly artificial (notional in
PRS)
situatim,
The contamination of a normal by a distribution whose support is a bounded intel
val seems less so.
lVhile this very small selection of sllort-tailed alternative:
is inadequate for sweeping comnittments to certain types of estimators, it is
very suggestive of what is reasonable.
37
-- -- '0-0
~
%
M
~
a
w
I.J.J
Cl
~
u
0
%
1
2
S%
10%
3
19%
4
25%
S
38%
6
7
50%
~I
8
OM
9
10
010
11
12
13
14
15
16
DIS
020
12A
17A
21A
22A
25A
17
ADA
18
19
20
HG2
21
22
23
24
2S
26
27
28
29
30
:51
3~
HGl
HG3
HG4
HGS
HG6
l.81A
1. 81'itA
1.90A
1.95A
2.00A
1.810
1.900
1.9SD
1.60D
33
1. 75D
34
G\S
A."IT
35
:56
37
H/L
38
RN50
RN
39 . RNA
40 Rl.ISO
41
RUA
42 J,0H
5T4
43
+
0
~
~
%
-
..,
%
0
+
0
+
:::>
~
:::>
-
+
-
-
-
-
-
--
_
~
en
_
- -Ci Ii
10
0
l~
z+
Z
0
~
z
IN
I
+
z:
.
!
+
+
-
-
_
en
I~
Z
~
0
~
:::>
+
-
_
..,
+
+
...
en
en
~
a
~
a
0
~
Z
0
..,
~
~
+
z:
z:
+
+
...
0
en
Z
+
~
~
Z
Lrl
~
a
~
Z
N
...0"
+
z:
-
0
0
Z
0
.
~
a
~
z
..,
~
N
z+
z+
z:
-
-
-
+
+
-
0a
+
+
-
...
-
-
...
+
... - - +
- - - - +
- - - - - +
- - - +
+
- - - - - +- ...- - - - ... + +
- +
- - ... ... + + ... ...
+
... ...
- ... +
+
+
+
+
- ... + +
+
+
- - + ...
... ...
... ...
+
-
-
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -- -- -- -- - - - - - - - - - - - - - - - -- -- -- - - - -
+
...
...
...
+
...
~
+
+
+
+
...
...
...
...
+
-
-
+
...
...
...
...
...
+
+
...
+
...
+
...
...
+
+
...
+
+
+
+
+
+
...
+
+
+
+
...
...
+
...
...
...
...
...
+
+
...
+
+
+
+
...
...
...
+
...
...
+
+
+
+
- ...- - +-
+
...
...
+
+
...
...
+
+
...
+
+
+
+
+
+
...
+
...
...
...
...
...
...
+
+
+
+
...
+
...
+
...
+
+
+
+
...
...
-
0
~
z+
...
Z
+
+
+
-
--
...
...
...
...
-
0
LO
+
?
-
+
+
+
+
...
0
~
lu 12:
+
en
N
+
I ==
I
- I
+
+
+
-
- -
-
...
-
...
II
~
--...
z
-
-
+
-
I
I
,
-
-
-
-
+
+
+
+
+
+
...
...
...
+
+
...
+
+
...
...
...
...
...
+
I,
I
...
+
+
...
...
...
+
+
+
...
+
+
-
+
+
...
...
...
...
I
-
+
...
+
+
+
-
-
...
...
+
...
+
- ... +
- ...- + - ++
... -
...
+
-
...
+
...
-
-
-
...
---l
VARIANctS
Table 6.1.:
I
I
I
- ...
+
I
...
+
...
i
...
+
+
-
~
--...
z:
+
+
+
...
...
I
I
+
+
-
+
+
...
...
M
u
...
-
+
...
+
-
+
+
~
...
lu IIg
Deviations of the variances of -l3 estimators of location. A plus, .... means the
_dance of the estimatoT is within one standard errOT of the mininn.ml variance
while a minus. -. me:ms the variance of the estimator is rore than 5 standard
erTors away from that of the smallest variance.
00
l"l
c(
0
N
O
N(O,l)
N+.01N(0,9)
N+.025N(0,9)
N+.05N(0,9)
N+.10N(0,9)
N+.15N(0,9)
N+.25N(0,9)
N+.05N(0,100)
N+.10N(0,100)
N+.25N(0,100)
N+.03C
N+.10C
N+.50C
C
N+.10N/U
N+.25N/U
-.010
-.030
-.009
-.008
.007
.020
.030
-.034
-.142
-.233
-.018
-.025
- .119
-.278
-.025
- .094
«
........
,....
«
,....
«
N
«
l!"l
N
N
N
-.029
-.069
-.034
-.024
.000
.013
.062
-.017
.007
.065
-.044
-.016
.012
.031
-.008
.007
-.019
-.030
-.017
.000
.007
.020
.025
.000
.015
-.033
-.018
-.008
.006
-.078
.008
.013
-.047
-.050
-.051
-.016
.000
.000
.000
- .017
.015
.013
- .044
-.033
_.023
-.044
-.008
.007
.000
.000
.009
.008
.007
.007
-.029
.009
-.007
-.179
.009
.017
-.083
- .172
.017
-.007
«
0
0:(
,....
ofC
00
CO
r--
.
«
0
«
0
N
.
~
<
,....
r--
r--
-.029
-.040
-.026
-.031
-.028
-.020
.015
-.026
-.029
.120
-.026
-.033
.000
.095
-.025
-.032
-.019
-.020
-.017
-.016
-.007
.000
.010
.000
.007
.037
-.009
-.008
.006
.015
.008
.000
-.019
-.030
-.034
-.016
-.014
-.013
.020
-.026
-.029
.150
-.026
-.033
.029
.153
-.017
- .013
.000
-.010
-.009
-.008
-.007
.007
.000
-.008
.000
.051
.000
-.008
-.006
.003
.000
.000
0
.
.010
-.040
-.017
.000
.000
.013
.035
-.017
-.029
.083
-.026
-.025
.000
.105
-.008
.020
!-
;;.;:
-J
.......
o:::t
t-
c:(
::I:
l!"l
-.084
-.059
-.017
- .008
.052
.013
.020
.000
.015
.007
.000
-.030
.000
-.016
.007
.007
.020
.050
- .179
-.294
-.047
-.069
-.051
-.062
-.056
-.032
.015
-.067
-.029
-.214
-.018
-.008
.006
-.039
.008
.000
-.018
-.033
-.144
-.322
-.041
-.129
-.069
-.041
.047
.098
- .049
-.013
VARIANCES
Table 6.2.:
Logarithm of the relative efficiency. Efficiency in this table is computed as the ratio of the variance of
1.95A to the variance for various alternatives.
0'.
t")
I
a
c:::x::
,.....
0
r-
N
N(0,1)
N+.01N(0,9)
N+.025N(0,9)
N+.05N(0,9)
N+.10N(0,9)
N+.15N(0,9)
N+.25N(0,9)
N+.05N(0,100)
N+.10N(0,100)
N+.25N(0,100)
N+.03C
N+.10C
N+.50C
C
N+.10N/U
N+.25N/U
-.028
.021
.000
.017
.058
-.007
.010
.000
-.163
- .472
-.016
-.064
-.372
-.301
-.060
-.017
-.028
-.051
-.057
.052
.007
.028
.010
.008
-.035
-.037
-.038
-.032
-.006
.050
-.043
-.028
c:::x::
rN
-.019
.021
-.033
.008
.043
.063
.031
.031
.014
- .143
.056
-.024
-.034
-.027
.018
-.077
c:::x::
N
N
-.028
-.021
-.095
.017
.043
.042
-.064
.008
-.021
-.081
.090
-.032
.018
.066
-.017
-.028
I
I
c:::x::
c:::x::
N
c:::x::
0
Ll")
.000
.010
.025
.000
.007
.021
-.054
.000
-.042
-.348
.016
.016
-.079
-.076
.045
-.061
-.009
-.010
-.041
-.073
-.021
-.053
.000
.039
-.014
.068
.016
-.008
-.034
.241
-.026
-.098
I
c:::x::
c:::x::
c:::x::
c:::x::
co
r""co
0'1
a
a
r-
r-
r-
N
r-
-.019
.000
-.041
.008
-.007
-.027
.015
.031
-.062
-.032
.064
-.024
-.006
.050
.018
-.006
-.028
.000
-.041
.000
-.034
- .040
.010
.000
-.055
.700
.048
-.094
.073
.195
-.084
-.039
a
.
.000
.010
-.041
-.025
-.007
-.027
-.015
.000
-.035
.033
.000
.000
-.023
.000
-.026
-.011
.
-.019
.021
-.033
.008
.043
.042
.020
.071
-.048
.068
.056
-.024
-.068
.148
-.034
-.039
I"':"
~
.112
-.031
-.033
-.008
.014
.085
-.015
.031
.000
-.127
.048
-.016
-.095
.100
.018
-.087
...J
::I:
.029
.065
.017
.070
.043
- .040
-.035
.000
-.163
-.495
.000
- .048
-.343
-.350
-.043
-.077
o:::t
ILl")
-.056
-.021
-.025
.000
-.028
-.097
.031
-.045
-.048
- .191
.048
-.024
-.081
.192
- .116
-.039
PSEUDO-VARIANCES
Table 6.3.:
Logarithm of relative efficiency. Efficiency in this table is measured as the ratio of pseudo-variance of
1.95A to the pseudo-variances for various alternatives.
40
Just as drronatic as the
Of1~s
good performance in the short-tailed situa-
tions is its bad performance in every long-tailed situation.
Table 6.1 also
distinguishes several classes of strong perfonners in heavy-tailed situations.
As a class, the hampels, numbers 12 through 17 appear strong in spite of the
appearance of several minuses.
The adaptive hmrrpels, numbers 24 through 28
also appear very strong with the added bonus of no minus signs.
Single esti-
mators worthy of mention here include D20, NIT, H/L, RNA and RUA.
The estimators, RNA. and
adaptive initial
est~~tor.
are one-step ran.l( estimators with an
RUA~
There seems to be evidence that the relatively
good perfonnance in these cases is due to the relatively good. initial estimator
rather tharl the one-step rank estliaator itself.
estimators,
R1\JSO%
For
example~
the corresponding
and RUSO%, based on the median as an initial estimator do
not perfonn particularly well.
Also notice in Figures 5.11 and 5.12, the sen-
sitivity curves portrayed there are essentially those for the initial estimator,
FaT
this reason ~
Vie
excluded EN.l\. and RUA from the following comparisons.
In the PRS, the concept of deficiency is introduced in order to provide a
comparison of two estimators.
The efficiency of an
estll;~tor
under test rela-
tive to a standard estimator is defined by
efficiency =
varia...Tlce of standard
variance
and the deficiency by
deficiency
=1
-
efficiency.
Notice that a negative deficiency means the estimator is more efficient than
the standard.
Thus deficiency is centered at 0 with negative meaning the
41
stEm6Jrd is less efficient and positive meaning more efficient.
We feel the advantage of the zero reference point is outweighed by the
following anomaly.
~
In a sense, efficiencies of 2 and
mean the same thing
(interchanging the roles of the standard estimator with the test estimator).
The corresponding deficiencies of
~
and -1 are not syrrmetrically located
about 0 and, on the intuitive level, not apparently related.
Rather than compute deficiency? we have computed the natural logarithm of
the likelihood ratio which also has a 0 reference (for efficiency 1) and is
symmetrical in the sense that
~n2
= -~nh.
The logarithm of the efficiency
also has the advantage that in order to shift
tion is necessary.
Tables 6.2 and 6.3 give
standards~
logarith~
only a simple subtrac-
of efficiencies for a
variety of the better perforning estimators relative to 1. gSA.
To illustrate
the change of standard, the log efficiency of D20 relative to 1.95A under
Cauchy alternative is
1.9SA is
-.278 while the log efficiency of 1. 8l*A relative to
.153. Thus the log efficiency of D20 relative to 1.8l*A is just
-.278 - .153
= -.431.
Note that a negative log efficiency means the standard
is more efficient while a positive log efficiency means the standard is less
efficient than the test estimator.
In Tables 6.2 and 6. 3~
unifornJy the best.
adaptive hanpels.
1. gSA ''las used as a standard although it was not
However ~ we feel it adequately represents the class of
For the mOElent? let us consider Table 6.2 based on variances
When compared to D20, ADA, ElL and ST4>
that there are more negatives than positives.
1. gSA
dominates in the sense
In these cases, the magnitude of
the negative log efficiency is generally much greater than that of positive
log efficiency.
For example, the largest negative log efficiency for D20 is
42
-.278 while the largest positive log efficiency is only
exception to this rule in these cases is the positive
point out that relative to
1. 81 *A~
.030.
.120
The only clear
for ADA.
We can
ADA does not exhibit this behavior.
When compared to the hampels, the adaptive hampels do not usually dominate
in terms of sign.
However ~ the worst case behavior still applies.
25A has a positive loe efficiency in 10 of the 16 cases.
positive log efficiency is only
ciency is
- .179.
For
.017
~n1ereas
For example:i
However, the maximum
the smallest negative log effi-
22A the maximum positive log efficiency is
the worst negative case is
.015 while
-.051, and so on.
Table 6.3 is based on pseudo-variances,
AgaiIl the typical pattern is that
the adaptive harnpels do not unifonnly dominate the other
estirnators~
but their
worst case bel1avior is usually considerably better than that of the other estimators.
To
sW~1marize,
ciency in SOrle
with adaptive
cases~
hampels~
one may lose a slight bit of effi-
but gain a rather large amount in others.
we say that if hampels are
good~
adaptive hampels are better.
There is also some question in our minds whether QI and Q2
suitable measures of heavy-tailedness.
higher efficiencies.
tails,
In this sense
are the most
Other more precise TIleasures may yield
Also we feel work needs to be done in adapting to light
A more sensitive measure of light-tailedness may improve the efficienciE:.
there also.
Finally, vie mention the possibility of adaptive estimators in a
tim.e series context.
In regard to adaptive estimation,
to the
formulated~
PRS.
Properly
Vie
must take exception
blata"'1tly adaptive estimators perform at
least as well as non-adaptors for sample sizes less than 40.
43
Sensitivity curves for a variety of estimators were produced in the
Figures 5.1 through 5.4 reproduce for completeness curves fmm.d in the
Our
PRS.
PRS.
figures 5.3 and 5.4 contain very slight irregularities in their central
portions which we attribute to the use of the mediaTl as a starting estimator
in the one-step procedures.
which was not given in the
Figure 5.5 is the sensitivity curve for ADA
PRS.
Figtrre 5.6 is the outer mean I s curve.
that it is V1insensitivel! to middle observations.
curve for the Hogg 4 way adaptor 7 HGI.
Notice
Figure 5.7 is the sensitivity
The point of interest here is that
there is no i1insensitivityl1 to middle values.
Thus 7 in this stylized plot 7
HGI never has an opporttmity to adapt to ON - the tails of the nonnal quantiles being too long to allow this adaptation.
In general one may suspect that
tIle concept of stylized sensitivity curves is not appropriate to adaptors.
This is true of those adaptors appearing in the
PRS
also.
With these comments we remark that figures 5.8 through 5.10 represent
other adaptive estimators.
Figures 5.11 and 5.12 are of interest as mentioned
before because they are essentially the sensitivity curves for a hampel and for
the median.
slow
It appears to us as though the convergence of rank estimators is
and~ hence~
that one-step raJlk estimators are unsatisfactory for this
reason.
Figure 5.13 portrays the sensitivity curve for a normal scores rank estil:Jator.
The general raggedness again suggests slow convergence.
of 12 steps was used.
Here a maximum
Possibly ~ more steps would have ir.Iproved the performance
of this estimator and the appearance of the curve.
Finally we may address ourselves to Table 4.3 and the time series alternative.
A word on the design of the sampling situations is in order.
For the
44
first order autoregressive model
a negative value of p guarantees a spectrum domLlated by high frequencies.
In this circumstance observations will occur on both l1 sides i i of the center.
For positive values of p low frequencies dominate.
particularly for
p
Hence for positive p,
close to 1 the time series could make long excursions on
one nside of the center.
li
Thus the difficulty in location estimation in a time
series occurs when p .. is positive and close to one or more generally, when the
time series is close to nonstationarity.
Gastl'Jirth and Rubin (1975) discuss
robust estimation in precisely this situation.
In particular, they derive effi-
ciencies of several estimators relative to the mean M. Unfortilllately as Table 4.3
clearly demonstrates M is on the whole illlsatisfactory.
It is our assessment
that no estimator presently fashionable is very satisfactory LT1 the presence of
positively correlated data.
A more complicated time series situation might have been to examine a
time series which is contaminated by (possibly) uncorrelated observations.
The:
geneYal perfonnance as seen in Table 4. 3did not seem to warrant this investigation , however.
45
List of References
Andrews, D. F., Bickel, P.J. ~ Hampel? F. R., Huber, P.J., Rogers, W.H., and
Tulcey, J.W. (1972), Robust Estimates of Loaation: Survey and Advanaes~
Princeton U. Press, Princeton, N.J.
Bickel, P.J. (1975), One-step Huber estimates in the linear model,
Statist. Assoa., 70, pp. 428-434.
J.
Am.
Box, G.E.P. and lV'lU1ler, M.E. (1959), A note on the generation of random nonnal
deviates, Ann. Math. Statist., 29 3 pp. 610-611.
Gastwirth, J. (1966), On robust procedures, J. Am. Statist. Assoa., 61, pp. 929948.
Gastwirth, J. and Rubin, H. (1975), The behavior of robust estiIllators on dependent data, Ann. Statist., 3, pp. 1070-1100.
Hampel, F. (1968), Contributions to the Theory of Robust Estimation, Ph.D.
Dissertation, Berkeley.
Hampel, F. (1974), The influence curve and its role in robust estimation,
Am. Statist. Assoa., 69, pp. 383-393.
J.
Hodges, J.L. and Lehmann, E.L. (1963), Estimates of location based on raru<
tests, Ann. Math. Statist., 34, pp. 598-611.
Hogg, R.V. (1974), Adaptive robust procedures: a partial review and some suggestions for future applications and theory, J. Am. Statist. Assoa., 69,
pp. 909-923.
Huber, P.J. (1964), Robust estimation of a location parameter, Ann. Math.
Statist., 35, pp. 73-101.
Jolms, 11. V. Jr. (1974), Nonparametric estimators of location,
Assoa., 69, pp. 453-460.
J.
Am. Statist.
Kraft, C. and Van Eeden, C. (1969), Efficient linearized estlli~tes based on
ranks, Proaeedings of the First International Symposium on Nonparametria
Statistias, Cmrrbridge U. Press, pp. 267-273.
Paley, R.E.A.C. and Wiener, N. (1934), Fourier Transforms in the CompZex Domain,
American Hathematical Society, Providence, R.I.
Parzen, E. (1962), On the estimation of a probability density flIDction and mode,
Ann. Math. Statist., 33, PP' 1065-1076.
Schweder, T. (1974), Window estimation of the asymptotic variance of rank
estimators of location, unpublished manuscript.
© Copyright 2026 Paperzz