On Sequential Elimination Procedures

by

Raymond J. Carroll

Department of Statistics
University of North Carolina at Chapel Hill

Institute of Statistics Mimeo Series No. 1078

July, 1976
On Sequential Elimination Procedures
by
Raymond J. Carroll*
Department of Statistics
University of North Carolina at Chapel Hill
Summary
The asymptotic properties of a general class of nonparametric sequential ranking and selection procedures which possess an elimination feature are studied. If the correct selection probability is to be at least 1 - α and the length of the indifference zone is Δ, different results are obtained as α → 0 depending on whether one assumes Δ fixed or Δ → 0. A Monte-Carlo study confirms the superiority of elimination procedures.
* This
research was supported by the Air Force Office of Scientific Research
under Grant No. AFOSR-75-2796.
Key Words and Phrases: Sequential Analysis, Elimination Selection Rules, Asymptotic Distributions.

AMS 1970 Subject Classifications: Primary 62F07; Secondary 62L12.
Introduction

In the large literature on sequential ranking and selection (Bechhofer, Kiefer and Sobel (1968)), surprisingly little work has been devoted to procedures which are asymptotically nonparametric in nature and which eliminate obviously inferior populations early in the experiment (see Swanepoel (1976) for a recent approach). It is the purpose of this paper to propose and study a general class of sequential eliminating procedures which includes as special cases Swanepoel's rule as well as a competitor to a noneliminating rule mentioned by Wackerly (1975).

We take the indifference zone approach that the best and second best populations are Δ (Δ > 0) units apart and attempt to guarantee a correct selection (CS, the selection of the best population) with probability at least 1 - α. Suppose N is the number of stages in the experiment. As a sample of our results, we show (Theorem 1) that a normed version of N looks very much like a linear combination of the "mean" and "variance" on which the selection is based. Surprisingly, the conclusions differ as α → 0 depending on the cases Δ fixed or Δ → 0, which eventually leads us to the conclusion that there truly is a cost of ignorance in estimating the variance.

The class proposed includes many procedures in the literature, including one which asymptotically takes only a fraction of the observations needed by the Robbins, Sobel and Starr (1968) procedure. The theory is investigated in a convincing Monte-Carlo study.
In the general problem, we have k populations π_1, ..., π_k and a sample X_{i1}, X_{i2}, ... from π_i with distribution function F_i. Statistics T_{in} are formed from X_{i1}, ..., X_{in}, and for some constants μ_i, σ_i, n^{1/2}(T_{in} - μ_i)/σ_i is asymptotically normal. Consistent estimates σ_{in}² of σ_i² are assumed to be available. Assume without loss of generality that μ_1 ≤ ··· ≤ μ_k and that it is desired to select the population with the largest value of μ_i. The indifference zone approach adopted here assumes μ_k - μ_{k-1} ≥ Δ.

Let h be a given nonnegative function. Then, at the nth stage of the experiment we propose to eliminate population π_i if there is a population π_j still in contention at the nth stage for which

    T_{jn} - T_{in} ≥ h(α,n)(σ_{in}² + σ_{jn}²)^{1/2}/n^{1/2} - Δ.

The experiment stops at the Nth stage when only one population remains.
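To make the rule concrete, here is a minimal simulation sketch of the elimination procedure, assuming sample means for the statistics T_{in} and sample variances for the estimates σ_{in}²; the boundary function h(α,n) is supplied by the user, and the function names and default settings are illustrative rather than taken from the paper (the initial sample size m = 10 matches the Monte-Carlo section).

```python
import numpy as np

def eliminate(samplers, h, alpha, delta, m=10, max_stages=10_000, rng=None):
    """Eliminate population i at stage n if some population j still in contention has
    T_jn - T_in >= h(alpha, n) * sqrt(s_in^2 + s_jn^2) / sqrt(n) - delta.
    `samplers[i](rng)` draws one observation from population i (illustrative setup)."""
    rng = rng or np.random.default_rng()
    k = len(samplers)
    data = [[samplers[i](rng) for _ in range(m)] for i in range(k)]
    alive = set(range(k))
    for n in range(m, max_stages + 1):
        T = {i: np.mean(data[i]) for i in alive}           # statistic T_in (sample mean here)
        V = {i: np.var(data[i], ddof=1) for i in alive}    # variance estimate sigma_in^2
        bound = h(alpha, n) / np.sqrt(n)
        dropped = {i for i in alive
                   if any(T[j] - T[i] >= bound * np.sqrt(V[i] + V[j]) - delta
                          for j in alive if j != i)}
        if dropped == alive:        # mutual elimination (possible once the boundary drops below delta)
            dropped.discard(max(alive, key=T.get))
        alive -= dropped
        if len(alive) == 1:
            return alive.pop(), n                          # (selected population, stage N)
        for i in alive:                                    # one further observation per survivor
            data[i].append(samplers[i](rng))
    return max(alive, key=lambda i: np.mean(data[i])), max_stages
```

With k = 2 the procedure reduces to the stopping time N_1 displayed below.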
As we will show, these elimination stopping rules contain those proposed by Swanepoel (1976) and Swanepoel and Geertsema (1976), as well as an eliminating version of Wackerly's (1975) rule.
When k = 2, the number of stages in the experiment becomes the first n at which either

    T_{2n} - T_{1n} - (μ_2 - μ_1) ≥ h(α,n)(σ_{1n}² + σ_{2n}²)^{1/2}/n^{1/2} - Δ - (μ_2 - μ_1), or

    T_{2n} - T_{1n} - (μ_2 - μ_1) ≤ -h(α,n)(σ_{1n}² + σ_{2n}²)^{1/2}/n^{1/2} + Δ - (μ_2 - μ_1).

Letting d = Δ + μ_2 - μ_1, T_n = T_{2n} - T_{1n} - (μ_2 - μ_1) and σ_n² = σ_{1n}² + σ_{2n}², this is

    N_1 = inf{n ≥ 1: T_n ≥ h(α,n)σ_n/n^{1/2} - d  or  T_n ≤ -h(α,n)σ_n/n^{1/2} + Δ - (μ_2 - μ_1)}.
We are going to investigate N_1 as α, d → 0, and in Lemma 3 we present conditions under which Pr{CS} → 1 as α → 0 (α, d → 0). Thus, for purposes of computing the distribution of N_1 and finding constants n_α for which N_1/n_α →_P 1, it will suffice to consider

    N = inf{n ≥ 1: T_n ≥ h(α,n)σ_n/n^{1/2} - d}.
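A direct transcription of this one-sided stopping rule, for observation streams from the k = 2 reduction above (the use of sample means and variances here is an assumption for illustration only):

```python
import numpy as np

def stopping_time_N(x1, x2, h, alpha, d, mu_diff, m=10):
    """N = inf{n >= m: T_n >= h(alpha, n) * sigma_n / sqrt(n) - d}, where
    T_n = mean(x2[:n]) - mean(x1[:n]) - mu_diff and sigma_n^2 = s_1n^2 + s_2n^2."""
    for n in range(m, min(len(x1), len(x2)) + 1):
        T_n = np.mean(x2[:n]) - np.mean(x1[:n]) - mu_diff
        sigma_n = np.sqrt(np.var(x1[:n], ddof=1) + np.var(x2[:n], ddof=1))
        if T_n >= h(alpha, n) * sigma_n / np.sqrt(n) - d:
            return n
    return None  # boundary not crossed within the available data
```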
Asymptotic Normality of N
The standing assumptions of this section are that (n^{1/2}T_n, n^{1/2}(σ_n² - σ²)) are jointly asymptotically normal, uniformly continuous in probability (Anscombe (1952)), and n^{1/2}T_n = O(log_2 n) (a.s.). These conditions hold for the sample mean, M-estimators and linear functions of order statistics (and their variance estimates) under a variety of conditions (Carroll (1975)).
Theorem 1  Suppose as α → 0,

(2.1a)    h(α,N) → ∞  (a.s.),

(2.1b)    h(α,N) - h(α, N-1) → 0  (a.s.),

(2.1c)    there exist constants n_α(d) for which N/n_α(d) →_P 1.

Then for some constant A(F,d), as α → 0,

(2.2)    (σ²h²(α,N) - d²N)(4d²n_α(d))^{-1/2} = N^{1/2}(T_N - d(σ_N² - σ²)/2σ²) + o_p(1),

(2.3)    (σ²h²(α,N) - d²N)(4d²n_α(d))^{-1/2} →_L N(0, A(F,d)).
Proof of Theorem 1  By (2.1a), N → ∞ (a.s.), so that

(2.4)    h(α,N)/N^{1/2} → d/σ  (a.s.).

Now, by definition, σh(α,N) - dN^{1/2} ≤ N^{1/2}T_N - h(α,N)(σ_N - σ). Also, h(α, N-1)σ_{N-1} ≥ (N-1)^{1/2}(T_{N-1} + d), so a little algebra, together with the fact that uniform continuity and (2.1c) imply N^{1/2}(T_N - T_{N-1}) →_P 0 and N^{1/2}(σ_N - σ_{N-1}) →_P 0, yields

    N^{1/2}T_N - h(α,N)(σ_N - σ) ≤ σh(α,N) - dN^{1/2} + σ_{N-1}h(α,N)((1 - 1/N)^{-1/2} - 1)
                                       + (1 - 1/N)^{-1/2}σ_{N-1}(h(α, N-1) - h(α,N))
                                   = σh(α,N) - dN^{1/2} + o_p(1)    (by (2.1b) and (2.4)).

Together with the first inequality this gives N^{1/2}T_N - h(α,N)(σ_N - σ) = σh(α,N) - dN^{1/2} + o_p(1). Now, (2.1c), (2.4) and the uniform continuity of σ_n show that h(α,N)(σ_N - σ) - dN^{1/2}(σ_N² - σ²)/2σ² = o_p(1), yielding

    σh(α,N) - dN^{1/2} = N^{1/2}(T_N - d(σ_N² - σ²)/2σ²) + o_p(1).

Hence σh(α,N) - dN^{1/2} is asymptotically normal, and (2.1c) and (2.4) thus show (2.2) and (2.3), completing the proof.  □
The choice of h(α,n) below is suggested by Swanepoel and Geertsema for the normal case with σ_1² = ··· = σ_k² = σ² (known). They show that, if μ_2 - μ_1 ≥ Δ, Pr{CS} ≥ 1 - {1 - Φ(β_α) + β_α φ(β_α)} ≥ 1 - α.

Lemma 1  Let h(α,n) = (β_α² + c log n)^{1/2}, where c ≥ 0 and β_α satisfies φ²(β_α)/Φ(β_α) = α. Define n_α(d) = (β_α σ/d)². Then, as α → 0, N/n_α(d) →_P 1 and

(2.5)    d(N - n_α(d))(2 n_α(d)^{1/2})^{-1} →_L N(0, A(F,d)).

Further, if α, d → 0 in such a way that (-log d)/β_α → 0, the result still holds.

Through the case d fixed, Lemma 1 plainly shows the effect of estimating σ², an effect disguised in the case d → 0. For example, if X_1, X_2, ... are i.i.d. N(0,1), T_n = X̄_n and σ_n² = (n-1)^{-1} Σ_{i=1}^n (X_i - X̄_n)², then

    d(N - n_α(d))(2 n_α(d)^{1/2})^{-1} →_L N(0, 1)           (d → 0, or setting σ_n ≡ σ),
    d(N - n_α(d))(2 n_α(d)^{1/2})^{-1} →_L N(0, 1 + d²/2)    (d fixed, σ² unknown).

If we fix d and let α → 0, we see that the lack of knowledge of σ² inflates the variance of N (tripling it when d = 2). Hence, there is a "cost of ignorance" due to estimating σ².
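The following sketch checks Lemma 1 numerically for the i.i.d. N(0,1) example just given; the value β_α = (-2 log α)^{1/2} is only the rough order suggested in the section on estimation by sample means, and c = 1 is an arbitrary illustrative choice.

```python
import numpy as np

def simulate_N_over_n_alpha(alpha, d, c=1.0, m=10, reps=500, seed=1):
    """Simulate N = inf{n: T_n >= h(alpha,n)*s_n/sqrt(n) - d} with T_n the sample mean of
    N(0,1) data, s_n^2 the sample variance, h(alpha,n) = (beta^2 + c log n)^(1/2), and
    return the average of N divided by n_alpha(d) = (beta*sigma/d)^2 (sigma = 1 here)."""
    rng = np.random.default_rng(seed)
    beta2 = -2.0 * np.log(alpha)                  # assumed order of beta_alpha^2
    n_alpha = beta2 / d**2
    stops = []
    for _ in range(reps):
        x = rng.standard_normal(int(30 * n_alpha) + m)
        for n in range(m, len(x) + 1):
            if x[:n].mean() >= np.sqrt(beta2 + c * np.log(n)) * x[:n].std(ddof=1) / np.sqrt(n) - d:
                stops.append(n)
                break
    return np.mean(stops) / n_alpha               # approaches 1 (slowly) as alpha -> 0

# e.g. compare simulate_N_over_n_alpha(0.05, 0.5) with simulate_N_over_n_alpha(0.001, 0.5)
```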
Proof of Lemma 1  Recall T_n = O(n^{-1/2} log_2 n) (a.s.). Since T_N ≥ h(α,N)σ_N/N^{1/2} - d, and the opposite holds if N is replaced by N - 1, we have

(2.6)    h²(α,N)σ_N²(T_N + d)^{-2} ≤ N  and  N - 1 ≤ h²(α,N-1)σ_{N-1}²(T_{N-1} + d)^{-2}.

If d is fixed, this shows n_α(d) = (β_α σ/d)² is the right choice in Theorem 1. If d → 0, we must show

(2.7)    (log N)/β_α² → 0  (a.s.).

Now, if for some sequence β_α²/(d²N) → 0, then (2.6) shows (log N)/β_α² → ∞. But with probability approaching one, (β_α² + c log N)σ²/(d²N) → 1, which would imply cσ²(log N)/(d²N) → 1 and hence that

    (log N)/β_α² ≤ (log cσ² - 2 log d)/β_α² + o(1) → 0.

This contradiction shows that there exists ε* > 0 for which β_α²/(d²N) ≥ ε* (a.s.) as α, d → 0. This gives (log N)/β_α² ≤ (log β_α² - log ε*d²)/β_α² → 0, so (2.7) holds. Now, if d is fixed or d → 0, the proof of Theorem 1 gives that N/n_α(d) →_P 1. Since (log N)/β_α² → 0, this completes the proof.  □

The next choice of h(α,n) is motivated by the normal case with unknown variance. Let b = 1 + (cd/σ)²; Swanepoel and Geertsema choose c = 2.
Lemma 2  Define h(α,n) = n^{1/2}{(t_α n)^{1/n} - 1}^{1/2}/c, where c > 0, a satisfies 1/2 - (arctan a)/π + a/((1 + a²)π) = α/2, and t_α = (1 + a²)²/2. Then, with n_α(d) = (log t_α)/log b,

(2.8)    N/n_α(d) →_P 1  and  d(N - n_α(d))(2 n_α(d)^{1/2})^{-1} →_L N(0, A(F,d)).

If as α, d → 0 there exists ε > 0 for which d(-log α)^{1/2 - ε} → ∞, then (2.8) still holds with the convergence to N(0, A(F,0)).
Proof of Lemma 2  First consider d fixed. Then, clearly N → ∞, h(α,N) → ∞ and h(α,N)/N^{1/2} → d/σ (all a.s.). Thus, t_α^{1/N} → b (a.s.). Multiplying and dividing by h(α, N-1) + h(α,N), (2.1b) will follow if N{(t_α N)^{1/N} - (t_α(N-1))^{1/(N-1)}} → 0 (a.s.). This last term is given by

    (t_α N)^{1/N}((1 - 1/N)^{1/N} - 1) + (t_α(N-1))^{1/N}(1 - t_α^{1/N(N-1)}) + (t_α(N-1))^{1/(N-1)}((N-1)^{-1/N(N-1)} - 1).

Now, t_α^{1/N} → b (a.s.), N^{1/N} → 1 (a.s.), and one shows by L'Hospital's rule that for η > 0, n^{1/2}(η^{1/n} - 1) → 0, giving (2.1b). Since (log b)N/log t_α → 1 (a.s.), Theorem 1 tells us that with n_α(d) = (log t_α)/log b, (2.8) holds as long as

(2.9)    d^{-2}{(t_α N)^{1/N} - 1} → c²/σ²  (a.s.).

Since N(N^{1/N} - 1)/log N → 1 (a.s.), α⁴t_α = O(1), and (-log d)²/(-log α) → 0, by Lehmann (1959, page 274) and a little algebra we see that (2.9) holds as α → 0, which gives the result.

If d → 0, we need only verify (2.9) above. First, since T_N → 0, t_α^{1/N} → 1, so that (log t_α)/N → 0; this gives (-log α)/N → 0. Since d(-log α)^{1/2-ε} → ∞, we get dN^{1/2}/(log N) → ∞, so by the Law of the Iterated Logarithm, T_N/d → 0 (all statements above being a.s.). Thus, d^{-2}{(t_α N)^{1/N} - 1} → c²/σ², i.e., d^{-2}{t_α^{1/N} - 1} + d^{-2}t_α^{1/N}(N^{1/N} - 1) → c²/σ². Since the second term of this last equation is of the order (log N)/(Nd²) → 0 (a.s.), we have d^{-2}{t_α^{1/N} - 1} → c²/σ². From here, the steps of Theorem 1 go through, although the algebra is a bit more complicated.  □
Denoting the centering constants in Lemma 1 by n_α^{(1)}(d) and those in Lemma 2 by n_α^{(2)}(d), we see that for c = 2,

    lim_{α→0} n_α^{(2)}(d)/n_α^{(1)}(d) = 2(d/σ)²{log(1 + 2(d/σ)²)}^{-1}.

Thus, the two choices of h(α,n) are not equivalent.

Lemma 3  Define N* = inf{n: T_n ≤ -h(α,n)σ_n/n^{1/2} - (μ - Δ)}, where μ = μ_2 - μ_1. Then, using either of the h(α,n) in Lemmas 1 and 2, as d = μ + Δ → 0, Pr{N* > N} → 1.

Proof of Lemma 3  Define M* = inf{n: T_n ≤ -h(α,n)σ_n/n^{1/2} + d/2}. Then N* ≥ M*. Under Lemma 1, M*/N →_P 4, while under Lemma 2, M*/N →_P 4 if d = μ + Δ ≥ d_0; if d → 0, the same holds for the choices of h(α,n) in Lemmas 1 and 2.  □

The upshot of Lemma 3 is that Pr{CS} → 1 as α → 0 (α, d → 0).
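The limiting ratio of the two centering constants displayed before Lemma 3 is easy to evaluate; the values of d/σ below are arbitrary illustrative choices.

```python
import math

def centering_ratio(d_over_sigma):
    """lim_{alpha -> 0} n_alpha^(2)(d) / n_alpha^(1)(d) = 2(d/sigma)^2 / log(1 + 2(d/sigma)^2)."""
    x = 2.0 * d_over_sigma ** 2
    return x / math.log1p(x)

# centering_ratio(0.25) ~ 1.06 and centering_ratio(1.0) ~ 1.82: the ratio exceeds one and
# grows with d/sigma, so the two rules are not asymptotically equivalent for fixed d.
```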
The Ranking Problem
Returning to the ranking problem, the results of the previous section
will be illustrated for arbitrary k.
We assume throughout the rest of this paper that μ_1 ≤ ··· ≤ μ_k, μ_k - μ_{k-1} ≥ Δ, and let d_i = Δ + μ_k - μ_i. Suppose d_{k-1}/d_i → ξ_i (0 ≤ ξ_i ≤ 1) and define s = s(Δ) to be the smallest integer i ≤ k-1 such that ξ_i = 1. Assume

(3.1)    max_{s≤i≤k-1} n^{1/2}{ d_i(σ_{nk}² + σ_{ni}² - σ_k² - σ_i²)/2(σ_k² + σ_i²) - (T_{nk} - T_{ni} - μ_k + μ_i) } →_L F.
The distribution of N, the number of observations taken on the selected population, will be computed in the following cases.

Lemma 4  Under the conditions of Lemma 1, N/n_α(d_{k-1}) →_P 1 and d_{k-1}(N - n_α(d_{k-1}))(2 n_α(d_{k-1})^{1/2})^{-1} →_L F.

Proof of Lemma 4  If N_i is the time it takes for population k to eliminate population i, Lemmas 1 and 3 show Pr{N = max_i N_i} → 1. But for s ≤ i ≤ k-1, N_i/N →_P 1. A multivariate extension of Anscombe's theorem, together with Theorem 1 and (2.5), completes the proof.  □

Lemma 5  Define G(x) = F(bx), where b = 1 + c²d²/σ². Under the conditions of Lemma 2, the conclusion of Lemma 4 holds with F replaced by G and with n_α(d_{k-1}) as in Lemma 2.

Proof of Lemma 5  By (2.10) and the argument of Lemma 4 we obtain the analogue of Lemma 4 for the h(α,n) of Lemma 2; a Taylor expansion, which with a little algebra, completes the proof.  □
Other Choices of h(α,n)

The choices of h(α,n) in Lemmas 1 and 2 by no means exhaust the possibilities for this function. For example, one could define h(α,n) = (r/n)^{1/2} g(n/r), where g is a given function and r = 1/Δ^a (1 ≤ a ≤ 2). This is the choice suggested by Swanepoel and Carroll for the case a fixed, d → 0. The analysis is essentially as in Lemma 1.
Another possibility arises from the recent work of Wackerly (1975) on noneliminating sequential procedures. His idea is that there is a vector n(α, Δ, F) of fixed sample sizes needed to guarantee a probability requirement under the condition that the distributions are known up to location parameters. The goal is to define a vector N of observations which satisfies e'N/e'n(α, Δ, F) →_P 1 as α → 0, where e = (1, 1, ..., 1). He is only partially successful in that the convergence is to c(μ, F) ≥ 1, with equality if and only if μ_1 = ··· = μ_{k-1} = μ_k - Δ. The following remarks sketch very briefly an elimination rule which satisfies the original goal.
We assume F_i(x) is symmetric about μ_i and define Y_{ijn} = X_{in} - X_{jn}. Then large deviation theory shows that for i > j,

    lim n^{-1} log Pr{n^{-1} Σ_{p=1}^n Y_{ijp} < 0} = λ(F, μ_i - μ_j),

where

    λ(F, δ) = log inf_t {exp(tδ/2) E exp{t(Y_{ij1} - μ_i + μ_j)}}.

Thus, if the maximal error probability desired is α and B_α²/(-log α) → 1, the correct sample size for fixed F to eliminate π_i is determined by the corresponding value of λ.

Define H̄_{ijn} = max(Δ, |n^{-1} Σ_{p=1}^n Y_{ijp}|) and H*_{ij} = max(Δ, |μ_i - μ_j|). Wackerly defined the natural estimate λ̂_n of λ(F, δ) which, for some function ψ, satisfies (if i ≠ j)

    λ̂_n(F, H̄_{ijn}) - λ(F, H*_{ij}) = n^{-1} Σ_{p=1}^n {ψ(Y_{ijp} - μ_i + μ_j) - Eψ(Y_{ij1} - μ_i + μ_j)} + o(n^{-1/2})    (a.s.).

The proof of this fact follows along the lines of Carroll (1976) and the details are omitted for the sake of brevity. One eliminates π_i at stage n if for some π_j still in contention,

    n^{-1} Σ_{p=1}^n Y_{ijp} < 0  and  n ≥ -B_α²/λ̂_n(F, H̄_{ijn}),

i.e., if

    -{λ̂_n(F, H̄_{ijn}) - λ(F, H*_{ij})} ≥ h(α,n)/n^{1/2} - d,

with h(α,n) = B_α²/n^{1/2} and d = -λ(F, H*_{ij}). The elimination stopping rule is thus of the form considered above.

If one wishes to analyze the number of stages N as α → 0, since Pr{CS} → 1, merely replace i and j in the above by k and k - 1, and then Theorem 1 applies.
Moments of N

We present a simple proof that EN/n_α(d) → 1 (confer Swanepoel and Geertsema (1976)). Consider the ranking problem with k = 2 and define M = inf{n: h²(α,n)(σ_{1n}² + σ_{2n}²) < nΔ²}. Then N ≤ M, since if N > M were true, then

    max(T_{2M} - T_{1M}, T_{1M} - T_{2M}) ≥ 0 > h(α,M)(σ_{1M}² + σ_{2M}²)^{1/2}/M^{1/2} - Δ,

the final inequalities following from the definition of M. Thus EN/n_α(d) → 1 if M/n_α(d) is uniformly integrable. By Bickel and Yahav (1968), it suffices to show that for some α_0 > 0,

(5.1)    Σ_{p=1}^∞ sup_{0<α<α_0} Pr{M/n_α(d) > p} < ∞.
Assume (σ_{1n}² + σ_{2n}²) has bounded r-th moments for n large (call this bound c_0); then the sum in (5.1) is bounded by (setting m = p n_α(d))

    Σ_{p=1}^∞ sup_{0<α<α_0} Pr{h²(α,m)(σ_{1m}² + σ_{2m}²) > mΔ²} ≤ Σ_{p=1}^∞ sup_{0<α<α_0} {c_0 h²(α, p n_α(d))}^r {p n_α(d) Δ²}^{-r}.

PROPOSITION 1  If r > 1 and h²(α,n) = (β_α² + c log n), then (5.1) holds.

PROPOSITION 2  If r ≥ 2 and h(α,n) is given in Lemma 2, then (5.1) holds.
Proof of Proposition 2  t_α^{1/n_α(d)} → b, so an application of L'Hospital's rule shows that, for η < 1, h²(α, p n_α(d)) is bounded by a constant multiple of p^η n_α(d), uniformly in α; the bound above is then summable, which proves (5.1).  □
Estimation by Sample Means
In the one-sided rule of Section 2, we have implicitly shown that if

    T_n - μ = T_n* - μ + o(n^{-1/2}) = n^{-1} Σ_{i=1}^n {ψ(X_i) - Eψ(X_1)} + o(n^{-1/2})

and M is the stopping rule for T_n*, then M and N will typically have the same asymptotic distributions. One might conjecture then that M - N →_P 0. If M - N →_P 0, h(α,n) = β_α ≈ (-2 log α)^{1/2}, σ_n ≡ 1, and we neglect overshoot, we obtain that M - N is of the same order as N(T_N - T_N*), the last following from the fact that N^{1/2}/β_α has a limit. If one defines T_n to be the Huber M-estimate (the solution of Σ_{i=1}^n ψ*(X_i - T_n) = 0), one can show by Taylor expansions that n(T_n - T_n*) →_L P, where P is nondegenerate. This shows that M - N →_P 0 is probably not true, although E(M - N) → 0 may hold.
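Since the Huber M-estimate enters only as an example here, a minimal computational sketch may help; the tuning constant k = 1.345 and the tolerance are conventional illustrative choices, not values from the paper.

```python
import numpy as np

def huber_m_estimate(x, k=1.345, tol=1e-8, max_iter=100):
    """Solve sum_i psi*(x_i - T) = 0 for the Huber psi*(u) = max(-k, min(k, u))
    by iteratively reweighted least squares (a standard computation, sketched here)."""
    x = np.asarray(x, dtype=float)
    T = np.median(x)                              # robust starting value
    for _ in range(max_iter):
        u = x - T
        w = np.ones_like(u)
        big = np.abs(u) > k
        w[big] = k / np.abs(u[big])               # Huber weights psi*(u)/u
        T_new = np.sum(w * x) / np.sum(w)
        if abs(T_new - T) < tol:
            break
        T = T_new
    return T
```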
Monte-Carlo

A simulation experiment was performed (k = 3, α = .10, 200 iterations) when the sampling distribution for π_i was F^{(j)}(x - μ_i), where

    F^{(1)}(x) = Φ(x)  (the standard normal),
    F^{(2)}(x) = .95Φ(x) + .05Φ(x/3),
    F^{(3)}(x) = .85Φ(x) + .15Φ(x/3).

In all cases, μ_3 - μ_2 = Δ (= .25, .375, or .50), and the two cases μ_2 - μ_1 = 0 (slippage configuration) and μ_2 - μ_1 = Δ (equally spaced means) were considered. The statistics T_{ni}, σ_{ni}² were either the sample mean and variance or a 10% trimmed mean and its Winsorized variance estimate, with h(α,n) = (β_α² + (1.05) log n)^{1/2} and m = 10 the initial number of observations, the total number of observations M taken being tabulated. "Ratio to RSS" denotes the ratio of M to the expected total number of observations taken by the Robbins, Sobel and Starr procedure.
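A condensed sketch of one cell of this experiment is given below; it reuses the `eliminate` routine from the introduction, takes β_α = (-2 log α)^{1/2} as a stand-in for the paper's constant, and writes out the trimming and Winsorizing helpers directly since no particular implementation is specified. Substituting these helpers for the mean and variance inside `eliminate` gives the trimmed-mean rows of the tables.

```python
import numpy as np

def trimmed_mean(x, prop=0.10):
    """10% trimmed mean: drop the lowest and highest 10% of the sorted sample."""
    x = np.sort(np.asarray(x, dtype=float)); g = int(prop * len(x))
    return x[g:len(x) - g].mean()

def winsorized_variance(x, prop=0.10):
    """Variance of the 10% Winsorized sample, the companion scale estimate."""
    x = np.sort(np.asarray(x, dtype=float)); g = int(prop * len(x))
    w = np.concatenate([np.repeat(x[g], g), x[g:len(x) - g], np.repeat(x[-g - 1], g)])
    return w.var(ddof=1)

def run_cell(delta=0.50, eps=0.05, alpha=0.10, reps=200, seed=0):
    """Slippage configuration mu = (0, 0, delta) with F(x) = (1-eps)Phi(x) + eps*Phi(x/3).
    Returns the estimated Pr{CS} and the average stopping stage of the elimination rule."""
    rng = np.random.default_rng(seed)
    beta2 = -2.0 * np.log(alpha)                                   # stand-in for beta_alpha^2
    h = lambda a, n: np.sqrt(beta2 + 1.05 * np.log(n))
    sampler = lambda m: (lambda r: m + r.standard_normal() * (3.0 if r.random() < eps else 1.0))
    samplers = [sampler(m) for m in (0.0, 0.0, delta)]
    correct, stages = 0, []
    for _ in range(reps):
        winner, n = eliminate(samplers, h, alpha, delta, rng=rng)  # from the earlier sketch
        correct += (winner == 2)
        stages.append(n)
    return correct / reps, float(np.mean(stages))
```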
The results are clear. First, the elimination rule attains (and usually exceeds) its predetermined Pr{CS} = 1 - α = .90. Second, the rule is much superior to the nonelimination rule, the superiority tending to be more pronounced as Δ decreases. Third, the trimmed mean is more robust than the sample mean to heavy tails (this is the situation F^{(3)}); the rules using the trimmed mean tend to take less than 75% of the number of observations needed by the sample mean.
TABLE 1

The Monte-Carlo experiment for Φ(x).

                          Pr(CS)   # of Observations   Total # of           Ratio to RSS
                                   on π_3              Observations = M
Sample Mean
  μ_2-μ_1=0, Δ=.50          .95          20                  53                 .89
           =Δ               .96          17                  46                 .77
           =0, Δ=.375       .95          33                  84                 .79
           =Δ               .96          27                  67                 .63
           =0, Δ=.25        .92          61                 156                 .63
           =Δ               .98          54                 130                 .54
10% Trimmed Mean
  μ_2-μ_1=0, Δ=.50          .96          21                  57                 .85
           =Δ               .97          18                  47                 .71
           =0, Δ=.375       .93          31                  82                 .69
           =Δ               .95          28                  70                 .59
           =0, Δ=.25        .94          63                 164                 .62
           =Δ               .96          57                 136                 .51
TABLE 2

The Monte-Carlo experiment for .95Φ(x) + .05Φ(x/3).

                          Pr(CS)   # of Observations   Total # of           Ratio to RSS
                                   on π_3              Observations = M
Sample Mean
  μ_2-μ_1=0, Δ=.50          .98          25                  64                 .77
           =Δ               .95          23                  58                 .69
           =0, Δ=.375       .92          39                 102                 .69
           =Δ               .95          36                  88                 .59
           =0, Δ=.25        .92          75                 193                 .58
           =Δ               .94          71                 171                 .51
10% Trimmed Mean
  μ_2-μ_1=0, Δ=.50          .92          24                  64                 .77
           =Δ               .99          19                  50                 .59
           =0, Δ=.375       .95          34                  89                 .68
           =Δ               .96          30                  76                 .57
           =0, Δ=.25        .94          76                 189                 .64
           =Δ               .94          56                 138                 .47
TABLE 3

The Monte-Carlo experiment for .85Φ(x) + .15Φ(x/3).

                          Pr(CS)   # of Observations   Total # of           Ratio to RSS
                                   on π_3              Observations = M
Sample Mean
  μ_2-μ_1=0, Δ=.50          .94          33                  87                 .66
           =Δ               .96          31                  77                 .59
           =0, Δ=.375       .91          60                 155                 .66
           =Δ               .94          50                 123                 .53
           =0, Δ=.25        .91         115                 297                 .57
           =Δ               .98         105                 252                 .48
10% Trimmed Mean
  μ_2-μ_1=0, Δ=.50          .97          27                  70                 .78
           =Δ               .97          25                  63                 .70
           =0, Δ=.375       .94          46                 118                 .74
           =Δ               .95          36                  89                 .56
           =0, Δ=.25        .88          89                 233                 .65
           =Δ               .95          70                 171                 .47
ACKNOWLEDGEMENT

Mr. Vernon Chinchilli programmed the Monte-Carlo simulation and his help is gratefully acknowledged.
REFERENCES

ANSCOMBE, F. J. (1952). Large sample theory of sequential estimation. Proc. Camb. Phil. Soc. 48, 600-617.

BECHHOFER, R. E., KIEFER, J. and SOBEL, M. (1968). Sequential Identification and Ranking Procedures. University of Chicago Press.

CARROLL, R. J. (1975). An asymptotic representation for M-estimators and linear functions of order statistics. Institute of Statistics Mimeo Series #1006, University of North Carolina at Chapel Hill.

CARROLL, R. J. (1976). On the Serfling-Wackerly alternate approach. Institute of Statistics Mimeo Series #1052, University of North Carolina at Chapel Hill.

CARROLL, R. J. (1976). On Swanepoel's elimination procedures. Institute of Statistics Mimeo Series #1074, University of North Carolina at Chapel Hill.

GEERTSEMA, J. C. (1972). Nonparametric sequential procedures for selecting the best of k populations. J. Am. Statist. Assoc. 67, 614-616.

ROBBINS, H., SOBEL, M. and STARR, N. (1968). A sequential procedure for selecting the largest of k means. Ann. Math. Statist. 39, 88-92.

SWANEPOEL, J. W. H. (1976). Nonparametric elimination selection procedures based on robust estimators. Unpublished manuscript.

SWANEPOEL, J. W. H. and GEERTSEMA, J. C. (1976). Sequential procedures with elimination for selecting the best of k normal populations. To appear in S. Afr. Statist. J.

WACKERLY, D. D. (1975). An alternative sequential approach to the problem of selecting the best of k populations. University of Florida Technical Report #91.