*
This research was supported in part by the u.s. Army Research
Office, Durham, under Contract No. DAHC04-71-C-0042.
1 Ph.D. dissertation written under the direction of Professor
N.L. Johnson
Reproduction in whole or in part is permitted
for any purpose of the
United States Government
SEQUENTIAL TESTS OF COMPOSITE HVPOTHESES*,l
I'Janak Chand
Department of Statistias
University of North Carolina at Chapel, HiU
Institute of Statistics Mimeo Series No. 848
October, 1972
NANAK CHAND. Sequential Tests of Composite Hypotheses.
direction of Professor NORMAN L. JOm~.::iCN.)
(Under the
Chapter II contains the derivation and properties of sequuntial
tests of composite hypotheses for families of distributions satisfying
certain conditions whIch are discussed in section 2.1.
among
k
~
3
Discrimination
composite hypotheses is considered in Chapter III.
Some of the cases in which none of the conditions of Chapter II is
satisfied have been studied in Chapter
IV.
Procedures in sections
4.2, 4.4, 4.8 and 4.9 give tests whose power functions as well as A.S·.N.
functions are independent of the nuisance parameters.
An outline of a
general procedure for obtaining such a test is contained in Chapter I.
Sequential procedures for analysis of variance under random and mixed
models have been developed in Chapter V.
We consider a composite
hypothesis in the general random model and derive a sequential test
through the principle of invariance.
Unlike the current spquential test
procedures for this problem, most of our tests reduce to the form in
which their properties can be studied by standard methods.
Exact
propertie~
of tests of composite hypotheses in some special
cases have been obtained in Chapter VI.
may be chosen
80
Tests of sections 6.2 and 6.3
as to achieve a desired strength
(0,0).
Procedures
to choose one of the two and one of the three hypotheses about the mean
of a rectangular distribution with
unkn~~n
ted in sections 6.4 and 6.5 respectively.
varlance have been construc-
TABLE OF
CONTENTS
PAGE
CHAPTER
v
ACKN~ILEDG&~TS
I
INTRODUCTION
1.1
Outline
1
1.2 A Sequential Analog of Stein's Two Stage Test
1.3 An Outline of a General Procedure for Deriving
a Sequential Test with Power Function Independent of the Nuisance Parameters
1.4 A Property of Maxi~um Likelihood Estimators of
Location and Scale Parameters
1.5 A Summary of the Results in Chapters II-VI
II
2
4
6
7
SEQUENTIAL TEST'S'
2.1 Determining the Decision
2.2 Strength of the Test '8'
Boundari~s
2.3 Operating
and Average Sample Number 14
Cha~acteristic
2.4 A Class of
11
Conti~uous Distributions
Function Satisfied (AI)
2.5
2.6
9
"~ose
Density
IS
Laplace Distribution: Inferences on the Location
Parameter When the Scale Parameter is Not Known
16
An Alternative (Approximate) Form of Results 1n 2.5 21
2.7
III
Inferences on the Mean of Inverse Gaussian
Distribution
2.8 . Testing a Parameter of Lognormal Distribution
23
2.9
28
Numerical Results
26
. DISCRIMINATION ~roNG k > 2 HYPOTHESES IN THE PRESENCE
OF NUISANCE PARAMETERS
3.1 Introduction
37
3.2 An Extension of a Sequential Discrimination
Procedu~e
37
3.3 Discriminating Three Hypotheses About a Normal
Mean. the }ariance Being Unknown
44
CllAl'TER
IV
PAGE
sm-IE 'rESTS WITH TUB USUAL DECISION nOmIDARIES
4.1
4.2
Introduction
56
A Test of Randomness with Fower Function and
A.S.N. Function Independent of the Nuisance
Parameter
57
4.3 Testing Scale Parameter of a Gamma Distribution
4.4 Testing the Scale Parameter of Laplace Distribution; the Power Function and A.S.N. Function
of the Test Being Independent of the Location
Parameter
4.5 Testing an Inverse Gaussian Parameter
4.6 Inferences on a Parameter of a Singly Truncated
Normal Distribution
T~ting Equality of Two Poisson Means
4.7
4.8 Inferences on the Shape Parameter of Pareto
DistributJon; the Power Function and A.S.N.
Function of the Test Being Independent of the
Parameter
4.9 Testing the Scale Parameter of Exponential
Distribution; the Power Function and the
Average Sample Number of the Test Being Independent of the Location Parameter
4.10 Numerical Results
Nui.an~e
V
59
60
63
64
66
68
70
72
SEQUEliTIAL TESTS FOR ANALYSIS OF VARIANCE UNDER RANDOM
AND MIXED MODELS
5.1
Introduction
83
5.2 A General Problem
83
5.3 A General Procedure for Deriving an Invariant
S.P.R.T.
5.4 Definition of Sequential x2- Test
5.5 Application of the X2-Test for Testing Hypotheses
About the Ratio of Variances of Two ltorm<ll
Populations
88
5.6 One Way Classification
92
5.7 The Randomizea Block Design
98
5.8 Testing Interaction Hypotheses in a Two-Way
Classification with Balanced Replications
84
86
100
5.9 The Two Stage Nested Design
103
5.10 Ranl10ttdzed Block Design (Tests Under Mixed Models) 104
iii
PAGE
CHAPTER
5.11
~o-Way Cla~stfication with
Replications
VI
(T~sts
Balanced
Under Mixed Models)
lOS
5.12 The Two Stage Nested Design (Tests Under
Mixed Models)
106
5.13 Numerical Evaluation and Comparison
101
SOME EXACT RESuLTS
6.1
6.2
110
Introduction
Testing Location Parameter of Exponen:ia1
Distribution
6.3 Testing a Parameter of Pareto Distribution
6.4 Testing the Mean of the Rectangular
Distribution
6.5 Choosing One of the Three Hypotheses about the
Mean of the Rectangular Distribution with
Unknown Variance
124
BIBLIOGRAPHY
121
iv
110
115
118
AC!C-iO;.JLEDCEMENTS
I am deeply
inc~bted
to Proicssor Nonan L. Johnson, not only
for suggesting tbe problem, but also for bis guidance and inspiration
throughout the course of this investigation. uis illuminating
~OmDents
and suggestions were a constant encouragement; it
invaluable
experience to have had the opportunity to
~lork
h~s
been
a~
under his direction.
I would also like to thank Professor Wassily Heeffding, the
Chairman of my Dissertation Committee, for his helpful suggestions and
also foe his encouragement during my graduate study in Chapel Hill.
thanks are also due to Professors I. M. Chakravarti, Gardea D. Simons,
and R. L. Davis
for their interest and suggestions as well as to the
facutty of the Departments of Statistics sad Mathem3tics for their
teaching.
Tbe financial support provided by the Department of Statistics
during my period of study and work at Chapel Hill is greatly appreciated.
Finally, I wish to thank Mrs. Mary Jeffcoat for har excellent
typing of the fin41
Kitchen and the
tlv~i't"
D'~pa r
manu~cr1pt
~ecretarial
and Mrs. Eloise Walker.
James O.
staff of the Departr~nt of Statistics for
generous h'!lp throughout my graduate study
Lment •
M~.
.~,:'111
t.,'ork 1n this
CHAPTER 1
INTRODUCTION
1.1.
Outline
Sequential t-test procedures ([6],[7],[11],[13],[16],[22],[25],[26],
[27],[32]) to discrim1nate between tvo possible values
the mean
~
Po
of a normal population with unknown variance
expressing the difference
~1
- Po
and
a
2
PI
of
necessitate
in (unknown) standard deviation units
or equivalently allowing the upper bound of the 'second kind of error
probability' to depend upon o.
Neither of these reformulations may be
completely satisfactory.
One solution to the problem is Stein's two stage test (Stein [29];
Hoshman [21]) in which a first stage sample of fixed size nO
to estimate a
2
i. taken
and a second sample of size depending upon this first
stage estimate is then taken, if necessary, to reach the terminal
decision.
A second solution, which may be regarded as a generalization to
Stein's procedure, is due to Baker [5].
procedure in somewhat modified form.
Hall [14] developed the same
A summary of the procedure is given
in section 1.2.
In this dissertation we shall study similar problems in a number of
different circumstances.
An outline of a fairly general procedure to
derive a sequential test with power function independent of nuisance
parameters is contained in section 1.3.
with some specific applications.
Later chapters are concerned
A general result of Antle and Baln [1]
on distribution of maximum likelihood estimators of location and scale
parameters, which we describe in section 1.4, is of some use in these
applications.
Section 1.5 contains a summary of the results of
. Chapters II-VI.
1. 2.
A Sequential Analog of Stein's Tlvo Stage Test
This section summarizes the work of Baker [5] and Hall [14].
Let
Xl' X
2
•••
be independent N(P,02,.
HO : P • Po va.
developed the following procedure:
native hypotheses are
The null and the alter-
HI: P • PI ; 0
>
.
Following Stein [29], a preliminary sample {xl' •• , x
O.
Baker [5]
\
nO
} of fixed
(>1) is taken to estimate 0 2 by s2 • (n _1)-1 ~ (x· _ i ' )2
nO
0
i-I
i
DO
-1 nO .
- '
I Xi) and sampling is then continued, ooe observation
(where x • nO
nO
i-I
at a time, until the sequential probability ratio test (S.P.R.T.), in
which s2
replaces 0 2 in the likelihood ratio decides one of the
nO
hypotheses to be true. In Stein's procedure, the size of the second
2
sample depends only upon s~ ,so its distribution depends only upon 0 ;
size
0
0
o
but in the above procedure, the distribution of the sample size depends
2
as well a8 upon a ;
however the power function is independent
as in Stein's procedure.
The usual termination boundaries are
modified so as to give the desired strength
probabilities) •
(a,S)
(bounds on the error
Since it is a conditional S.P.R.T., given
and
a
its behaviour was studied by taking expectations with respect to the random variable
S •
nO
Pr{accepting HOlp}
The following expressions were obtained:
2
~
eo
c(!v)2 t
2
r=O
if
h(ll) >
if
h(~)
{(-21V +
rh(~)(C
-v
+
D»2 -
(-2lV +
Cb(~) + rh(~)(C
+ D»2}.
0;
(1. 2.1)
•
E(nlll.o)
~
c
A(~ -
p)
-(v+g)
I {-D(rh(p)(C + D) +
r-O
i)
2
-(v+2)
+(C + D) (Ch(p) + rb(p)(C + D) + ~) 2
-(v+~
- C«r + l)b(p)(C
II
~l
a2(~)2
2
A(p -
+
D) +
~)
2
•
h(lJ) > 0
-(v+2)
I {Dei -
iU
}t . if
(r
r-O
+ l)h(~)(C + D» 2
-(v+2)
2
v
- (C + D)(2 - Dh(p) - rh(Il)(C + D»
-(v+2)
v
+ C('2 - rh(p)(C + D»
2
}t
(1.2.2)
if h(p) < 0
O' u· (PI + PO)/2, v· DO-l and
where C and D were used in place of In!:! and -10-1- respectively in
a
1-u
the Wald sequential probability ratio test and were deterBdned by the
where h(Il)· 2(u - Il)/A,
A· PI -
Il
following equations:
.
(~)2 I {(} + C + r(C + D»2 - (1 + (r + l)(C + D»2) • a
PO
v
-
v
1 '2 •
('2V) I
r-O
~
~
-v
-v
Hi + D + reC + D»2 - (1 + (r + 1He + D»2)
•
-
S
(1.2.3)
Hall [14] obtained expres8ions (1.2.1) and (1.2.2) but instead of
3
using C and D siven by (1.2.3). he used (folloWing Paulson [23]),
-2
C w ~(a v_I) - (-In a)ll + (-In a)/v + o(~)].
-2
D•
v_I) • (-In 8)[1 + (-In B)/v + o(~»)
i(e
(1.2.4)
Since the decision boundaries given by (1.2.4) correspond to conservative bounds (-In a, -In S) of Wald, in case
0
were known, a test
using C and D given by (J.• 2.4) will require, on the average, more observations
to reach a decision than the one using the decision boundaries
given by (1.2.3).
1.3.
An autUns of cz General Pz-oceduzte fo1' Deriving a Sequential.
PtNe1' Function Independent of thii Nuisance Parameters
~th
~e8t
Let Xl' X2 , ••• be independent with a cODlDOn probability density
g(x, 81 , 82).
We consider testing the hypothesiB 8 0 .: 8 1 - 810 against
HI 8 • 8
; 82 (possibly vector valued) is unknown.
(1.3.1)
1
11
...
Let &2 be an estimator of 82 obta!ned from a preliMinary sample
(independent of Xl' X2 , ••• ) of fixed size nO· If , j (8 2) ·
function (probability function in case of discrete Xj,S)
A
...
...
I(Xj
811 , 82)/g(Xj , 810 , 82).
the nth stage is:
,
j.
I, 2••• ,
then the test procedure at
(1.3.2)
B<
n
i.e., stop sampling and accept HO if
stop sa1Dpling and accept 81 if
A
n >'j(8 2) < I,
j-l
n
A
n y (8 ) > A,
j
2
j-1
otherwise continue sampling;
where A and B are constants chosen suitably to achieve the desired
4
strength
(a,B).
A
Asstlmin~ the
random variable
...
.
, 6 2)/g(X,
°
A
A
Z(6 }" In Y(e 2) (where
2
Y(8 ) •
2
".
g(X, 8 11
10 , 6 2» satisfies conditions (1 - IV) of Wald [31]
Lemma 2, for each fixed 8 ; then the equation
2
Ab
(1. 3, 3)
(y(8 » g(x, 8 a )dx. 1
2
r 2
...
has a unique nonzero solution h· b(e l , 92 , 8 ) for E Z(8 ) •
2
2
A
J
~
f
z(e 2)g(x, 81 , 8 2) dx
We assume:
O.
For E Z(6 2) • 0, the only solution is
.
b· O.
A
h depends upon (8 , 8 ) only through a function t · t(8 ,
2
2
2
(h • h(Ol' t) 8ay) where t is distributed independently of 62,
(Cl)
Let
'e
be the cumulative distribution function of
t.
Using Weld
I
tbeory and making usual assumptions about negligible boundary overlap, we
obtain,
L(e l , A, B) • Pr {SO 1s accepted IOI} •
f Le9 l ,
A, Ilt)dF
81
(t)
where L(a l , A, lIt) • Pr {HOis accepted lel,t}
h(8 l ,t)
b(8 1 ,t)
b(el,t)
• (A
l)/(A
- ·B
), for E
and L(e , A, Bit) • log A/Clog A + Ilog BI),
l
To achieve strength (a,B), A and I are to be obtained from
z(e2)~O
(1.3.4)
(1.3.5)
L(a lO ' A, B) • 1 - a ;
If, in addition, we assume:
2
A
(C2)
A
A
E Z(6 ) and E Z (8 ) depend upon (8 , 8 ) only through
1
1
2 2
...
t • t(8 , 8 ) ; tben the average sample number (A.S.N.) function of the
2 2
test will slso be independent of 8 and will be obtained from the formula
2
E (nI8 1) •
J
E (nIO l , t) dFOl(t)
5
A, Bit) + log A(l - L(6 , A, Bit»
1
.. -log-a
._. 1.(01'
-_._----:.
.- -----=----
E Z(6 )
1
(1.3.6)
.
.g(x , 6 , e ) possesses
j
1 2
j-l
the monotone likelihood ratio property in 6 , then by using Theorem 2
1
of Ghosh [12], we conclude that the above procedure also provides a soluIf for each fixed
1.4.
n
n
n,
6 and for each
2
A Property of Maximum Likelihood Estimators of Location and
ScaZe PaNmeters
.. , Xo have joint probability density function of the
~
t
xn - t
... xo
) . " g(
, •• ,
), t, n
e,
n"
-0
-
E:
a • {(t,n) : --
<
t
< -,
n
> O},
then Antle and Ba1n [1) have shown
that, under very general conditions, the maximum likelihood estimators
and
"'....
n have the property that
distributed independently of
n/n,
t
(t -
and
A
#It
and (t - t)/n
t)/n
...
t
are each
n.
We note that it is not essential that
Xl' •• , x be independent.
n
Thus it follows that under the procedure of section 1.3, the power
function as well as the A.S.N. function will be independent of the nuisance
parameter
6 2 when
t
i8 of the form
.
t l (a /e 2) in case 02
2
is the scale
A
parameter and also when
t
1s of the form
t 2(6 2 - 62)
in case
8
2 is
the location parameter.
The results apply to other types of parameters if they are
approp-
riately related to location and scale parameters under some change of
variable.
This includes, in partJ.cular, Gamma, Pareto and Weibull distri-
butions.
6
1. S.
A SwrrnarrJ of the Results -in Chapterc 11- vr
First we
cons1d~T
the problem (1.3.1).
Tn most situations, it is not
practicable to express (1.3.5) explicitly when (1.3.3) does not give an
explicit expression for
h.
It is easier to determine termination boun-
daries to achieve a desired strength
(a,a)
for families satisfying
: certain assumptions which will be discussed in Chapter II.
The numerical
results of that Chapter Show that in such cases we can achieve an approximate strength
(a,a> even when we use the usual termination boundaries.
Later chapters show that similar results hold rather more generally.
Chapter III is devoted to methods of choosing one of three (or more)
hypotheses when nuisance parameters are present.
In section 3.2, we
restrict ourselves to families of distributions satisfying the special
conditione similar to those discussed in Chapter II. in which case it is
possible to determine values of the decision constmlts so as to satisfy the
requirements on the operating characteristics while discriminating among
one of the
k hypotheses about a parameter.
In section 3.3, Sobel and
Wald's [28] method for discriminating among three possible values of a
normal mean has been extended to the case when the variance is unknown.
Some cases where none of the assumptions discussed in Chapter II are
satisfied are studied in Chapter IV.
been used to make terminal decisions.
The usual decision boundaries have
This includes cases of random samp-
ling fmmGamma, Laplace, inverse Gaussian. Pareto and exponential
butions.
distri-
Procedures in sections 4.2, 4.4. 4.8 and 4.9 give teats whose
power functions as well as A.S.N. functions are independent of nuisance
parameters.
Sequential procedures for analysis of variance under random and mixed
models are studied in Chapter V.
We consider a composite hypothesis in the
1
general random model and develop a sequential test through the principle
of invariance.
Then we apply this test to solve
c~rtajn
standard
prob-
lema in the one-way classificQtion, the randomized block design, the twoway classification with balanced replications and the two-stage nested
design.
We discuss similar procedures for three specific cases of the
mixed model.
The properties of the test have been compared with those of
the sequential F-test and the fixed sample F-test in section 5.13.
Exact results for some special cases have been obtained in Chapter VI.
Tests in 8ections 6.2 and 6.3 may be chosen to achieve a predetermined
strength
(a,O).
We study procedures for choosing one of two and one of
three hypotheses about the mean of a rectangular distribution in sections
6.3 and 6.4 respectively;
an estimator of the nuisance parameter though
haVing been obtained from a first
stage sample.
8
CHAPTER 11
SEQUENTIAL TEST'S'
2.1.
Determining the Decision Boundaries
Lest
Xl' X , ••• be independent and identically distributed with
2
probability density function (probability function in case of discrete
r.v.)
~(x,
81 , 8 ),
2
The null and the alternative hypotheses are
HO : 81 • 810 ys. HI: 81 • all' 82 being unknown •
...
An estimator 8
of 8 is obtained from a preliminary sample of
2
2
fixed size nO'
Let
Yj (8 ) · g(x , 811 , 62)/g(xj , 810 , 8 2)
2
j
for j • I, 2, •••
and let Zj(02)· In Yj (8 2)
Assuming for each
n
~
1 ,
.
function t • t(6 2 , 8 » where f 1s independent of nand Xj'B,
2
(2.1.1);
j • I, 2, .• , n
we define'S' to be the Sequential Probability Ratio Test to
discriminate B , i • 0, 1 vith termination boundaries A and 8, witt
replaced by
..
8
i
in the probability ratio.
A and 8 may be determinc~
2
fro. the desired error probabilities a and S, by the equations:
e2
Dfl(t)
J
(£1 (t)
e
dF(t)
! 0]
Cf1(t)
f
e
-Cfl(t)
J
+
e
dF(t) •
a
dF(t).
a
r
[f (t) > 0]
1
dF(t)
J
+
[f (t) ~ 0)
l
e
-Df (t)
1
(2.1.2)
[fl(t) > OJ
or
-Df (t) dF(t) • at
• Cf (t)
e
.J e-Df1(t) (eCf.1(t) e
(fl(t)
>
Cf 1 (t)
- •
-Df. 1 (t)
1)
dr(t)
1
- e
1
•a
0]
where C • 1n A.
D· -1n Band F i8 the cumulative distribution
82•
Assuming values of C and D satisfying (2.1.2) or (2.1.3) exist and
function of
t. assumed independent of
are positive, the decision rule of
S at the ntb stage 1. given by:
D
SO
if
stop s.-pling and accept Hl
if
stop-sampling and accept
n yj
j-1
10
(8 ) ~ B •
2
n
...
n Yj(8 2)
j-l
otherwise continue samplina
,.
~ At
2.2 Strength of the Test S
Theor~m
2.1 Lot Xl'
satisfying (AI).
x2 '
••• be i.i.d. with p.d.f.(p.f.) g(x, 81 , 8 2)
Let the sequential test S be defined by (2.1.4). If
A and B are determined by (2.1.2), then S
for testing the
has strength at lesst
(<<,S)
co~osite
hypothesis KO : 81 • 910 vs. Hl : 81 •
8 2 unknown. If A and B are determined by (2.1.3) and if the excess
a
..
cumulative sum t Zj(8 2) over the boundaries i. neglected, then S
j-l
strength (a,S) for testing the SaMe hypothesis.
811 ;
of the
has
.
.
For givea (8 2, 8 ), we consider the conditional S.P.R.T. S(8 2 , 8 2 )
2
for testing 8 • 8
against 8 • 8
with termination boundaries
1
10
1
11
I and i where
Proof:
Using conservative bounds of Wald on error probabilities, we have
..
.
Pr{S(8 2 , '2) accepts 81 1810' 82 , 82} < :1'
..
A
Pr{S(8 , 8 ) accepts H 18 , 8 , 8 }
o 11 2 2
2 2
<
'
B
(2.2.2)
Also, if we denote the associated nominal risks of the conditional
..
test S(9 2 , 92)
by «(t)
and
S(t),
11
we have, making the usual
assumption of negligible boundary overlap,
...
Hlle lO •
Pr{S(02' 02) acccpta
...
..
C2 ' 02} • aCt) ,
.
(2.2.3)
Pr{S(8 2 , 8 2) accepts Hoi ell' 82 , 82} • B(t)
where G(t)
and B(t)
satisfy
1 • (1 - 8(t»/a(t),
i · 8(t)/(1 -
aCt»~
(2.2.4)
exactly the same decision rule as S.
Thus we have,
...
...
...
I Prts(8 2, 8 Z) accepts Bi l8 l , 82 , 82} • E Pr{S accepts Bil8l, 82,
• Pr{S accepts Bi
bution of t.
l8 1 ,
62},
expectations being taken over the distri(2.2.5)
Let A and B be determined from (Z.1.2)
(i)
From (2.2.2) and (2.2.5), we have,
Dfl (t)
dF(t)
e
I
+
[fl(t) ~ OJ
J
[£1 (t) ~
cJF(t)
(il)
0.1
J
e
-Cfl(t)
dF(t)
•
Q
by (2.1.2)
(fl(t) > 0]
Cf1 (t)
e
8t
+
J
-D£l (t)
dF(t) • S by (2.1.2)
e
(f (t) > 0)
1
Let A and B be determined from (2.1.3)
From (2.2.3) and (2.2.5), we have
12
(2.2.6)
Also by (2.2.1) and (2.2.4), we have
-Df1 (t)
1 - e
aCt)· Cf (t)
-Df (t) •
1
1
e
- e
-Df
e
S(t)·
1
Cf (t)
1
e
Cf (t)
(t)
1
(e
- 1)
-Df (t)
i
1
- •
1f
fl(t)
>
0
whereas
Cf (t)
a(t)·
•
Cf
e
1
1
(t)
- 1
_.
-Df
Cfl(t)
•
8( t ) ·
Cf
e
1
1
(t)
•
-Df 1(t)
(1 - e
)
(t)
-Df (t)
- e
(2.2.7)
i
1
Thus. we have. by (2.2.6)
Pr{S accepts Bl'810. 82} •
-Df1(t)
efl (t)
. -.
e
I
[f (t)
1
Cf (c)
1
~
- 1
-Df (t)dP(t) +
1
0]
1 - e
I
e
Cf (c)
1
[fl(t) > 0]
by (2.1.3)
Similarly (2.1.3), (2.2.6) and (2.2.7)
Pr{S accepts HOl8l1, 8 2 } • 8
Thus S
haa strength
(a. S).
13
1~ly
- •
-Df (t)dl(t) •
1
Q
g(x, 'J
, (j2)
ll
Z(El ) · 1n (
Z
g x, I.,. 10' e 2)
Asswni.ng
sati~fif:::>
conditions (I - IV)
of Wald [31] Lemma 2. the equation
(
j
_'.2(6")
~
~
r(x. e l , 02)dx
I
c
(2.3.1)
)
has
3
unique nonzero solution
h
h(a , 6 ),
~
2
1
Neglecting excess of the
cumulative sum over the boundaries,
h
• (A - l)~A
• In A/(ln
and A.S.N. function of
h
A-
h
- B ) for
In
U)
0(02' 02)
for E Z(e ) • 0
2
is
Ii
E
(nle l , v2 ' u2 )
f
I
J
c
{(X -
E Z(02) ; 0 ,
h
1)1n B + (1 -
h
Chf1(t)
e
wChf (t)
1
e
- e
- 1
-Dhi;;...·-(t-)-.ridF(t)
1
h
B )In A}/(A - B )E
1 - e
Chf1(t)
+
e
Z(02)
for
-Dhf (t)
1
-
-Ohf (t)dFCt)
1
t'
(2.3.2)
(
E
(nle l , a2) •
for
E Z( 0")
.;.
I
J
Chf (t)
-Dh! 1 (t)
1
)rl(l) .+ C(l - e
)fl(t)
o(l - e
·
--dF(t)
. _ - - Chi (t)
-Ohi 1 (t)
l
(e
-
; 0,
14
e
)£
(2.3.3)
2.4. A ctaes of ContinuoUD
Sa.tillfislI (AlJ
~8tributions
~ity
Nhoso
FUnction
We consider the class of Subbotin distributions with p.d.f.
2
P6(xIG, .) •
[~ r<l
" 1
6
+ l»)-l.-lexp[_llx ; el];
at.
> 0;
" beins
known. ([19], Vol. 2. p. 33)
It i8 de8ired to di8criainate between Hi
•
8· 8t ,
beiDl unknown.
i · 0, 1 ;
2
eI
j.
I
1
In the notation of section 2.1., Zj(+) • ~{ x -
2
; Zj<+)/; Zj<+) •
j-1
j-1
0
_
2
<+I.)~ - t i .
Thus (Al) i8 satisfied with
t·
t/••
2
f1(t) -
t a.
By Antle and Bain (11, t
is distributed independently of 8
and ••
The decision rule of the test i.
2
11
.. 0) : -D
<
~t
-8
X
1
2
x
-e i
{I:.t.;.:o.J
- I~
}< C
+ I
•
j-1
(H1)
where C and Dare
given by (2.1.2) or (2.1.3)
In the notation of section (2.3), L(e •• ) and E (nle •• ) are given by
the right 8ides of (2.3.2) and (2.3.3) respectively where h ... h(e, .) is
deterained froa
6
-1 -1
[2y1"
r(l + 1)].
I
•
--
222
4
'6
X
81
- 8,
eOI - ! - • 1 - I --.-exp[~.
]dx. 1
hlx -
4
15
hlX -
11
2. S.
LapZace Distribution: In!cY'el1acB on the Location Paztametel' When
the Saale Parameter is Not Knoum
Xl' X2 , •• , are independent with a COMmOn p.d.f.
(2.S.1)
HO : 81 - 810 against H1 : 81 -ell'
The hypothesis to be tested is
8 > O.
2
The density belongs to the class of section 2.4 with
e•
• • 8 2/2
81 ,
a.
2,
,.
If we take
e2
as the maximum likelihood estimator of 8 2 given by
no
.} ' t h en
{ xl' •• , xn..
a fi rst stage samp 1 cs
u
median of thia sample.
Let
Ix t'
:2.
v , .or
i-l
no • 2m + 1, m being a
I
-
xI, ..x b e i ug
t he
DO
positive integer •
,•
n
n,._
t Zj(8 2)1 t Zj(e 2) - 8 /8 • f (t) • t.
1
2 2
j-l
j-l
It
(8 ) : -D < t {
o
j-l
The decision rule of the test i.
IXj,.- e10 I 8
2
The probability density function of
t
is independent of
8
1
and 8 2
and is given by Karst and Po1owy [20] aa
C - (m
1
+ 1)/(m - i + 1) and a i and b 1 are obtained fro.
2~'m
m(e) • - J : .
n
1
r
2 0 r-O.1(m - r)r(1 - 8)2m-r n (m + 1 + j)[1 - (m - j)8/m + j + 1]
~·O
16
•
(2.5.2)
aD -Ct
Oe p(t)dt
(2.1.2) implies
c
a,
faD -Dt
O.
p(t)dt. 6
f
After integration and simplification, this gives
(2.5.3)
(2.1.3) implies
- I - e-Dt
Ct
_DtP(t)dt •
e
-.
Io
GO
-Dt
•
a,
[
e
Ct
Ct
(e
- e
- 1)
-Dt- p
(t)dt • B
Expanding the der.ominator in series, applying Fubini's Theorem, integrating and simplifying,
2m
i
By
T~eorem
fa)
I ain I {(nO + C + r(C + D»
i-I Or-O
minimum strength
-i
i
- (nO + (r + l)(C + D»- } +
2.1, decision boundaries given by (2.5.3) will ensure a
{a,B> while those determined from (2.5.4) will give a
test with approximate strength
(a,B), approximation being due to the
excess over the boundaries.
11
The O[)(Jpating Chr;nYI:.:·terisUa
After integration and simp11fication, (2.3.1) implies:
(1)
2
for
h
--
-v
w 2h _ Ie
<
61 ~ 610 ~ 611 < • ,
h
u-v
811 - 810
- w + 1 - 2h _ Ie
• 0, where u •
e
2
, w· e hu
2(1
w
h
- 2h + Ie
h
-u+v
w(2h + 1)e
W1th
h
v) _ w +
'
(2.5.5)
h
2h + Ie
-(u-v). 0
w
-2hv
h w -v
(2h - 1)(2h + l)e
+ 2h _ Ie - 1 • 0
(2.5.6)
(2.5.7)
determined from (2.5.5) - (2.5.7) and with C and D given by
(2.5.3) or (2.5.4), we have, by (2.3.2), in notation of section 2.3, the
following value of the operatins characteristic:
m
t b e
i-I i
-n tC
0 i}dt
Expanding the denominator in series, applying Fubini's Theorem twice,
integrating and simplifying, we have,
18
for
(i)
+
m
t biD
aD
t
OpO
i-I
(ii)
h > 0,
{(nOC i + hr(C + til)
for
(iii)
h· 0,
for
h
<
-1
- (nOC
E Zea2) • 0,
L(Ol' 9 )
2
-1
= C/(C +
}
(2.5.8)
D)
(2.5.9)
0,
~
(no - her + 1)(C + D»-i} +
i=1
(DOC i - her + 1)(C + D»
-1
biDO ; {(nOC - hD - hr(C + D»-1 reO
i
(2.5.10)
}
Ths
A.S.N. Functi.on
To obtain A.S.N. from (2.3.3),
~e~l-lI:.IIX -
i + hC + hr(C + D»
W~
need E
Z(6 ) •
2
610 1 - !x - 611 !lexpl-!x - 611/e2ldx. Aftet integtation
and simplification, we obtain
E Z(a ) • -u + eu-v - e-v where u and v are as in ( 2.5.5)
2
(2.5.11)
E
z(a. 2)
E Z(02)
• u+ e
n
-u+v
u - 2v
+ e
v
(2.5.12)
- e
-u+v
- e
-v
19
(2.S.13)
~ (2023)-1~~IX - °10 '
We also need E Z2(02)
Ix -
el11>2exp{-lx - e1IJ62}dx.
-
After integration and simplification, we
have
for
(i)
e1 -< e10 -< e11
<
_w
<
~
•
2
-y
U-y
2
E Z (6 2) • -2(u + 2)e - 2(u - 2)e
+ u
for
(11)
_QO
e10
<
~
2
e11
-u+v
E Z (6 2) • -2(u + 2)e
for
(11i)
~
<
0
<
10 -
0
<
QI)
v
<
E Z (8 2) • -2(u + 2)e
e1
2
- 2(u - 2)e + u
1-
-u+v
2
~
(2.5.14)
e11
<
(2.5.15)
QI)
+ (u - 2v)
2
+ 8 - 2(u + 2)e
-v
(2.5.16)
2
E Z(02) and
E Z (9 2) given by (2.5.5) - (2.5.7). (2.5.11) (2.5.13) and (2.5.14) - (2.5.16) respectively and with C and D determined
With h.
(2.5.~).
from (2.5.3) or
(2.3.3) gives the following expression for the
A. S.N. function:
E
(nle • e )
1
and
E (n
D
2
~l'
as in (2.5.2).
1
~
Cht
-Dht
Dt(! - e
) + etC! - e
)p(t)dt
(eCht _ e-Dht)E Z(9 )
0:).
c:
E Z (6 )
2
1~t2p(t)dt
2for
for
E
z(a
2
)
E Z(02) • O. where
~ 0
pet)
is
o
Expanding the denominator in series (for the case
E Z(a ) ; 0), applying Fubini's Theorem. integrating and rearranging the
2
terms. we obtain:
(1)
for
h > 0.
20
(C + D) (nO + Ch +r(C + D)h) -(1+1) - C(n + (C + D) (r + 1)}V-U+1)} .".
O
(2.5.17)
(11)
for
h· 0
(2.5.18)
(lli)
for
h < O.
+ C(n - hr(C + D»-(1+1) + D(n - her + l)(C + D»-(1+1)} +
O
O
(2.5.19)
2.6.
An Atternative
(APPro3:imatl~)
Form of the Results in 2.5.
The numerical evaluation of operating characteristic
sample
nu~er
1, 2•••• 2m;
of the test S
and average
in section 2.5. amounts to finding
b l , i • 1, 2. '"
ai' i •
m from the relation (2.5.2), then solving
(2.5.3) or (2.5.4) for C and D and summing the infinite series on the right
sides of (2.5.8). (2.5.10) and (2.5.17) - (2.5.19). The first two steps are
21
laborious for large
nO' even for
approximate distribution of
Antle [4] have found that
t
2not
00 > 5.
°..
= 2/° 2 in plQce of (2.5.2). Baln and
2
is approximately a X2n _1(0) variable,
o
the approximation is good for small nO
result for large
nO.
It is convenient to use an
and approaches the asymptotic
Thus
(2n -l)/2 (2n -l)/2 - 1
2n - 1
O
O
O
t
exp(-nOt»/r(
2
);
pet) .. (nO
t >
0 (2.6.1)
After integration, solutionof (2.1.2) may now be written as:
D - na(B
-2/(2n -1)
0
- 1)
(2.6.2)
(2.1.3) along with (2.6.1) gives after expanding the denominator in
series, applying Fubini's Theorem, integrating and simplifying,
GO
n(200-1)/2 t {(C + n + r(C + D»
o
raO
-(2n -1)/2
0
-
0
.
-(2n -1)/2
(nO + (r + 1)(C + D»
0
} • a ,
(2nO-l)/2 •
-(2nO-1)/2
nO
I {(D + nO + r(C + D»
1'8'0
(nO
+ (r + l)(C + D»
With
h
-(2n -1)/2
0
} • B
(2.6.3)
given by (2.5.5) - (2.5.7), we have, from (2.3.2) and (2.6.1),
after expanding suitably, applying Fub1ni's Theorem, integrating and
simplifying,
(1)
for
h
> 0,
(2n -l)/2 GO
-(2n -l)/2
o
O
L(e l , 92) • nO
t {CoO + r(C + D)h)
r-O
-(2n -1) /2
(nO + Cb + r(C + D)h)
0
},
22
(2.6.4)
(ii)
(iii)
L(Ol' 6 2) = C/(C + D)
for h· 0.
for
h
~
(2.6.5)
0,
(2nO-l)/2 •
L(e , 6 ) - n
l
2
o
t {en
r-O
(nO - (r + l)(C + D)h)
0
- Dh -
r(C + D)h)
-(2n -1)/2
0
-
-(2n -1)/2
0
}
(2.6.6)
From (2.3.3) and (2.6.1), we obtain, by similar
ope~at1ons,
for h > 0'(2n -1)/2
o
nO
(2nO - 1) •
E (n16 1 , 62) 2E Z(e )
t {(C + D) (nO + Ch +
2
r-O
-(2n +1)/2
-(2n +1)/2
r(C + D)h)
0
- D(n + r(C + D)h)
0
O
(i)
C(n + (r + l)(C + D)
O
-(2n +1)/2
0
}
(2.6.7)
(2.6.8)
(i1i)
for h
0,
(2nO-1)/2
nO
(2nO - 1) •
-(2n +1)/2
E (nlo 1 , 82) •
2E Z(O )
t {C(nO -rh(C + D»
0
+
2
r-O
-(2n +1)/2
-(2n +1)/2
D(n - (r + 1)(C + D)h)
0
- (C + D) (no - Dh -r(C + D)h)
0
}
o
(2.6.9)
2.7.
<
Inferenaes on the Mt1an of Inverse GausiJian Distribution
Let Xl' X2 , •••
be independent with a common p.d.f.
I
3
px(xl~, A) • (2wx A)
-'2
2
2
exp-(x - ~) /2A~ x;
x
>
0;
A> 0 •
(2.7.1)
1I > 0
The null and the alternative hypotheses are
23
(2.7.2)
{xIt
The first stage B.~~le
.• ,
X
...
determines A, the maximum
}
DO
...
-1 °O( -1
likelihood estimator of A as A
... n
t x - x--1 ) where
o 1-1
(>./0 ) x~ _1(0) (Tweedie [30]).
0
and ).
is distributed as
A-1{ 1 _
~(_1 + ....!)} (-! - -1:.).
2'll1
lI O
III
1
Zj(A).
o
Thus (AI) is satisfied with
).10
ft(t) -
t •
...
All.
The p.d.f. of
is independent of .).
t
°0- 1 °0- 3 _Dot
n -- n -1
p(t).(~)2 t 2 e 2/ r ( 0 2 ) ;
and P
and 1s given by
t>O
(2.7.3)
The decision rule of the teat for testing p . Po
Do
.
2
h(x - PO)
--"-2--- }dx,
1-1
o
exp[).
(2.7.4)
-l
[
(2dx 3) 2exp-(2Ax) -lc (x - 1-1)
2
+
h
{ij + P
h(x _ P )2
· 2 2
0
1-1
I_
1-1 1
after some rearrangements. the right side can be written as
1
-1 1
is
...
t Zj().) < C : (HI)
j-1
-D <
E ehZ ().) _
against 1J. PI
1
h
1
h
h
2)] [
- ;- (2"
+2
- 2)
PI
lOp
1-1 0
3
0
(21r6 x)
2
-2exp{-(x-8
2
1
2
) 12x8 6 }dx.
l 2
1
h-2 ,
&1· (:2 + :2 - ~)
where
1
h
P
PI
Po
62 '" A.
Thus, after simplification (2.3.1)
implies
h •
(!... +.!... - £)/(1- _ L)
PI
Since b
Po
lJ
PI
(2.7.5)
Po
is a decreasing function of
P, power function of the
conditional test with decision rule (2.7.4) increases as
P
increases,
#0
for each fixed
>". Thus (2.7.4) also provides a solution to the problem
24
(2.7.2) •
Sinr:e
E
EX·
ZeAl •
>..
2
3·2
EX"'}l).
).I,
-1 II 1
{.c.(_
1
+ -)
-
2}.10).l1
+
lJ
.
"le obt.1.'n,
;
aft(~t
simplifying,
1
1
1}(- - - )
(2.7.6)
"'O}.ll '
(2.7.7)
We may obtain (2.7.3) by replacing
of (2.6.1).
nO
by
n /2
O
on the right side
Thus by (2.6.2) and (2.6.3),
(2.7.8);
(n -1) /2
(n /2)
OD
t {(C + (n /2) + r(C + D»
0
o
raO
°
(n -1)/2
0
I {(C + (nO/2) + r(C + D})
raO
The form of
of
for
L(p) •
+
(i1i)
-
° }• s
(2.7.9)
implies that operating
no-I
----...
characte~istie
is independent
From section 2.6:
for
for
n
2 t {(20 + rh(C + D»
r-O
(2°)
Ch + rh(C
(ii)
0
h > 0,
n
n0
h
-en -1)/2
-en -1)/2
so, we denote it by L(lJ).
>..;
(1)
(2
.}. a ,
CD
«no/2) + (r + l)(C + D»
-
-en -1)/2
o
O/2)
O·
o
«n /2) + (r + l)(C + D»
(n
-en -1) 12
+ D»
-en -1)/2
° }
h ... 0,
h
<
L(p) '"'
-en -1)/2
0
(2.7.10)
C/(C + D)
0,
25
(2.7.11)
n -1
nO
o
(-)
2
~1lO
nO
E {(..r= 0 2
nO
(-- - (r + l)h(C + D»
-en -1)/2
0
2
for
+ D) (20 + Ch + rh(C + D»
-en +1)/2
0
}
h=O,
,).
0
(2.7.13)
(2.7.14)
h < 0,
(n
E (nl
-en +1) /2
n
co
-(nO+1)/2
nO
- C(~ + (r + l)h(C + D»
nO
-D(]: + rh(C + D»
(iii)
-
(2.7.12)
t {(C
r"'O
for
0
}
1)
(il)
-1)/2
(n
- Dh - rh(e + D»-
O/2)
1
nO-2-
(nO - 1)
2E
lJ, "
zo.)
r:O
-en +1)/2
nO
D(-- (r + 1)h(C + D»
2
0
nO
~ {C(-- - rh(C + D»
co
-en +1)/2
2
0
nO
- (C + D)(-- Dh - rh(C + D»
2
+
-en +1)/2
0
}
(2.7.15)
2.8.
Testing a Parameter of LognomaZ Dis tribution
The p.d.f. of X is
1
g(x,
l;
t
0)
-2
= (21')
(ox)
-1
-1 -2
2
exp-2 (1 (In x - t ) ;
x > 0
(2.8.1)
The problem is to test
HO : ~ ~
to vs.
H1 : t; ~
t 1;
r; 1 > to;
The maximum likelihood estimator a
sample of size
nO
ls c
26
of
(1
(1
> 0
(2.8.2)
given by the first stage
1 .. 1, 2 , .., nO •
Zj(O) •
-2
0
(log x j -
is satisfied with
,= '0
testing
t1
+ Co
2
)(C 1 - '0)'
"'2
2
The decision rule of the test for
fl(t}. t • 0 /0.
against
'D '1
Thus we easily see that (AI)
is
(2.8.3)
Corresponding to (2.3.1), we have, after integration and simp1ification;
(2.8.4)
Also, it is easily seen that
E Z(o)
a
2
= a -4 (t 1 -
E Z (0)
t
(1;1 - r;O)(~ -
to)
2
+ r;
1 2 0)0- 2
{o
2
+ (C -
(2.8.5) ,
'1 + Co 2
2
(2.8.6)
)}
(2.8.4) shows that the power of the test is an increasing function of
C and is independent of o.
Thus (2.8.3) will also be a test for the
problem (2.8.2).
Since
t
is distributed as
given by (2.7.3).
-1 2
nO X _1(0), the p.d.f. of
no
t
is
Thus the operating characteristic and the A.S.N. are
given by (2.7.10) - (2.7.12) and (2.7.13) - (2.7.15) respectively, with C
and D determined from (2.7.8) or (2.7.9). with
h. E Z(o) and
I Z2(0)
given by (2.8.4), (2.8.5) and (2.8.6) respectively and with L.R.S.'s of
(2.7.10) - (2.7.12) and (2.7.13) - (2.7.15) replaced by
E (nit, a) respectively.
27
L(C) and
2.9.
Nwnel'icaZ Results
TFlbll'!': 2.9.1 - 2.9. 6 f(lllo,,~.
the test S
The notation used is:
of
when adjusted decision boundaries are used, adjustment being
made according to the formula (2,1.2);
the usual
OC(l)'" OC
dec~sion
OC(2)· OC
of the test which uses
boundaries, i.e., C • In (1 - S)/a"
D· -In S/(l - a);
. OC(3) • OC when the second parameter is known.
OC()
and A.S.N.()
in Tables 2.9.1 - 2.9.6 were obtained from the
standard approximate formulae for OC and A.S.N. functions of the Wald
S.P.R.T.
OC(1) - OC(2) in Tables 2.9.1 and 2.9.3 were calculated from
(2.6.4) - (2.6.6) and (2.7.10) - (2.7.12) respectively whereas; A.S.N'(l) A.S.N'(2) of Tables 2.9.2 and 2.9.4 were calculated from (2.6.7) - (2.6.9)
and (2.7.13) - (2.7.15) respectively.
section 2.8;
Using h, E Z
and E Z2
of
OC(l) - OC(2) of Table 2.9.5 and A.S.N'(l) - A.S.N'(2) of
Table 2.6.9 were also obtained from (2.7.10) - (2.7.12) and (2.7.13) (2.7.15) respectively.
Tables 2.9.1, 2.9.3 and 2.9.5 show that with unadjusted boundaries,
we get probabilities of both kinds of error slightly more than the desired
error probabilities.
According to Tables 2.9.2, 2.9.4 and 2.9.6, the
expected sample size in case of unadjusted boundaries is ahrays less than
that in case of adjusted boundaries and even less than that of the case
when the second parameter is known, in most cases.
Thus, depending upon
the practical situation, it may even be better to use the usual decision
boundaries than the adjusted ones.
cases:
The results agree in all the three
Laplace, Inverse Gaussian and Lognormal.
A comparison of Tables
2.9.3 (i) and 2.9.3 (ii) shows that an increased initial sample size tends
to decrease probabilities of both kinds of error in case of usual decision
boundaries.
28
e
e
e
Table
2.~l
Values of Operating Characteristic of S for Testing Location Parameter of
Laplace Distribution (nO • 30, a • B .05)
u • .5
u • 1
v
OC(2)
OC(l)
OCCl)
OC(3)
OC(2)
OK
v
0.7
0.6
0.5
O.~
N
\D
-
0.3
0.2
0.1
0
0.1
0.2
-
0.9926
0.982
0.953
0.869
0.658
0.342
0.131
0.047
0.018
0.0074
0.9893
0.9758
0.9414
0.852
0.646
0.354
0.148
0.0586
0.0242
0.0107
0.9931
0.9818
0.95
0.86
0.649
0.351
0.14
0.05
0.0182
0.0069
0
--0.2
0.4
aC(I)
OC(2)
acO)
0.9889
0.9781
0.953
0.884
0.678
0.322
0.116
0.047
0.0219
0.0111
O.98L~4
0.9709
0.9414
0.866
0.664
0.336
0.114
0.0586
0.0291
0.0156
0.9892
0.9775
0.95
0.875
0.667
0.333
0.125
0.05
1.4
1.2
1
0.8
0.6
0.4
0.2
0.9<}1
0.9804
0.953
0.875
0.664
0.336
0.125
0.047
0.0196
0.009
2.1
1.8
1.5
1.2
0.9
0.6
0.3
0
0.3
0.6
-
OCCl)
OC(2)
OC(3)
v
0.9898
0.9857
0.9722
0.9414
0.862
0.658
0.342
0.138
0.0586
0.0278
0.0143
0.9902
0.9787
0.95
0.87
0.661
0.339
0.13
0.05
0.0213
0.0098
2.8
2.4
2
1.6
1.2
0.8
0.4
0
0.4
0.8
0.9791
0.9 S3
0.879
0.671
0.329
0.121
0.047
0.0209
0.0102
0.9872
0.9738
0.!>414
0.857
0.652
0.348
0.1 113
0.0586
0.0262
0.01?8
0.9915
0.980:
O.gS
0.865
0 ••:) 5 5
0.345
0.135
0.05
0.019'3
0.008:'1
u ... 2
u • 1.5
v
OC()
-
-
O.O22~
0.0108
-
e
v
3.5
3
2.5
2
1.5
1
0.5
0
0.5
-
1
aC(l)
u • 2.5
OC(2)
0.9881
0.9773
0.953
0.888
0.68S
0.315
0.112
0.047
0.0227
0.0119
0.9835
0.9699
0.9414
0.87
0.671
0.329
0.13
0.0586
0.0301
0.0165
e
aC(3)
v
aC(l)
0.9884
0.9767
0.95
0.879
0.674
0.326
0.121
0.05
0.0233
0.0116
4.2
3.6
3
2.4
1.8
1.2
0.6
0
-0.6
1.2
0.9876
0.9766
0.953
0.891
0.692
0.308
0.109
0.047
-
0.0231~
0.0124
u • 3
OC(2)
0.9827
0.9692
0.9414
0.874
0.678
0.32 :2
0.126
0.0586
0.0308
0.0173
OC(3)
O.ge7S
0.975
0.95
0.883
0.68:
0.319
0.117
0.05
0.02lJ
0.0122
w
0
u
u • 3.5
v
4.9
4.2
3.5
2.8
2.1
1.4
0.7
0
0.7
1.4
OC(l)
0.9871
0.9761
0.953
0.894
0.699
0.301
0.106
0.047
0.0239
0.0129
OC(2)
ac()
v
ac el )
0.9822
0.9685
0.9414
0.877
0.685
0.315
0.123
0.0586
0.0315
0.0178
0.9873
0.9754
0.95
0.886
0.688
0.312
0.114
0.05
0.0246
0.0127
5.6
4.8
4
3.2
2.4
1.6
0.8
0
0.8
-1.6
0.9868
0.9757
0.953
0.897
0.707
0.293
0.103
0.047
0.0243
0.0132
-
D
4
OC(2)
O.9R17
0.968
0.9414
0.88
0.692
0.308
0.12
0.0586
0.032
0.0183
OC(3)
0.9869
0.975
0.95
0.883
0.695
0.304
0.111
0.05
0.025
0.0131
-
e
e
5
OC(2)
OC()
v
OC(l)
0.9811
0.9673
0.9q14
0.885
0.706
0.294
0.115
0.0586
0.0327
0.0189
0.9864
0.9743
0,95
0.894
0.71
0.29
0.106
0.05
0.0257
0.0136
14
12
10
0.9852
0.9739
0.953
0.9114
0.781
0.219
0.0886
0.047
0.0261
0.0149
u·
v
OC(!)
7
6
5
4
3
2
1
0
1
-2
0.9862
0.9751
0.953
0.9012
0.772
0.278
0.0988
0.047
0.0249
0.0138
w
8
6
4
2
0
-2
-4
u • 10
OC(2)
0.9797
0.9658
0.9414
0.896
0.763
0.237
0.104
0.0586
0.0342
0.0203
OC()
0.9852
0.9729
0.95
0.9045
0.769
0.231
0.0955
0.05
0.0271
0.0148
~
Average Sample Number of S
Distribution
v
0.7
0.6
0.5
0.4
0.3
0.2
0.1
0
-0.1
0.2
-
u •
ASN(l)
17.5
21.21
27.01
36.26
46.53
46.53
26.26
27.07
21. 21
17.5
Table 2.9.2
for Testing Location Parameter of Laplace
(nO • 30 t
(J •
B • • 05)
.5
ASN(2)
15.98
19.25
24.26
31.72
39.48
39.48
31.72
24.26
19.25
15.98
v
ASH()
16.33
19.71
24.88
32.36
39.64
39.64
32.36
24.88
19.71
16.31
1.4
1.2
1
0.8
0.6
0.4
0.2
0
0.2
0.4
--
ASN(l)
u • 1
ASH (2)
5.39
6.31
7.84
10.43
13.6
13.6"
10.43
7.84
6.31
5.39
4.91
5.72
7.03
9.14
11.55
11.55
9.14
7.03
5.72
4.91
ASN()
5.02
5.86
7.2
9.33
11.6
11.6
9.33
7.2
5.86
5.02
e
e
e
u •
v
2.1
1.8
1.5
1.2
0.9
0.6
0.3
0
-0.3
0.6
loS
ASN(l)
ASN(2)
ASN(3)
2.88
3.28
3.99
5.29
7.04
7.04
5.29
3.99
3.28
2.88
2.63
2.98
3.57
4.64
5.98
5.98
4.64
3.57
2.98
2.63
2.69
3.05
3.66
4.74
6.01
6.01
4.74
3.66
3.05
2.69
u • 2
v
ASN(l)
ASN(2)
ASN(3)
2.8
2.4
2
1.6
1.2
0.8
0.4
1.92
2.13
2.54
3.37
4.58
4.58
3.37
2.54
2.13
1.92
1. 75
1.93
2.28
2.95
3.89
3.89
2.95
2.28
1.93
1.75
1.79
1.98
2.33
3.02
3.91
3.91
3.01
2.33
ASH(l)
ASH(2)
l.SN(3)
1.14
1.22
1.41
1.87
2.69
2.69
1.87
1.41
1.22
-.1.14
1.03
1.1
1.26
1.64
2.29
2.29
1.64
1.26
1.1
1.03
1.06
a
-0.4
-0.8
1. 98
1.79
w
N
v
3.5
3
2.5
2
l.S
1
0.5
-0.5
1
0
2.5
ASN(2)
ASN(3)
1.3
1. IH
1.63
2.12
2.87
2.87
2.12
1.63
1.41
1.3
1.33
1.44
1.67
2.17
2.89
2.89
2.17
1.67
1.44
1.33
..,
U -
ASH(1)
1.43
1.56
1.82
2.42
3.38
3.38
2.42
1.82
1. 56
1.43
4.2
3.6
3
2.4
1.8
1.2
0.6
-
0
0.6
-1.2
u • 3
1.13
1. 29
1.63
2.3
2.3
1.68
1. 29
1.13
1.06
-
-
Table
OC Function of
lIO
lI1
• 2t
OC(2)
0.8
0.9
0.9963
0.9853
0.9534
0.879
0.744
0.562
0.387
0.256
0.171
0.118
0.0837
0.OtH5
0.0466
0.0363
0.0289
0.9936
0.9777
0.9372
0.854
0.72
0.555
0.399
0.28
0.197
0.142
0.1054
0.0804
0.0628
0.0503
0.0411
1.4
1.S
1.6
1.7
1.8
1.9
2
2.1
2.2
2.9.3 (1)
for Testing the Inverse Gaussian Mean
OC(l)
1
w
• 1t
S
II
1.1
1.2
1.3
(,oJ
e
0.9972
0.986
0.95
0.867
0.727
0.556
0.396
0.273
0.187
0.13
0.0919
0.067
0.05
0.0382
0.0299
B • .05)
"'0 • .5 t
loll - 2.S,
II
OC(l)
OC(2)
oe O )
0.1
0.3
0.5
0.7
0.9
1.1
1.3
1.5
1.7
1.9
2.1
2.3
2.5
2.7
2.9
0.9992
0.9534
0.714
0.411
0.239
0.156
0.112
0.0866
0.0707
·0.0599
0.0523
0.0466
0.0423
0.0389
1
0.9984
0.9372
0.693
0.421
0.263
0.181
0.136
0.109
0.0908
0.0785
0.0696
0.0628
0.0576
0.0535
1
0.9996
0.95
0.699
0.419
0.255
0.17
0.123
0.0952
0.0773
0.0651
0.0554
0.05
0.0451
0.0412
nO • 40
OC(3)
(a •
1
nO • 40
"0 • 1.S,
11
w
~
e
e
e
1
1.25
1.5
1.75
2
2.25
2.5
2.75
3
3.25
3.5
3.75
I~
4.25
4.5
"I • 4,
OC(l)
0.9991
0.9911
0.9534
0.846
0.656
0.446
0.289
0.192
0.133
0.0966
0.0733
0.0576
0.0466
0.0387
0.0328
OC(2)
0.9982
0.9857
0.9372
0.821
0.639
0.452
0.31
0.217
0.158
0.1196
0.0937
0.0758
0.0628
0.0533
0.046
nO • 40
OC(3)
0.9995
0.9921
0.95
0.832
0.643
0.451
0.305
0.208
0.146
0.1063
0.0802
0.0625
0.05
0.041
0.0343
Table
1 - 2.5,
nO • 60
OC(2)
OC(3)
1
1
0.9994
0.953
0.7099
0.4134
0.2432
0.159
0.999
0.9414
0.6948
0.4203
0.2604
0.1776
1
0.9996
0.95
0.6987
0.4189
0.2553
0.17
lJO • .5,
lJ
0.1
0.3
0.5
0.7
0.9
1.1
1.3
OC CI)
1J
"0 • 2,
1.1
1.4
1.7
2
2.3
2.6
2.9
3.2
3.5
3.8
4.1
4.4
4.7
5
5.3
5.6
2.9.3 (ii)
OCCl)
0.9987
0.9898
0.9534
0.855
0.677
0.472
0.309
0.204
0.14
0.101
0.0753
0.0583
0.0465
0.0382
0.032
110 • .5,
11 1
• 5,
OC(2)
0.9992
0.9908
0.95
0.81..1
0.664
0.4:5
0.324
0.2?1
0.154
0.111
0.0825
0.0633
0.05
0.01l04
0.0334
\.11 • 2.5,
n 0 • 60
OC(I)
OC(2)
1.5
0.1144
0.0684
0.0719
0.069 7
0.0528
0.047
0.0425
0.1318
0.1042
0.0864
0.0741
0.0652
0.0586
0.0534
2.1
2.3
2.5
2.7
OC(3)
0.9975
0.9839
0.9372
0.829
0.659
0.475
0.329
0.23
0.166
0.124
0.096
0.0757
0.0628
0.0527
0.045
II
1.1
1.9
nO • 40
OC
O)
0.1232
O.0~52
0.0773
0.0651
0.0564
0.05
0.0451
Table
ASH Function of the Test
-1
~
~-lx
w
Vt
e
e
e
0.8
0.9
1
1.1
1.2
1.3
1.4
1.S
1.6
1.7
1.8
1.9
2
2.1
2.2
~ ~-1 )(
\I
1.4
1.7
2
2.3
2.6
2.9
3.2
x ASN(i);
~O • 1,
ASH(l)
16.1
19.45
23.82
28.8
32.87
33.7
30.8
26.3
22
19.48
15.73
13.59
11. 91
10.57
9.48
ASN(2)
14.21
17.01
20.42
23.95
26.41
26.52
24.32
21.13
18.02
15.39
13.26
11.57
10.21
9.11
8.2
S
for Testing Inverse Gaussian Mean (Table Gives
i · 1, 2, 3
\.1 1 •
2
ASN(3)
« • B • .05, Do • 40)
% • 1.5,
~-1x
1
1.25
1.5
1.75
2
2.25
2.5
2.75
llO • 2,
ll1 • 5
ASN(2)
ASN(3)
21.12
26.22
33.09
40.45
18.69
23.01
28.37
33.38
35.41
33.16
28.56
19.21
'23.79
29.44
34.31
35.69
33.19
28.81
4~.S3
for
14.64
17.61
21.2
24.69
26.78
26.56
24.41
21.43
18.46
15.87
13.73
12
10.6
9.lt6
8.52
ASN(l)
42.18
35.85
2.9.4
3
3.25
3.5
4
4.25
4.5
4.75
~-l
A )(
1.&
3.5
3.8
4.1
4.4
4.7
5
5.3
1J 1 •
ASN(l)
ASN(2)
14.33
17.94
22.86
28.04
30.42
28.08
23.42
19.05
15.65
13.1
11.19
9.72
8.57
7.65
6.91
12.69
15.76
19.61
23.00
24.12
22.09
18.71
15.52
12.96
10.99
9.48
8.29
7.35
6.59
5.97
ASN(l)
29.41
24.2320.29
17.31
15.02
13.23
11.8
ASN(2)
23.89
20.02
17
14.64
12.81
11.35
10.16
4
ASN(3)
13.03
16.28
20.35
23.69
24.27
22.12
18.91
15.85
13 • :"~
11. 36
9.82
8.6
7.1'3
6.84
6.19
ASN(3)
24.37
20.59
17.56
15.18
13.29
11. 78
10.55
e
e
Table
Operating Characteristic of the Test S
t
0.2
-0.1
0
0.1
0.2
0.3
0.4
w
oe(l)
0.9841
0.9727
0.953
0.9196
0.964
0.7'17
0.652
OC(2)
0.9783
O.96~4
0.9414
0.90~3
0.846
0.759
0.64
The Table Gives (a2/~) E (nIt, a) of S
0'\
e
2.9.5
for Testing the Lognormal Parameter
'0 • 0,
OC(3)
0.984
0.9716
0.95
0.9134
0.854
0.765
0.643
(a • B • .05. nO • 60)
'1 • 1
l;
0.6
0.7
0.8
0.9
1
1.1
1.2
OC(l)
0.3~8
0.223
0.136
0.0801.10.0"'7
0.0273
0.0159
Table 2.9.6
for Testing the Lognormal Parameter
1:
0,
'1 • 1
0
l;
OC(2)
oe O )
0.36
0.241
0.154
0.C9S7
0.0586
0.0356
0.0217
0.357
0.235
0.146
0.0866
0.05
0.0284
0.016
(0 •
a•
.05, nO
a
40)
iii
NC/'!:1) x
- 0.2
0.1
0
0.1
0.2
0.3
0.4
ASN(l)
4.5
S.15
5.95
6.95
8.15
9.45
10.56
As~t (2)
3.93
4.46
5.11
5.88
6.75
7.65
8.38
ASN(3)
4.07
4.63
5.3
6.09
6.95
7.79
8.43
~a2/1:i)x
0.6
0.7
0.8
0.9
1
1.1
1.2
. ASN(l)
ASN(2)
10.56
9.45
8.15
6.95
5.95
5.15
4.5
8.38
7.65
6.75
5.88
5.11
4.46
3.93
ASN(3)
a.lt3
7.79
6.95
6.09
5.3
4.63
4.07
CHAPTEll
I II
DISCRUIlNATION AMONG k ; 2 HYPO'mESES IN THE PRESENCE
OF NUISANCE
3.1.
p~mTERS
Introduotion
•
Sequential procedures to choose one of the k simple hypotheses
have been extended to the case wben nuisance parameters are present.
Section 3.2 contains an extension of Armitage's method [3] in case the
distribution of the observed random variable satisfies a condition corresponding to condition (AI) of Chapter 11.
Although it was only practicable
to deal with the case of a single nuisance parameter, the theory holds for
multi-dimensional nuisance parameters for which an estimator is available
either by previous experiments or by a first stage sample.
Section 3.3 is
a generalization of Sobel Wald test [28] for choosing one of the three
hypotheses of a normal mean, to the case when the variance is unknown.
shall note that for
test S
k· 2, the test of section 3.2 is identical with the
of Chapter II for testing
HI
against
R2 ;
but for
k· 3, it
is not generally the same as the test of section 3.3 for discriminating
3.2.
An Extension of a Sequential
Let
We
z.;'i$~rimination Procedur-e
Xl' X , ••• be independent and identically distributed with
2
probability c!ensity function (or probability function) g(x, 91 , e2).
The problem is to choose onc of the
k hypotheses
(3.2.1)
The test should be such that
r · I, 2, •• , k
(3.2.2)
(3.2.3)
...
say for some
...
8
2
t · t(6 2 , 8 2 ) where
is an estimator of
8
2
f
is independent of
xj '
~
and
0 •
obtained from a preliminary sample of fixed
size nO'
Assuming
f~s(t) iis positive a.s., let
Bsr(s. r • 1, 2, ••• k;
-B fsr(t)
k
).
I E (e ar 2
y •
r
8 ~
r)
k(k - 1)
constants
satisfy the equations:
(3.2.4)
r • 1, 2, •• , k
s-l
stir
where the expectation is taken over the distribution of
sssumed independent of
...
t · t(a , 6 )
2 2
82,
Writing Bar· In Asr ;
the following procedure S provides a solu-
tion to the problem (3.2.1):
(i)
Accept
n
.l.r'"
Hr : 01 • ~lr"
n Yj (02)
j-1
~
A .I.'
r
r · 1. 2, ••• k - 1 if
t . 1. 2••• , r - 1, r + 1••• , k
38
(iii)
If neither (i) nor (li) holds at stage n, then continue by taking
the (n + l)st observation
(3.2.5)
If the procedure does not terminate at stage n
one of the following
(A1:S )
-1 <
~
n, then at least
relations must hold for every n
n rs"
n Yj (0 2) < Asr '
j-l
r <
8·
~
-n
(3.2.6)
2,3, .. , k
relations in
-
(3.2.6) hold for n • n
k
t Pr{(A )-1
- r,s-1
rs
<
<
irrespective of the previous stages}
n
rs'"
n Yj
j-l
(6 ) < A }
2
sr
r<s
But the event inside the brackets gives the continuation region of the
conditional S.P.R.T. (given
-1
e2)
h
with decision boundaries «Ars ) ,Asr )
whose likelihood ratio is a product of
identical p.d.f.'s. Thus the
n
procedure terminates with probability one. Thus
,.
k
I Pr{H is acceptedlH • e2 , 62} • 1 - Pr{Hr is acceptedlU r , 8 2 ,
8-1
s
r
e2l ,
8~r
(3.2.7)
r • 1, 2, • OJ k
...
Now if we consider the conditional test S(8 2 , 8 2) defined by the
decision rule:
(i)
Accept Dr
n
~
,.
6 • 81r ,
1
r · 1, 2, •• , k - 1 1f
Zsr(8
2 . .• r - 1 , r + 1 •• • , k
j
2) -> Brs frs(t)
2
, 8 ·1 "
j-l
39
(11)
Accept
(iii)
~:
81 • 9lk 1f
Otherwise proceed to next stage of sampling,
(3.2.8)
then the decision rules in (3.2.5) and (3.2.8) are the same.
(3.2.8)
implies
~
~
Pr{S(8 2 , 82 ) accepts HslR r , 6 2 ,
,.
e2 ,
(sr
9 2 }exp -B ar f 2 (t»
~
A
a2}
sr
exp(-Bsr f 2 (t»
~ Pr{S(6 Z' 02) accepts
for
8,
r(s
~
HsIHs '
r) • I, •• , k
(3.2.9)
Suraming over
s, we obtain,
It
,.
t Pr{S<9 , 9 ) accepts
2
2
s llt1
HelH r , 9 2 ,
.rJ.r
~
Thus using (3.2.7), 1 - Pr{S(9 2, 82) accepts HrlHr , 92,
e2l
~
sr
t exp(-B f 2 (t», which implies after tak1ng expectations and using
s-l
sr
s;r
It
(3.2.4),
1 - Prts accepts Hr IB r } -< Yr , r • 1, 2, '"
k;
which gives (3.2.2)
S011f8 Special, Cases
I)
Testing the Mean of a Normal Distribution
Xl' X2 ' •• ,
Br :
~
•
~r
are independent N(p,
0
2
).
' r • 1, 2, --, k , a unspecified.
The requirements on the operating characteristic are
Pr{Hr
i8 acceptedlp • ~ r } -~ 1 - Yr ,
40
r · 1, 2, •• , k.
no
~2 • (nO - 1)-1 I
sample {xl', '"
1
A
_
"" . .
~)o
nO
i~
x' ) ,
nO
rs
Zj (a)· Aer (xj -
x'
(Xl -
j-l
nO
2
sr
)2
is obtained from the first stage
being the mean of this sample.
where
~
• p.
sr
- p
r ·-A rs
f r s (t) • ~21a2 • t , say.
2
(~s + Pr)/2 • ~rs ;
Then
and
Density function of
t
is
tioD, (3.2.4) is simplified to
It
.-1
Y r
+~) 0 ,
s~r
Yr - Y,
B. r
-v /2
2B
t (1
vo
r . I, 2, •• , k.
In the special case
r · 1, •• , It , we may choose, in particular,
= (vO!2)«k
Y
_ 1)
-2/v O
- 1)
for all., r • I, 2, •• , k,
•
,
In this case we also obtain by taking expectations in (3.2.9),
PrIB. 1s acceptedlH r } ~ (l +
D sr -v0 /2
v;-)
• It _Y 1
for all
8
~ r,
s, r • I, •• , k
The test follows the decision rule:
Accept
(1)
n
Art t xj
j-l
(11)
r • I, 2,
~
Accept ~:
.
D
"2-
nAriP ri + OlB r1 ,
U • lIk
• • t
k - 1 if
i · I, 2, •• ,
if
"2_
A
t Xj·~ nAllt~ik - alSt1 ,
1k
1 · I, •• , It - 1
j-l
(111)
r - 1, r + 1, •• , It
Otherwise proceed to (0 + l)st stage.
41
~
r
II)
Testing Location Parameter of Laplace Distribution
are independent with a common p.d.f. (2.S.1) •
Xl' X2 •••
Taking DO· 2m + I, • being a positive integer, then as in section 2.S,
n
A
82 •
0,
I
1-1
IXi
... ,
-
xl/no
--
.
x being the median of this sample.
t
Since z;s(e 2) • {Ixj - e1rl - IXj
rs
...
with £2 (t) • t - 8 2 /8 2 , all r, 8;
of
i8 contained in (2.5.2).
t
8ls 1J/o 2 ,
-
I'
~ s.
(All's) is satisfied
The exact distribution
Using this distribution, we obtain from
(3.2.4), after integration and simplification, in notations of (2.5.2),
1, 2, ,., k.
r -
'l11i8 will determi.ne
tion of
1, 2, ."
B
ar
for any e.J r,
Using the approximate distribu-
given by (2.6.1), the equations satisfied by B (s, r •
.1'
t
8
It,
It
I (1
\-.
~ r)
are
DOW
simplified to
B -(2nO·l)/2
+ nar)
,
.-1
It.
r - 1, 2. ."
'Yr -
If
0
y
for all
.~r
1',
we may take
y
-2/(2no- 1 )
Bsr - GOC(k _ 1)
- 1) independently of
•
and
caso. it is also inaured that,
.
y
Pr{H is accepted H } < It 1
s
r -
for all.,
1',
s
The test follows the decision rule:
(1)
Accept
HI': 81 • ell"
r · I, 2, ."
42
It - 1
if
~
r.
r.
In this
n
•>:
J:::.l
{I x .
Hk~
Accept
(ii)
(iii)
e11 I - I::j
-
J
-
61 ... 0lk
e1 r I } ~
e2Bd '
i... 1, 2, •• , 1'-1, r+ 1 , •• , k
if
Otherwise continue by taking an additional observation.
Ill)
Testing Inverse Gaussian Mean
The probability density function of
HI' :
~
p •
l'
r · 1, 2, •• , k;
,
X 1s given by (2.7.1)
A being unknown.
(All'S) i8
j ... I, 2, .••
,.
satisfied with
f~s(t). t • ~/);
i-I)
-xnO
from a first stage sample,
t
where
is the maximum likelihood estimator of
nO
p.d.f. of
for alII', s, r ; s;
being the rrean of this sample.
A obtained
Using
given in (2.7.3), the equations satisfied by the decision
boundaries are reduced to
k
Yr·
t
2B
-1)/2
O
,
-(n
(1 + nal')
s-l
r · 1, 2, •• , k.
A particular solution
0
s"'r
for the case y l' • y , alII', 1s
Bar'" (uO/2)«k
~
-2/(n -1)
1)
0
- 1), for any
8
~
r.
In this case,
we also have, as above,
Pr{HS is acceptcdlHr} ~ k ~
i
for any s ; 1';
43
S, I' •
I, 2, •• , k.
The test will follow the decision rule:
(1)
Accept Hr :
n
~
~
N
--1 -1
-1"
+ AB l '
t X J1 A ~ nAi
j-1 j 1r 1r
r
-1
-1
= 1,
r
,
r
-1
r
--1
2,
if
k - 1
1· 1, •• , r - 1, r + 1, •• , k where
-1
usr· ( Us + Pr )/2 • Prs and 6sr •
(li)
'.J
-1
~8
II
~r
-
1
,,-
1
-6-rs
•
Accept Hk : P • ~k if
n --1 -1
-1
A
t XjUkiAk1 ~ nb ki - AB ki •
j-l
(ii1)
i - 1, •• , k - 1
Otherwise continue by observing xn+l
IV)
Test1ng a Parameter of Lognormal Distribution
The p.d.f. of
X i8 given by (2.8.1).
Hr : t • t r ,
r -
1, 2, •• , k;
not known.
0
Let a be the maximum likelihood estimator of
stage sample of size
Density function of
(A~8) is satisfied with
nO'
t
obtained from a first
0
• t • oA2/0 2.
f raCt)
2
i8 a. in (2.7.3) and the equations
decision boundaries are the same as in case III).
the test is the same as in case 1) with ~
and
detel~ning
the
The decision rule of
Xj replaced by
t
and
In xj (j • 1, 2, ••• ) respectively.
3.3.
Disoriminati.ng 7'hNe lIypotheaes About a Normal Mean, the
Variance Btling Unknown
X _ N(e,
0
2);
(e,
0
2) is unknown.
The problem 1s to discriminate
among three mutually exclusive and exhaustive hypotheses:
B2 : a 1
~
e~
a
2 ,
H3 :
e>
sample (xl' •• , X } gives
nO
A
a
a2 ;
2
--
<
a1 <
• (nO - 1)-
44
8
2
< ~.
Hl :
e
< 8
1 '
The preliminary
1 nO
2
t (x - x ) •
i-I i
nO
Our
"2
conditional tests for given a
will be based upon the method developed
by Sobel and \laId f2R]
~l1d
the method of Chapter II will he modified to
determine the decision constants.
e1
i)
< a 1 < &2
5- eJ
< 82
ti)
84
<;
(3.3.1)
A wrong decision is defined as acceptance of H2 or H for e sal'
3
acceptance of H for 61 < e < 8 2 • acceptance of Hl or "3 for
3
. 8 2 ~ 8 ~ 8 3 J acceptance of HI for 8 < e < 84 and acceptance of Hl
3
or H2 for 8 ~ 84 , The test is to be such that
i) Pr{Wrong decisionle ~ ell
5- Yl
1i) Pr{Wrong decisionla 1 <
e
<
84} 5-
i11) P·r{ Wrong decision Ie ~ e4} ~ y3 •
Y2
Let
Rl
be the S.P.R.T. for testing He
Xno
8 2 ' based upon
1
'
+ '
nO 1
x
X
+ ' •••
oo 2
.
: 8 • 91 against He : e •
2
with termination boundar1~~
A1 and Bl and with a replaced by a where Al'>l), B «1) are to bG
l
suitably determined. R follows the decision rule:
l
n
"2
(H ) : ~ ln Bl
81
+ nal
<:
"2
< j:lx j
In Al
~
+
Da l : (H
82
"
R1 Ca, a) of He against
1
X +1' ... with termination
n
For given (a, a). the conditional S.P.R.T.
based upon the observations
boundaries
i
nO
J
o
"2
"2
(where 1n Al • (In Al)O' 2 • In iiI • (In Bl)o2)'
a
follows exactly the same decision rule as
as the S.P.R.T. for testing
termination boundaries
)
He : e •
3
Rl •
a
Similarly we define R2
8 3 against
He :
4
8 • 8 4 with
A2 and B2 where. for the present A t B2
2
satisfy
45
(3.3.2)
R2(O, a) is defined analogously
~~th
boundaries A2 ' B2 " As in [28],
R
i8 defined as:
oJl
1
accepts 83
accepts 9
3
accepts e
accepts 81
accepts 8 2
e2
accepts
accepts Hl
accepts B2
accepts H3
4
al
accepts 8 1 implies &2 accepts
R2 accepts 9 4 implies III accepts 9 2 •
A necessary and sufficient condition for (3.3.3) to hold is that
-2
A2
-2
A2
(1
a
a
a
Jl
e
r
0
is well defined if
1n A1 + a l ~
1.e.
~ ~ A
r
1n A2
+ a 2 and
r
In B1 + a l ~
A
ln B2
and
3
(3.3.3)
9
+ a2
and Bl ~ 1 2 which is implied by (3.3.2).
2
OC Functions
,.
18.
Let L(Ba
(a, a»
Il
i
j
denote the probability of accepting
ej
in
A
e 1s the true mean, (i,
IL (0, 0)
i
when
(2, 4).
Writing
2(a
hi •
i
j) •
(1, 1), (1, 2), (2, 3),
- 8)
A
;
and so L(\
j
19,
1, 2 ;
i •
- a» •
Iti (0,
(1, 1), (2, 3).
; (i, j) •
-
Let L(Bi 1e, 1(0, a}) denote the probability of accepting Hi ;
A
i • 1, 2, 3 ;
when
e is the true mean and R(a,
a)
is the procedure
...
used and let L(Rtle, ll) • E L(Hila, R(a, a»;
LeR a
j
Ie,
at) • E L(B e
j
te,
Rt(~' a»; (1,
46
j) •
i.
I, 2, 3 ;
(1, 1), (1, 2), (2, 3),
(2, 4).
fe,
L("O
4
L( H2Io• R) • 1 - L(Ila
R2) and
1e,
i
2h (C
t
{I +
and for
E [{ 1 +
r=O
R ) =
t +
ree i +
nO _ 1
h
t
Ia,
D
t
»
t
i
n -1
_.L.
2
i!o}
nO - 1
"0- 1
-~
}
J = (9 ij) 1
' say;
< 0,
_"0- 1
2h t (r + l)(C t + Di ) ~
{I nO - 1
} ] .. (sij)2 , say;
Thus
(i, j ) .. (1, 1), (2, 3).
L(Rlle, R) • (sll)l for h l > O'
L(H
3
!e,
• CI '(C1 + D1 ) for hI· 0
R) • 1 - (s23)1 for h 2 > 0
.. D / (D + C )
2
2
2
and
c
RI ) - L("O Ie, R2).
4
and simplifying we h~ve for
+ D)h
2r(C
Ql)
ej
R!), L(u3Io, R)
1
1
Integrating over the density of
L(H
Ie,
L(H1Io. R) • L("a
By (3.3.1), we have,
for
h .. 0
2
L(H2Ie, It) • (9 23 )1 - (s11)1
for
h
• (a 23 )1 - (s11)2
for
hI < O. b2 > 0
• (8 23 )2 - (8 11)1
for
hI > O. h 2 < 0
• (8 23 )2 - (8 11 )2
for
hI < 0, h2 < 0
expressions for
L("2Ie, R) for
l
> 0, h
2
> 0
hI· 0 will be obtained from this for-
mula by replacing (811) 1 by
C1
, and fur
C
C1 + D1
2
(8 23 )1 by C + D ' i • I, 2.
2
2
47
h 2 • 0 by replacing
/tfonotonicitJi l'r'opel't:ios of Probah1'.UtiaD of COTTect Decision
the nbove defin1tiooR, probahility of a correct dp.cision (to be
By
L(eIR») equals
denoted by
e
Ie,
L(lla
Ia,
1
for
R )
1
R ) - L(U Io, R ) for
1
2
e4
1 - L(B a Ie, Ill) for
< e < 6 ' L("e Ie, R ) for a ~ 8 ; where,
4
4
2
1
4
at the points of discontinuity, L(aIR) is defined to be the smaller of
61
<
8
<
2
,
1 - L(11
61
6)
If
by replacing R
RiCa. 0), i
i
'"
is obtained from this definition
1, 2;
c
f~r
then since
...
(0, 0),
L(B
by
0»
L(eIR(~,
two limiting values.
a3 Ie,
each fixed
.
0»
R (0, a) 18 a normal mean S.P.R.T.; L(H Ie, Ri(a,
and
e1
1
'"
1l (a,
are continuous in 8 with continuous first and second
a»
2
derivatives and are monotonically decreasing for all
point of inflexion in the intervals
respectively.
L(eIR)
.
8
1
...
<
e
8
2
and
...
(0, 0),
Thus for each fixed
<
e
8 < 8
3
L(eIR(o, 0»
satisfies properties (1) - (v) of [28) (p.S08).
with a single
<
6
4
and therefore
In this chapter,
we shall refer to them as (1) - (v).
Choice of A.#
B.;
i. = z.# ;1 to SatisfY
11Requi1"etntmts on the OC Functions
Since
L(eIR)
satisfies (1) - (v);
Ai' Bi ' 1 • 1, 2 : are to be
V1 • 1-L(H& la l ,R1}, Y2 • L(B a le 2 ,RI ) + L(B a la 2 ,R2),
1
1 _
4
V2 • L(H e /e 3 ,RI ) + L("8 le 3 ,R2),
Y3 • 1 - L(Ha 1° 4 , R2). Since
1 4 4
chosen such that
h1 (8 1) • 1, h l (02) ~ -1, h (S3) & -2(8 3 - al)/~ • -p , say (p > 0),
l
h 2(8 2) • p , h 2(8 3) • 1. h2 (6 4) • -1; we have after integration and
n -1
simplification,
n -l
o
O
. ..
2(C + t'(C + D
2(r + IHe l + DI ) I
l
I
{l +
n - 1 -}
)
Vl • t [{I +
n - 1
}
reO
0
o
» -,.-
-r-
(3.3.4),
48
•
+ t [{I +
r-O
(3.3.7)
We may write (3.3.4) - (3.3.7) as Vl· tl(C l , D1), V2· t 4(C 1 D1) +
t (p, C , D ), Y2 = t (C , D ) + t;(p, C , D ), Y3· t (C , D ). Thus
4 2 2
2 2
1
l
2
l 2 2
for
C1 • C 2 •
'12 • t l +
ti,
C, D1 • D2 • D; we obtain Yl
all
t
l ,
Y2· t 4 + t 2 '
'13· t 4 •
We note that the equations '11 .. t l and Y3" t 4 are the same as
equations (26) and (27) of Baker [5]. Solutions for these equations in
the special case C· D for different values of nO
.01. Vl
c
V3 • .OS are given in Table 4 of (5).
and for Vl· V3 •
Since for
C· D,
and t 2 {p, C. D) - ti(p, c. D) • f(p). say; (3.3.4)4
(3.3.7) further simplify to Vl • t 1 , Y2· t 1 + t 2 , Y3· t i • Thus for
t (C, D) • tl(C, D)
'11 - Y2 • Y3 ' we have,
of C(-D)
as given by
right side of
V·
t
1
V· t 1 , Y - t 1 + t 2 • Thus if we use the value
V. t 1 ' we are neglecting the term f(p) on the
+ t 2 • Considering f(p)
49
negligible 1f the result
of neglccting produceR a change of less than 20% in
O2
,° 3
f()"
p
t
.
Le., if
n -1
[{1+lP.~(1+2r1l
n-1·
pO
0
t
0
-----
_{1+5J!.~(1+r2}
n-1
0
2
we see that in all such cases, we may safely use
2
] ~y
a1 < e
~
11 R(a,
a» • L(He
bounds);
Ie p
1
n -1 -
- 1),
Thus since
1
~
-=
R1 (a to» > 1 -
I
...
le
~ 1 - L("e
1
a»
= B2 •
1 - L(lia
...
R
(o.
0»
•
2
1
(by using Wald' s ~oDservative
_D
L(a1la) > 1 - (l
y1
e ~
for
-1
O
+ n2~l)"-
o
1
4
le 3 •
(3.3.9)
a» - L(Ha la 2 ,
je 2 , RICo,
- L(Ha
e1
4
~
R2 (o,
a»
...
R2 (o, 0»(3.3.10)
1
and after
i-I
> 1 ~1
L(e 21Il) > 1 - (l + ..l1L)---r-
taking expectations,
(1 +
DO-l
~l
~)~•
n -1
O
1-Y 1 - Y3"
L(63111{~, a» -=
a1 le 3 , R1(~' a» - L(He 4 le 3 , R2(~'
1- L(B
...
...
0»(3.3.11)
1
0»
?. 1 - L(He 18 2 , R1 (a, a» - L(Ba le 3 , 1l2(a.
> 1 - '; - i ;
1
4
n -1
A
n -1
o
0
which implies, L(a I Il) > 1 - (l + 2C )-r- _ (l + 2D )-r
3
nO-1
nO-1
1 - Y1 - y 3
and since
L(e4IR(~,
0» • 1 -
.
L(e I R) satisfies (iii) - (v), we have.
L(alll) > 1 - VI - Y3
for
81 < 0
L(Ba 164. ~(;,
3
B
(3.3.8).
- 1)
A
A
III
e 4"
L(eIR) satisfies (1)
L(e R) > 1 -
L(62IR(o.
;-:r
D· -In B • +(Y 3 0
we have, after taking expectations,
• 1 - Y1'
<
1 ;
2
i'i""'=l
C • In A ... +(Y1 0
. L(e
p
2
-
t
AI· A2 • A and B1
We shall examine the effect of choosing
n -1
/5 ,
C given by y •
not effecting much the probability of a wrong decision in
satisfying:
a.
at
n-l
o
-----
.
L(eIR»
(1 -
0»
50 .
< e
(3.3.12)
4
> 1 -
i.
which
i~lie8.
n -1
(1
o
+ ~)-'"2 •
n -1
1 - Y3
and since L(OII)
satisfies (ii),
O
I
L( 8 4 R) > 1 - Y3
for
a~
(3.3.13)
84
From (3.3.9), (3.3.12) and (3.3.13) we see that the choice given by
(3.3.8) will satisfy requirements if V2 ~ VI + V3' Also from (3.3.10) ...
(3.3.11), we have, by using Wald's conservative bounds, L(82IR(o, a» >
-
..
1 - B - L(Ha41e2' 1 2(a, a»,
...
c»
L( 83 IR(o,
>
1 -
1
AL{B IS3 ,
eI
...
R1 (a,
L(&2 1R> > 1 - Y3 - t 2 ' L(63IR)
and after taking expectations;
a»
>
I - VI - t~.
ti •
C· D (VI • V3 • V, say), t 2 •
f(p), we have, by
considering f(p) negligible if result of neglecting produces a change of
Since for
le8s than 20% in
(1 - L(aI R»
8· 82 , 83 ;
at
for
f(p) ~
vIS;
using (1) - (v);
L(ellt)
> 1 - Y
e~
for
61
t
e~
9
4
L(eIR) ~ 1 - Y for
;
(3.3.14)
While it is easier to compute C and D by this method, the inequalities
in (3.3.14) may not be close and this choice may result in a larger sample
size.
Bounds fop the A.S.N. Function
Let
R be the procedure which continues to take observations until
,.
R be the procedure which continues to take
1
observatio,ns'until R accepts 0 • Then the following result holds:
2
4
11
accepts
8
-
and let
!II
m1n[E (N; at a)t E (N; 0,
0)] ~
E (N;
a,
a)
~
maxiE (N ; at 0), E (N 2 ; a, 0)]
(3.3.15)
l
where E (N; a, a) iR the A.S.N. function of the procedure Rand
51
~
e,
• a,
E (N;
E (N l ;
e,
E (N ; 0, a) are the A.S.N.
2
functions of the procedures R, R, R and R respectively and,
1
2
neglecting excess of the cumulntive sum over the boundaries, are given by,
E (N;
0),
0),
- .
a) and
D1
(3.3.16)
E (N; 0, a) • --.::;.....~(al - 9)
a
(3.3.17)
(3.3.18)
hi > 0;
(3.3.20)
i · 1, 2
Moreover,
..
1)
a,
E (N;
iv)
E (N; 8, a)
I (N;
Proof:
for 8
a)
a,
0) a
~
..
E (N;
~
04
E (N 2;
e,
iii)
e.
Let E (n; e, al~)
0)
for
a)
e~
81
E (N; 0, a)
for
~
1i)
E (N l ;
0 > 8
E (N;
a,
a)
a,
for
a) •
e
O2
(3.3.21)
<
3
denote the expected sample size in the corres-
..
ponding conditional test for
.
n· N, Nl , N2 , N, N.
Using Wald's first identity, we have, under the above assumption,
52
2
-DI ta
E (N, 6, ala) • h{O _ a ) :
1
..
...
integrating, we get (3.3.l6).
Multiplying by the density of
(3.3.17) follows similarly.
ane
t
(3.3.21)
follows directly from [28].
Left band inequality of (3.3.15) follows from the definition of R
•
and R and the right hand inequality is implied by the relation
Pr{N>
nle,a} •
PriN 1 > n, NZ "
E (N i ,
e
J
nl e.
PriNl >
nle,
a}
Cib i
a}.
+ Pr{N Z > nle,
a} -
-Dihi
t
t
D t(l - e
) + Ci tel - e
)
i
ala) •
Cb t
-D h t
•
(e i i _ e i~')A (0 _ 8 )(1-2
..
i
Rearranging, expanding the denominator in series, applying Fubini's
.
-Ot
Theorem and using E (te
we get (3.3.20) for
)
(0 > 0) ;
hi > O.
For
which is (3.3.19).
Example:
Y3 • Y • .05;
a· 1, nO • 31 ;
then 6·
1 1 1
'8'
p. 7,
8
1 •-
4"
8
2 •
4"
6
1 , c. 3.22; f(p). 1.05580251 x 10- < ~ • • 01. From
7
(3.3.8), C • D • 3.315829502; f(p). 8.095191134 x 10- <~. For a-
From
y -
t
known, we take C • D • In 1
1.564022966 < p
;....1.
(see p.Sll of [28]).
lower bounds on the A.S.N. of
given by
y. t
since
1 ,
a:
k. (In S/y)/ln 1 ; Y •
(3.3.15) gives following upper and
Notation:
UBI· Upper bound using C • D
UB 2 • Upper bound using C • D given by (3.3.8),
53
VB 3 • Upper bound for Sobel Wald test (0- known).
i •
1, 2, 3;
are the corresponding lower bounds.
54
Similarly Li i ,
e
e
Table
\It
\II
e
UB1
LBl
5/16
6/16
1/16
8/16
9/16
10/16
12/16
14/16
16/16
18/16
20/16
"12.16
206.08
137.39
103.04
82.43
68.69
51.52
41.22
34.35
29.44
25.76
377.91
204.74
137.29
103.03
82._3
68.69
51.52
41.22
34.3S
29.&f.4
25.76
UB2
~24.43
212.21
141.48
106.11
84.88
70.74
53.0S
42.44
35.37
30.32
26.53
e
3.3.1
LB2
VB3
391.87
211.02
141.39
106.1
84.88
70.74
53.0S
42.44
35.37
30.32
26.53
376.89
188.44
125.63
94.22
75.38
62.81
47.11
37.69
31.41
26.92
23.56
La3
339.2
187.4
125.59
9lJ.22
75.38
62.81
47.11
37.69
31.41
26.92
23.56
CHAPTER IV
SOME TESTS WITH THE USUAL DECISION BOlmoARlES
4.1.
IntPOdUcticn
In this chapter, we study some of the cases when the probability
density function S(x, 81 , 82) of the randoa variable X does not
satisfy the assugption (Al) of Chapter II. For the problem (1.3.1), the
decision rule of our test is given by (1.3.2) with A· (1 - a)/a ,
B·
aiel -
a). The actual strength of the
t~st
in this case 1s (a'. S')
..
where a' • 1 - Le8 l0 • 82', 8'· L(e ll , 82), L(e l , 8 2) • B L(8 1 , 82 , 82)
where under the assumption of negligible boundary overlap,
if h· 0
• C/(C + D)
with C • In A,
D· -10 Bi
b· heel' 82 , 82) being the unique solution of
(1.3.3).
In the notation of section 1.3 and under the above assumption, the
A.S.N~ function 1. I E (nI8 l ,
E
if
(nlo l ,
..
E Z(8 )
2
82,
82,
82) where
82) • (D(l - e Ch) + e(l - e-Dh»/(eCh - e-Dh)!
z(e 2),
rI 0
2 ..
• -CD/E Z (8 2)
A
(4.1.2)
2
Conditions (el)'and (C2) of Chapter I are satisfied by the tests of
sections 4.2, 4.4, 4.8, and 4.9.
if
E Z(8 ) · 0
copsequently the power functions
as well as the A.S.N. functions of these tests are independent of the
nuisance parameter.
4.2.
A Teat of Randomness Whoue POil.,'er Punction and A.S.N. Function
Q.l.ge Independent of the lluisance Parameter
Let
Xl'
X2, ...
be independent with a common probability density
function
~ >
0;
~ >
0;
x
~
(4.2.1)
0
We consider testing
HO : v • "0
against HI:"· vI;
).
(4.2.2)
being unknown
The hypothesis of randomness of a series of events against the alter-
HI is a particular case of HO' Under the hypothesis of random-
native
ness, the probability of an event in any time interval 6x 1s
independeni:ly of other events for S01lle >. > O.
~
-1
6x,
Thus the intervals
xn be~een successive events are distributed independently with
-1
a probability density function . ~ exp-x/>., x ~ O. Thus (4.2.2) reduces
Xl' '"
to randomness hypothesis for vO· 1•
After determining .
..
~,
first stage sample of s1ze
nO;
Il
...
Zj(~)'
j.
..
~(>')
<
from a
the test procedure at the nth stage of
sequential sampling is:
(HO) : -D
>.
the maximum likelihood estimator of
...
t Zj(~) < C : (HI)'
j-l
where conditionally on
.
~
fixed,
I, 2, ••• are independent and identically distributed as
= In
r"O
r~;
+ (vI -
~O)(ln X -
and where C and D are as in section 4.1.
57
..
In >')
(4.2.3)
After integrating and simplifying, equation
rv O i
v -v
0 1 h
[rv(I)
Tv
].
1
r1"v +
h(v
- v
1
0
dcte~iniug
h
reduces to
(4.2.4)
»
A
~
(~,~)
Since h depends upun
only through the ratio
since by Theorem 1 (p.82 of [4]),
A;
A.
Let
Fv be the distribution
Then the operating characteristic is:
t.
L(v) - IL(VltldFv<t)
(4.2.5)
is given by the right side of (4.1.1),
where L(vlt)
and
is distributed independently of
t
the power function. does not depend upon
function of
t · ~/A
h· h(v, t)
being
as in this section.
V.ins
E (In X) • 1n A
(In A + ~(v»2 where
..
fvO
B Z(~) • 1n fV
and B
2 ..
Z~(A)
1
fv
+ ~·(v)
and
E (In X)2 • ",(1) (v)
+
~(v). ~ In fv· we obtain, after simplifying,
cv
•
i
+ (v O - vl)ln I + (vI - VO)W(v)
2
2
• (In TVO) + (v 1 - vO' (In
1
2
(1/I{v» ) - 2(v 1 - vO' (In
fv O
i 2+
r)
2 (1)
(vI - vOl (t
(v) +
i
rv/ (In "i) +
(4.2.6)
2(v 1 - v O)(ln
rvO •
rvl)'(V)
(4.2.7)
Since E
""
Z(~)
2 ,.,
and E Z (A)
A
depend on
A
(A, A) only through t · AlA ,
.
the A.S.N. function of the test is independent of . A and is obtained from
E (nlv) • JE
where E (nlv. t)
(4.2.8)
(nlv, t)dFv(t)
is given by the right side of (4.1.2),
with
62
A
replaced by
A.
Tables of cumulative probability of
l4].
t
are given by Bain and Antle
For large nO' we may use asymptotic distribution of
58
t
given by
(4.2.9)
4.3.
Testing Scats
ParametCJ~
of a Garrma I>istl'ibution
are independent with a common probability density given
by (4.2.1).
The null and the alternative hypotheses are
HO : A aloa
and HI: A = .\1.
v > 0
(4.3.1)
being unknown
v, the maximum likelihood estimator of v is obtained from a first
stage sample of size
nO'
The decision rule of the test at the nth stage
of sampling is:
n
~
A
I Zj{V) < C : (HI)' where conditionally on v fixed,
j-l
j . I, 2, ••• are independent and identically distributed as
: -D
...
Zj(V).
<
..
..
-1
-1
Z(v) • vln ('\0/).1) + X(>,O - A )
1
and where C and D are as before.
hZ()
v •
E e
.h~
(A O/>'l)
E e
(4.3.2)
For. >'1 > ).0
h(A~l -):~~x
exists for
h
-1
<).
-1
-1
I(AO - ~1 )
and conditions (1 - IV) of Wald [31] Lemma 2 are satisfied for these
values of h.
h reduces to
After integration and simplification, equation
dete~ning
.
().o/Al)hv/v • 1 -
h>.OOl -
Ail)
(4.3.3)
A
With h· h(A, v, v)
i8
L(A. v) - IL(A. v.
where
~)dFv~~)
Fv is the c.d.f. of
of [4]) and where
to..
given by (4.3.3), the operating characteristic
.
v, ",)
(4.3.4)
v (which is independent
of A by Theorem 1
is obtained from the right side of (4.1.1).
S9
Also,
...
-1
-1
+
2i.v'J.().O
E Z(v) • vln CAO/A l ) + (A O -A l )AV ;
2
A"
E Z (v) • (vln (AOI"l»
v(v
2
'"
-1
(4.3.5)
-1
- Al ) In (>'0/A 1 )
+
+ ~)A2(A01 - Ai l )2
(4.3.6)
The A.S.N. function of the test is
E (nlA, v) • JE (nlA, v,
where
E
(nlA, v,~)
..
.
~)dFv(~)
(4.3.1)
ia obtained from the right side of (4.1.2) by
replacing 8 2 by v.
The tables of the cumulative distribution function of v
Bain and Antle [4] are inadequate for the present purpose.
we may, however, use
~totic
distribution of
t·
..
/DOv
given by
For 1arae nO'
which ia given
by
(4.3.8)
4.4.
Testing the Soa7,.tJ PaNZmete:r of Lap1.ace Dist:ributi.on; the POfi)er
Function and A. S.N. Punotion of the Test Being Independent of
the Location Parameter.
Xl' X2' ••• are independent with a common density function
g(x, 8 , 02} liven by (2.5.1). We conslder testing
1
HO : 8 2 •
Let
61
°20
against Hl : 9 2 •
°21 ;
-- < 81 < •
be the median of the first etage sample
{x~. x
(4.4.1)
2•. .,
x~o}.
The test procedure at the nth stage of second sampling 1s
n
..
..
(HO) : -D < t Zj(8 1) < C : (H l ) where Zj<Ol)'s are independent
j-1
and have a p.d.f.
co~on
with
(4.4.2)
A
where
8
1
is fixed.
60
-1
-1
where q. a2 (8 20 - 8,21).
we see that +(h)
After Int,egration and certain other operations t
qh < 1, the conditions of Lemma 2 [31] are
exists for
satisfied for these values of hand +(h). 1 can be written as:
.
...
(8 -8 )
qh(8 -e 1 )
1
l
8
h
20
( 6 ) {e
8
1
2
+
21
and
e
( 20
8
21
82 }. 1 -
qhe
q~2
if
_. <
<
GO
...
_qb(8 -8 )
1 1
8 -8
1 1
h
{qhe
r
e
2
82
+e
2_ 2
} • 1 - qn
...
if
... <
e1 ~ 8 1
<
CD.
...
Thus h · b(O 2' t)
(8 20 )h {qhe- Itl
8
21
Supposing t · Yr
may be obtained from
+
eqhl
tl} •
1 _
(writing t · (8 1 - 8 1 ) /e 2)
q~2
(4.4.3)
(nO + 1)/2 for no odd and r · nO/2 for
nO even, then the exact probability density function of Yr is
where
r
A
(4.4.4)
for aome positive integer m, then writing
l1ity density of
p(t).
t · Ym+l' the exact probab-
Ym+1 is simplified to
nOI22-(m+l)e-(M+l)lt~1_ e-~tl). ~
<mt)
Also, irrespective of whether
00 is an odd integer or not
61
(4.4.5)
mot -
N(O, 1) (asymptotically)
::;lnce by (4.4.3), h depends on
.
.
eel'
(4.4.6)
61) only through
t·
(8 1 - 81)/8 2 and exact and asymptotic distributions of t given by
(4.4.4) - (4.4.6) are independent of 8 ; the power function of the test
1
will be ind~pendent of 81• We may denote the operating characteristic by
Taking nO • 2m + 1, we have, using exact distribution of
L(02)'
L(e ) . DO! 2-(m+1)
2
(•• )2
L(9 , t)e-(m+1) I t I{l _ e
2
[
_CIII
whereas using (4.4.6);
L(9 2 ) •
[~(92'
.'''art •
where L(02' t)
not
-It' m
} dt
t;
(4.4.7)
2.
2
(4.4.8)
t)e-""2dt
equals the right side of (4.1.1) •
After multiplying the right side of (4.4.2) (with X replaced by x)
and its square by
g(x, 8 , 8 2)
1
and integrating, we Obtain (after
simplification) ,
...
820
K Z(8 l ) • 1n -8 +
21
where
t
and q
q(ftl
+ e-
Itl )
(4.4.9),
have been defined.
The right 8ides of (4.4.9) - (4.4.10) and the distributional forma of
show that the A.S.N. function does not depend on 81 • Denoting it by
E (nI8 2), we have, for nO. 2m + 1, in case of exact dictribution of t;
t
n I
(
E (nI0 ) . 0 22- m+l)
2
(al)
[
.
I
-I tl
III
E (n10 , t)e-(m+l)lt (1 - e 2 ) dt(4.4.l1)
2
--
whereas i.n case of asymptotic distribution of
62
t,
(4.4.12)
where E (nI0 2, t)
...
replaced by e •
l
4.5. Testing an
is
8S
on the right side of (4.1.2),
with 8 2
Gaussian Parameter
Inve~8e
The probability density function of
X is given by (2.7.1).
The null and the alternative hypotheses are
BO :,
>.
-1 nO .
A
nO t xi
lJ •
.
,0'0 <, ).1)
(4.5.1)
1s obtained from the preliminary sample
•• , x J.
nO
vs.
·).0
>. •
H1 :
0 < P <..
),1:
,
i-I
...
The distribution of
lJ
is inverse Gaussian with parameters. )./n
Writing Zj (~) • -21ln
~o
+
1\
O
1
(~
-
"0
f-)( 12 + 21 -~);
"1 2p
xj
l1
and
p.
j . I, 2, ••• ,
X
the decision rule of the test at the nth stage is
(Ho)
A
Zj(p)'s
:
n
-D
<
A
t Zj(lJ) < C : (HI)
j-1
are i.i.d.
and conditionally on lJ
-1
-1
Writing d· (AO -),1)'
A
).
h
[_1
q. -d
,"2
lJ
fixed
,we have,
2
...
+(b) • It ehZ(ll) • ().O)2
(2,1r).x3) 2expl _jx<Cx -21d ) + qh(x - 1l)2)}dX
1
~
0 .
After several rea~rangement•• the rigbt side may be written as,
~-
83
1
r·
3
hO'>'I) le 2/). exp.<).l1 - 8 8 ) ~ ~211'X
1 2
.
2
2 i
8 • lJ{(1 + qhAU )/(1 + qh>'SoI)} •
1
Thus +(b)
exists for
for theae values of h\
01 > 0,
62
-t
62)
8
>
2
exp{-(x - 8 1)
• >./(1
2
/20l02.x}dx where
2
•
+ qb>.u ), 8 3 • 1 + qh:\l1lJ •
0 and Lemma 2 [31] 18 applied
The nonzero values of b
63
2
...
n
b(A, lJt p)
will be
obtained from
(4.5.2)
The probability of accepting BO 1.
rn; J- .
L(A, ).I) • ---- ~(~, P.
12'1f~
"
2
-i
0oCt-).I)
).Ilp= t)t exp(2 }dt
(4.5.3)
2). ... t
0
.
.......) 1s as 00 the right side of (4.1.1). Writing t ·
-1
and using EX· p.E X-1 • A + ....
IX 2 ~ 1p 3 + ... 2 and
A
~(~,
where
In (AO/A 1)
I X- 2 • 3). 2 + 3,....-1 + ...-2 • we obtain. after simplifying.
E
Z(~)
Z2(~.)
and
E·
elt(A
2"
- ";:"
fA
~ + d(~
•
d
•
2· 3A
2
{r
1»
(4.5.4)
1
JC
4
3
..L
JL
3
1
1
.
+4'("'4 --;:-+ji) + "4 - "3 +~-"""+2}+
1I...
411...
211
...ll 411
+.J!... + 1 ) +
"2 21i
...
211
In the usual notation.
(nl,..
~ + jell-l of:
1I
2...
1
E
-
(4.5.5)
1,2
4~
1I) • ----
~
I-
/2w1
0
E (nIA,
).I
•
-jexp{-
t, ... )t
no(t - 1I)2
2
(4.5.6)
}dt
2A1.I t
where I (nIA. ~, ... ) is obtained from the right side of (4.1.2) by
.
replaeing 9 2 by p.
4.6.
InfB1'ences on a Pa:rameter of a Singly :rz.unco.ted NormaZ Distribution
Suppose Xl' X2 ' •••
density function
are independent with a common probability
.
nr- -1exp- ( x g(x, .... 0 ) ·(. (y-:-A-h",.o)
o
-- < A < . ;
A
\.l )2, 20 2.•
(4.6.1)
being known.
The null and the alternative hypotheses are
64
HO :
}.I
Po
D
vs •
0 < a " •
HI: l! • PI;
After obtaining a. the maximum likelihood estimator of a. from
an initial sample of size DO' the test procedure may be expressed as
n
A
(H ) : -D < t Zj (0) < C : (H ) where. writing 6 · \1 - 11 '
o
l
0
1
j-l
-
}.I •
A
(P l
+ \10)/2 and r(a)·
A_2
h.a
t(
\10 - A
A
PI - A
)/f(
'"
);
A
A
Zj(a). In rea) +
a '"
a
B ehZ(a) • (r(;»hexp(-hAii~-2)E exp(bll~-2X).
_
(x - p);
j
j -
I, 2, •••
...
After integrating and simplifying, equation determining h - h(l', a. a)
may be expressed as
h In r(~)
+ In (t(q)/t(1I ~ A» + ~262a2~-4 + h6(p-
...
2"'-2
where 'q. q(a) • (p + bAa a
azo(A; }I)/t(H ;~
- A)/a.
and E X2 -
0
2
p);-2 -
Also since EX·
P
0 (4.6.2)
+
+ 1I 2 + (A + lIlazo'! ; )./)/1(11 ; ~
2
(where Zo(p)· (l/l2i')exp(-
T»;
.e obtain, after simplifying;
(4.6.3)
..
and E Z2 (a)
-
'" 2
'"
(In rea»~
+ 6 2'"0- 4 (p 2 + II 2 + 0 2) + ZAa"-2 «II - ~)ln rCa)
-
probabilicy of accepting B and the A.S.N. function are given by
O
t~lI, 0) - E t(lI. a, 0)
(4.6.5)
.
'"
and E (nlll, 0) • B E (nlll, a,
a)
..
respectively where t(lI, o. a)
from the right
...
'"
82 by a
~And
..
and ! (nlll' a, a)
(4.6.6)
..y be determined
side. of (4.1.1) and (4.1.2) respectively by replacing
determined from the first stage sample . {x~, xi, .. , i.X'l .. } loy
the relations:
~
.
(1)
.. -1-'2
- l5) x
a -
DO
"0
-1 n O~-'2
AI"
~.
<ali )(ox: -
(11)
nO t
6) •
DO
XiX
1-1
DO
(4.6.7)
-'
where
x
nO
1s the mean of the sample and
of the lower truncation point.
.
a. (A - ~)/~
i8 the estimate
While no explicit form of the distribution
of a is available, tables of its cumulative distribution function are
given by Francis [9J.
4.7.
Testing Equa'lity of nAJO Poisson Means
X and Y 1s
The distribution of two independent random variables
Poisson with means
01
and
-° 2
respectively.
The hypotheses to be
discriminated are
(4.7.1)
92 and HI: 81 - 8 2 • d > 0
""
t·
Let {Xl' x2 , •• , x } and - {Yl' Y2' " f 'n}
BO : 81
g
na
. .
two populations and let e - (8
samples from the
-1 nO
nO t
i-l
..
-1 DO
and 8 2 - nO t 'i'
Xi
i-I
ia : 8 1 •
82 • i
HI :
asainst
be the first stage
0
1
..
+ 8 )/2 ,where 6'" •
2
1
The likelihood ratio for testing
A
1
81 • 8 + i d ,
82 ~
A
e-
1
id at the nth
staae of aequential sampling is
n
n [(1 + ~)
j-l
'"
x
l(l -
28
4>
Y
j)
(4.1.2)
28
1
4>
~
For 8 > ld, we write Z(a). X 1n (l +~) + Y ,ln (l and fo1lw
28
28
the following decision rule for testing 8 0 8&ain~t HI:
...
1
Accept KO without taking further observations whenever e ~ id.
Otherwise accept H or HI according as the lower or the upper
O
D
A
inequality in -D < t Zj(8) < C 1s first Violated for
j-l
66
n
~
I
(4.7.3)
.
where Zj(O),
j . I, 2, •••
are independent and have a probability
.
function common w{th Z(e).
Since E exp h(t1X+
2Y)
t
(exp-(8 l + 8 2»exP{°le
co
ht l
ht2..
+02e -, ,
+ 8 2»eXp{Ol(l + ~)h +
28
Thus E exp(hZ(8» • 1 uaplies
11 follows that
d h
= (exp-(Ol
E exp(hZ(&»
A
·02(l -~)}.
28
1 + w • (1 + v)h + v(l - v)h
where v·
°2 /° 1
(4.7.4)
d
and v. -;:::- •
28
Also. aince the distribution of
2n08
(where
°
.
...
n (6 + 8 )
O 1
2
is Poisson vith mean
e· (8 1 + 2)/2), it is verified that
...
e
Pr{(d/2e) • v} •
-2D08
(2n 8)
0
(nOd/v)!
na d/v
;
v· nod/r
(r· 0, I, 2, ••• )
Thus the probability of accepting
A
.
L(O, 8 ,
1
E
1'-0
{v-nod/r}
where LeO, 0l' 8 2) •
...
h • h(G,
°2'8 1)
Since. for
independently of
{l
if v
>
(4.7.5)
1
right side of (4.1.1) if v < 1 •
being.s in this section.
°1... • °2, the nonaero solution of (4.7.4) is
°;
neglecting excess over the boundaries, we see that
-2n08
nOd/v
-
pr{BO is acceptedlol • 8 2} • 1 at
p[nOd)+l
{v-nOd/r}
80
h. 1•
e
(2n 8)
0
> 1 - a •
that the first kind of error probability is alvays less than a.
The A.S.N. function of the test is obtained from
67
..
(4.7.6)
where
o if
{right
•
v > 1
8t~e of
A
A
(4.1.2) with 62 replaced by 8 if v<l
and where
...
+ v) + 8 2 In
B Z(e) • 81 In (1
(1 + v)
and I Z2(a) • (8 1 In"
(l -
v)
+ 8 2 In (1 - v»2 + 8l (ln (1 + v»2 +
9 (ln (1 - 'Y»2
2
4.8.
InfBNJ1C6s on the Shape Parameteze of pazteto Distztibution;
the
PorJer Function and A.S.N. Function of ths Test Being
Indspadent of the NuisaMe PaA:rmeteze
Let
Xl' X2 ' •••
function
be independent witb a common probability density
a,
a+1
g(x, a. It) • ak:lt
;
It> 0;
a> 0;
(4.8.1)
:It!. It
The Dull and the alternative hypotheses are
KO : a
·.0
vs.
HI: a • a 1 ;
(aO < a1 ) ;
,.
It. the maximum likelihood estimator of
•
ary sample
o<
k
P < 1
is
,
., x
(Xl"
na
&Do"/y
...
-
A
.0+ 1
a+1
S(x. a. k) • aklx
Pr{X <
Since
k} • [pr{x
Pr{X < 11y
pit
< It} •
unknown.
k is given by the preliminA
as It· ain(x1', ... x' ).
DO
)
1s to be suitably chosen.
(aDO) (pk)
It
:
for
<
Let
k· pk where
The probability density function 01
y ~ pk > O.
...
X? It.
Also.
ilk • y}dF(y), F being the c.d.f. of i.
0. we have
68
kl • I:pr1x < kli •
PrIX <
,
[I
y
~ k)dF(y)
a 8+1
auO 8DO+1
k{ k (ak Ix )dx} (anO(pk)
/y
)dy •
p
•
anO
/(DO+l). I(p, a), say.
'nlus for any preassigned positive e ; we bave,
1/ a n
max l(p, 8 t ) < r 1f p < min{e(n + I)} i O• Thus for small
i-O,1
i-O,l O
values of
~
and for such • choice of Pi we base our test procedure
upon thos. observations {xj }
1 • 0, 1,
j.
Thus
for which
g(x , ai'
j
k) 1s positive for
1, 2, •••
the test procedure 18
11
(HO) : -D < t {In (al/.O) + (a1 - .0)ln (k/xj )) < C
j-1
(H l )
(4.8.2)
Wr1tina Z(i) • In (aI/.0) + (a1 - .0) In (~/x), we have,
•
_ b(a -a)
h(a -a )
B exp(hZ(k» • (a 'a )h(k)
lOB X 0 1 exists for
1 O
1
..
a -· h
h > ./(.0 - a 1) and equals' {(al/ao) (k/k) 1 0 } (1 + b(a l - aO)/8)- •
Lemma 2 (31) is applied for b > a/(a - a ) and nonzero solution for
1
O
h • h(a, k, k) 1a obtained from
..
a -aO
(4.8.3)
h In «a1/ aO)(k/k) 1 ) - In (1 + h(.l - .0)/.) • 0
-
Since
..
b
depends upon
-
(It, k)
..
only through the ratio k/k and
since kIt is distributed independently of k:
the power function of the
test (aiven by (4.8.2» does not depend upon k. We . .y denote the
probability of accepting DO by L(a).
L(a) •
where
&nO [
-..
&noP
L(a, k,
Thus
-(8nO+l)
Lea, t, klk/k • ,),
dy
p
t)
ia as on the riaht side of (4.1.1).
69
(4.8.4)
Using f. 1n X • In k + a- 1 and E (In X)2 • (In k)2 + 2a- 2 +
2& -1 1n k, we obtain, after simplifying,
B Z(i) • In (a1/80) + (a l - .0)10 (k/k) - (a l - aO)/a and E Z2(k) •
(In (&1/80»
2
2'"
2
+ (al - aO) {(In (k/k»
...
-1
2(al - .0)(ln (&l/aO»(ln (k/k) - a
-1'"
-2
- 2& In (k/k) + 2a ) +
).
The forms of B Z(k) and E Z2(k) .nd the distributional form of
...
k/k imply that the A.S.N. function is also independent of k. We may
denote it by
E (nla).
E (nla) • anOp
Where
Thus
......
- (anO+1)
E (nl&, k, k, k/k • y)y
dy
anO [
..
B (nla, k. k)
(4.8.5)
P
A
ia determined from
(4.1.2)
by replaeina
....
92 by k.
4.9. Testing the ScaZe Pa..'t"ametl1l' of E:x:ponential Distnbution; the
PfMeft Function and the AveNge Sample NwrDeft Being Independsnt
of ths Location PaftamstefO
The probability density function of X 1s
-1
.
I(x, 91 , 8 2) • 8 2·exp{-(z - 81 )/8 2} :
z > 81 :
(4.9.1)
'2 > 0
and the hypotheses are
..
81 , the maxtaum likelihood estimator of 81 b.aed upon the initial
~Ulp1e
f '
A
}
i
'
'
'
'
is &iven by 81 • ain(xl , •• , zoo). Let 91 •
rto
81 - P where p > 0 1s to be SUitably chosen. The probability density
{Xl' •••
X
_
function of
81
4 ...
.
1108 21exp{-~ (y
1a
..
..1
Sex, 8 , 82) • 8 2 exp{-(x - 81)/8 2}
1
.
- 8
1 + p)/8 2) ; 1 > 8 1 - P and
"W
for
x> 81 ,
Pr{X~ 'I} • Jr-8 -pPriX ~ '11'1 • y}dF(y).
1
70
P beine the c.d.f. of 81 ,
Since prIx! yly!
prix
ell • 0 • we
have.
~ 81 l - [ Pr{X ~ elle1 - y > 81 ldP(y) 81
~{~821eXP(-(x
\
- 81)/9 2)dx l008;lexp(-no (y - 81 + P)/82)dy •
61
(nO + 1) -1exp(-nOp/6 2) · I(p. 82), aay.
max I(p. 821 ) < & by choosing
Given e
>
O. we can have
t-O,l
p >
-1
max {-nO 82i ln (e(l + nO»}
i-O,l
Thus for sull values of &;
..
we may baae our test upon those {xj
...
for
}
which both S(xj • 91 , 820) and 8(xj , 81 , 921) are positive; j . 1,2, •••
The teat procedure at the ntb stage of sequential sampling is
<Ha> :
-D
-1
<
n
t
..
{In (8 20'8 21 ) + d(xj - 81)}
j-l
-1
where d - (8 20 - 821),
...
distributed as Zj(9 1).
<
....
If, conditionally on 91 fixed,
j - 1, 2, •• , then
..
(4.9.2)
C : (Bl )
Zeal)
1a
h'"
I exp(hZ(8l » • (°20/&21) exp(-dh8 1)E exp(dbJ) exists for dh8 2
h
..
and equals (8 20 /8 21 ) {exp -dh(8 l - el)}/(l - dh9 2). Lemma 2 (31) is
<
1
..
applicable for dh8 2 < 1 aDd h - h(e 2 , 8 , 81) is given by
... 1
h In (8 20 '9 21 ) - In (1 - dhS 2) - dh(Sl - 81) • 0
Since h depends upon
..
(4.9.3)
(8 1 , 91) only through the difference
...
81 - 91 and since the distribution of 8 - 81 does not involve 81 ,
1
the operating characteristic (to be denoted by L(a 2» is independent of
81 and is obtained from
71
where L(8 • e , 81) is siven by the right side of (4.1.1).
2 1
222
Also, since EX· a1 + 82 , EX· (a l + 02) + 82 • we obtain.
after some readjust.ents.
E Z(8 1) • In (&20/821) - d(8 1 - 81) + d8 2 and I Z2(8 1) •
.
2
2-
2-
(In (8 20 /°21) + d8 2) + d (8 1 - 81 - 82) - 2(8 1 - 81)d 1n (8 20 /8 21 ),
The right sides of above two equations and the d1etributiona1 fOrlll
.
of 81 - 81 show that the A.S.N. function is also independent of 91 ,
. This is determined from
-1
~
8l , 81 - 81 • y)exp(-D002 (y + p»dy
(4.9.5)
where E (01°2,
rep lacing
.
e2
81 ,
.
81)
is obtained fro. the right side of (4.1.2) by
by 8 r
4.10. Nunerica't
RBBU'ttS
For studying the properties of the test. presented in this chapter.
we are to evaluate quantities of the form E fey)
where the expectation
1. taken .over the clbtributioD of the random variable
Y.
Whenever the
exact evaluation of such expectatioDs presents analytical difficulties, a
very close approxblation aay be obtained by eva!uatina E fCY) in the fora
t
i 2 fey + tG)p(y + 18)
(4.10.1)
r-YI
(p(y)
1
being the p.d.f. of Y);
and 0" (0 ~
curve p(y)
Y2
t t
e
<
1)
sUlllled at iDtervala of length 1 where
are chosen suitably according to the shape of the
and the truncation points Y1 and Y2 are chosen Buch that
p(y + 10)
yay1
is extremely close to one.
72
If for each fixed y, fey) i.
not very large. it would seem reasonable
th~t
(4.10.1) gives a close fit
to the desired expectation.
Notation: (Table 4.10.7) OC(nO'p) • OC when k is estimated by
......
k • pk, k being the m.l.e. of k given by the initial sample of size DO.
-,.
...
(Table 4.10.9) OC(nO' p) • OC when 81 is estimated by 81 • 81 - p, 81
being the m.1.e. of ~ obtained from the initial sample of eize nO.
-
A.S.N. (nO' p)
of Tables 4.10.8 and 4.10.10 have analogous meanings.
the notation of sections 4.8-4.9, values of l(p, a)
(a O' all • (1, 2) • (620 , 8 21 )
~
ad I(p, 82)
In
for
are as below:
SO
25
15
p
l(p,-o>
l(p,a1>
l(p,aO)
l(p,al )
l(p,aO)
I(p,a l )
l-l/nO
.00714
.00260
.01386
.00499
.02220
.00789
1-1/1.3n
O
.00903
.00416
.01761
.00806
.02837
.01288
~
15
25
40
p
l(p,a 20)
l(p,8
2/n
O
.00330
1.5/n
O
.00544
)
l(p,a 20)
I(p,a 21)
l(p,a 20)
l(p,a 21)
.00897
.00520
.01415
.00846
.02299
.01152
.00858
.01817
.01394
.00957
2l
Probability of accepting B and A.S.N. function of Tables 4.10.1 O
4.10.2 for unknown A were caiculated frca (4.2.5) and (4.2.8) reapective!y using the distribution (4.2.9) whereas those of Tables 4.10.3 4.10.4 were obtained froa (4.3.4) and (4.3.7) respectively usln& the
distribution (4.3.8).
OC and A.S.N. functions of Table 4.10.5 for unknown
&1 were obtained fro. the formulae (4.4.8) and (4.4.12) respectively.
The results on the OC function aarea with those of Chapter II (cases
73
of unadjusted decision boundaries).
for the r.nme choice of p
(~s
Tables 4.10.7 and 4.10.9 show that
a suitable function of nO)
a smaller nO
will tend to increase both kinds of error probabilities (e.g. in addition
to Table 4.10.7, OC(15, 14/151a • 1) • .9371 and OC(15, 14/151a • 2) •
.0661)
though according to Tables 4.10.8 and 4.10.10, the A.S.N. 1s only
slightly affected by a change in nO.
A right choice of p appears to
be 1 - 11aOnO in case of Pareto distribution and
exponential distribution.
74
821/nO in case of
Probability of Accepting the Null Hypotheses (RO : v • 1.5)
Against the Alternative (Hl : v • 2.5) for Testing
the Parameter v of the Gamma Distribution
(m • 0.05 • 8.
DO· 50)
!able 4.J..0.3
H)~othe8e.
(8 : ~ '• • 1) Against
0
the Alternative (H1 : 1 • 1.1) for Testing the Parameter 1 of
the GaIIIIIa Di8tribution (v • 1, ftO • SO, CJ· 0.05 • ~)
Probability of Accepting the Null
h
(~UI'lbown)
0.05
(v-kDown)
0.9961
0.9457
0.644
0.403
0.326
0.269
0.192
0.113
0.1
0.2
0.3
0.35
0.4
0.5
0.7
0.9
1.1
1.3
(v-kI'lOWI'l)
0.9984
0.95
0.671
0.423
0.34
0.277
0.193
0.109
0.0705
0.05
0.0376
0.0765
0.054
0.0434
2.1884~
1
0.211255
·0.105111
·0.22576
·0.32615
·0.48662
·0.71444
·0.8757
·1
·1.10088
'tabD 4.19.4
A.S.N. Puaction for TestiUl the Parameter 1 of the 0&-.. Distribution
(80 :
A • 1 va.
H1 : 1 • 2;
A.S.R.
(v-unbowl'l)
0.5
0.75
0.875
1
1.125
1.25
1.3125
1.375
1.4375
1.5
1.625
1.15
2
2.125
2.25
2.5
~.
{v· known)
7.28
11.2
13.81
16.05
17.23
11.94
18.21
6.64
9.2
11.2
13.72
16.36
18.14
18.38
18.14
17.48
16.52
14.23
12.01
8.64
7.46
7.53
5.18
17.67
16.89
15.41
13.64,
11.98
8.67
7.49
6.7
5.43
76
50.
G·
0.05 • B)
h
(v-known)
3.69
2
1.447811
1
0.62516
0.30395
0.1593
0.02364
·0.10403
·0.22455
·0 ..... 689
·0.6.. 803
·1
-1.15571
-1.30036
·1.56184
Table 4.10.5
Probability of Accepting the Null Hypothesis and the A.S.N. Function
for Testing the Scale Parameter 82 of Laplace Distribution
(H : 8 .. 1.5 va. H : 8 • 4.5; "0 • 50. u • 0.05 • 8)
1
2
O
2
8
pr{H O 18 aeeePtedle~
2
1.2
1.5
1.8
1.95
2.1
2.25
2.4
(8 Cunknowu)
0.9886
0.9489
0.8S4
0.784
0.702
0.&15
0.529
(el-know)
0.9888
0.95
0.8S8
0.19
0.711.
0.625
0.54
77
A.S.N.
(el-unknown) (el-known)
5.11J
6.21
7.17
1.48
7.62
7.57
7.49
5.09
6.13
7.06
7.31
7.51
7.48
7.31
e
e
e
Tule
4.10.6
Probability of Acceptilll the 1Iull Hypotbuia and the A. S.lf. Function for
'araMtft 1
1
....
•
1.5
1.15
2
2.25
2.375
2.5
2.625
2.15
2.875
3
3.25
3.5
3.15
4
the
of the Inverae Gaua8ian Diatribution
(110 : 1 • 2 va. Hl : " • 3.S,
Pr{B 18 accept.dll}
O
( ll-1IDkDowft)
0.9907
0.988
0.937
0.802
0.701
0.585
0.517
0.402
0.303
0.219
0.108
0.063
0.0376
0.0265
r •• ting
.(lI-lmon)
0.9985
0.9894
0.95
0.833
0.734
0.614
0.487
0.368
0.27
0.19"
0.0977
0.05
0.0265
0.0147
nO • SO,
O.OS • 8),
~
• 1
A.S.N.
h
( ).I-Itnowu)
( \H&IlkDowu)
24.93
28.7
36.47
45.71
S2.13
55.78
53.4
49.9"
45.52
40.88
32.55
2'.23
21.75
18.19
Q •
i,
24.65
31.22
40.44
50.67
54.3,9
56.01
55.16
52.23
48.03
43.36
3".64
21.84
22.86
19.21
( \l-la:lcRm)
2.2057
1.54074
1
0.54646
0.34lt86
0.15115
-0.01836
-0.18307
-0.33816
-0."8465
-0.75511
-1
-1.22354
1.42901
.
i
,
~
e
e
e
Tabl.
4.10.7
Probability of Acceptinl the aull Bypothea1s (110: a • 1) Aaaill8t the Alternative
for Te.tins the Shape Parameter of Pareto Diatribut10n (u ft 0.05 • 8)
a
00(50,49/50)
OC(SO.64/65)
OC(2S.24/2S)
.....
\C
OC(k-known)
h(k-kuown)
a
0.5
0.9989
0.99''''
0.9975
0.9996
2.6599
1.375
OC(SO,49/S0)O.619
OC(50,64/6S) 0.603
OC(2S.24/2S)0.637
OC(k.-kDown) 0.6
h(k-known) 0.13758
0.75
0.9921
0.9902
0.9889
1
0.9481
0.9461f
0.9937
0.95
1
1.7189
1.5
0.439
0.425
0.466
0.417
~O.11313
0.9"'~9
1.625
0.282
0.271
0.307
0.263
-0.35044
1.125
0.886
0.878
0.886
0.884
0.69022
1.75
0.169
0.161
0.187
0.155
-0.57634
1.25
0.776
0.76'"
0.784
0.767
0.40424
2
0.0552
0.0512
0.0622
0.05
-1
(H1 :
& •
2)
1.3125
0.702
0.688
0.716
0.688
0.26873
2.25
0.0182
0.0172
0.0205
0.0162
-1. 3937
2.5
0.0063
0.0059
0.0071
0.0055
1.7638
-
e
e
e
Table 4.10.8
A.S ••• Function for Testinl the Shape Parameter of Pareto Diatribut10a
(KO : a • 1 va. t;. : a • 2; a. • 0.05 • 8)
co
0
a
ASN(SO.49/50)
ASN(SO,64/65)
ASN(25,24/25)
ASN(k-known)
0.5
2.29
2.3
2.32
2.25
0.75
4.58
4.61
4.61
4.54
1
8.62
8.68
8.57
8.64
1.125
11.49
11.56
11.36
11.56
a
ASN(SO,49/SO)
ASN(SO,64/6S)
ASN(25,24/25)
ASN(k-known)
1.5
18.64
18.43
18.91
18.35
-1.625
18.1J9
18.18
19.01
17.91
1.75
17.34
17
18.09
16.1
2
- 14.34
14.06
15.09
13.72
1.25
14.63
14.65
14.41
14.7
2.25
11.97
11.75
12.58
11.45
1.3125
16.08
16.06
15.96
16.11
2.5
10.36
10.19
10.86
9.93
1.375
17.29
17.21
17.26
17.24
-
e
e
Table 4.10.9
Probability of Accepting the Null Hypothesis (HO : 8 2 • 1) Agai118t the Alternative
(~ : 8 • 2)
for lea tina the Seale Parameter of the
2
ExponeDtial Distribution (a· 0.05 • 8)
82
00(40 •• 05)
0.75
0.9962
OC(40, .0375) 0.9964
OC(2S, .08) 0.9959
OC(25 •• 06)
c»
~
1
0.9_34
0.9466
0.9383
OC(lS,2/1S)
0.9961
0.9952
0.944
0.9335
OC(15, .1)
0.9957
0.9383
OC(8rmown)
h(el-known)
0.9912
2
0.95
1
62
1.4375
00(40, .05) 0.401
OC(40, .0375) 0.42
OC(25 •• 08)
OC(25, .06)
0.387
0.417
00(15,2/15)
0.36
OC(l5, .1)
0.408
OC(el-mown) 0.424
h(el-known) -0.10403
1.5
0.322
0.34
0.312
0.339
0.292
0.336
0.34
-0.22455
1.125
0.847
0.856
0.835
0.849
0.808
0.836
0.863
0.62516
1.625
0.203
0.217
0.199
0.219
0.192
0.226
0.211
-0.44689
1.1875
0.773
0.785
0.757
0.778
0.723
0.761
0.794
0.45876
1.75
0.127
0.137
0.127
0.142
0.129
0.151
0.129
-0.64803
1.25
0.648
0.7
0.666
0.692
0.628
0.674
0.71
0.30394
1.3125
0.588
0.606
0.569
0.599
0.531
0.581
0.615
0.1593
2
2.25
0.0523
0.0243
0.0265
0.0567
0.0558
0.057
0.0637
0.067
0.05
-1
0.0277
0.0312
0.035
0.0381
0.0213
-1.3004
1.375
0.491
0.51
0.478
0.504
0.44
0.491
0.517
0.02364
2.5
0.0126
0.0141
0.0153
0.0173
0.0228
0.026
1).01
-1.5618
e
e
e
Table 4.l0.lQ
A.S.R. Fuuct101l for 'l'.. t11l1 the Scal. Par....t.r of the hpOllentla1 D1atr1butiOG
(BQ :
'2
O.7S
9.68
ASKC40, .0375) 9.49
ASN(25, .08) 10
ASN(2S, .06)
9.16
.&SU(15.2/15) 10.51
ASN(lS, .1)
10.16
ASHe e1-bow) 9.2
'2
1.4315
AS!1(40, .OS) 17.56
ASN(40. .0375) 17 .43
AS!I(2S •• 08) 17.55
AS1H25 ••06) 17.37
ASN(lS,2/1S) 17.41
AS1C(lS. .1)
17.2
ASN(81- knovn) 17.48
ASN(40••05)
01
N
1
14.5
14.11
15.011
14.39
16.14
14.96
13.72
1.5
16.5
16.45
16.42
16.36
16.18
16.1"
16.52
'2 • 1 vw.
H1 : '2 • 2; a· 0.05 • 8)
1.125
17.11
16.72
17.73
16.98
18.81
17 .. 51
16.36
1.625
14 .. 12
14.18
1_
14.09
13.71
13.87
14.23
1.1875
18.17
17.73
18.65
17.916
19.55
18.36
17.43
1.75
11.91
12
11.81
11.95
11.57
11.8
12.01
1.25
18.73
18.34
19.1
18.4&
19.73
1&.75
18.14
2
8.61
8.7
8.58
8.72
8.48
8.69
8.64
1.3125
18.79
18.liS
19.02
18.54
19.34
18.63
18.38
2.25
6.55
6.62
6.56
6.66
6.54
6.7
6.53
1.315
18.37
18.15
1R.49
18.14
18.52
18.07
18.14
2.5
5.23
5.27
5.25
5.33
5.27
5.39
5.18
CHAPTER V
SEQUENTIAL TESTS FOR ANALYSIS OF VARIANCE' UNDER
RANDOM AND MIXED K>DELS
5.1.
Introduction
Sequential, methods for snalysis of variance under fixed and random
effects models have been developed by Johnson «(17] and (18]).
Procedures
for discimlnating between two values for the ratio of variance components
in a single one way classification have been discussed in [18].
al [15]
Hall et
and Ghosh [10] have studied the problem through the principle of
i·wariance.
Our method reduces the problem from the sequential F-test to
a sequential X2-test and in most cases our tests reduce to the form in
which their propert1..es c'an be studied by standard methods unlike the
sequential F-test where it 18 possible only to make a conjectural study
(as in [2], [81, [lO].and [24]) of the A.S.N. function when either the
null or the alternative hypothesis is true.
The conditional tests have
been developed through the invariance principle except the alternative
procedures of sections S.S and 5.6 in which cases the power functions
coincide with those of the corresponding invariant tests.
5. 2.
A Ce1'lem Z. P%'Ob tern
The assumed model is the general linear mixed model
(S.2.1)
where
(i) ~ixt • (xl' •• , x t )
(ii) ~ixt
population;
•
(1, I,
are
t
.. ,
1) ;
observations available from a
(iii) ~l' xs
• (Y1' •• , y)
s
are
independent random variables acting upon the population, affecting
Yj ... N(O,
O~)
components,
j . 1, 2, •• , s;
2
(iv) .!i,xt • (zl' • Of Zt)
all i, {Zt}
Zt'" N(O, ( )
0
are error
are mutually independent and
independent of {Y };
(v) {cij } are t. known constants specified by
the design of experiment. (vi) ~ is an unknown parameter.
j
'l'be assumption
+ 1 i8 required of p(l! p.5..) is tbe
t > p
number of distinct OJ'••
e.
The parameter set is
{ II, 0
0, aj;
j . 1, ...
-1-
(ll, 0 0 , 0 l' .. ,
< II < •
°s)
£
e•
• 0 0 > 0, 0 j ~ 0;
j. 1J
•• ,
sl-
The problem is to teat, for a fixed j,
2 2
2 2
HO : 0 j/oO • ~O vs. Hl : j/oO • 41
(5.2.2)
where 0 ~ 60 < 6 1 are specified real numbers. Tbe hypotheses are
2
2
2
2
2
equivalent to
t'
• 0 (1 + ciS );
i · 0, 1 where l' • cO + 0 0 and
0
i
j
c 1s a nonzero real number. t~e have an es timator s 2 of a ~ froa a
°
Hi:
first stage sample of appropriate 8ize (to be deteratned in special cases)
and ve write the problem as
2
Ii : T • .2(1 + cai);
(5.2.3)
i · 0. 1
S. 3. A GensNl PztoceduN for !Jeri fJ'lng an Invariant S. P. R. !'.
We consider a sequential form of experiment in which we draw
{n~
- nv-l}
observations at the
vtb
stagewbere nO· O.
The total
{Xl' •• , xn }. The test of (5.2.3) will be based
v
2
on a sequence Iv· 8v(x1 , •• , xn ) whose density depends only on ~ •
.
v 2
We note that for every fixed S J the pr?hlem (5.2.3) remains
data at stage v
is
invariant under the group
G of
one-to-on~l J~cat1on
84
transformations on
the sample space of
Xi
{xi}
+ a for some a. -.
such
tl.·.lt
every g
~
G transforms each xi
to
< 8 < ~ •
We assume that
(C)
for every y
>
-
1. there exists a statistic t y • t y (xl' •• , xn )
y
which is sufficient for the underlying parameter set and 1s such that
{tv}
is a transitive sequence, that is, the conditional density of
tV+l
(xl' •• , xn ) ie identical with with conditional density of tV+I
.
Y
given t y , for each value of the parameter set.
liven
Thus the results in [15] (pp. 583-84) imply that if
Iv (Xl'
•• , xo )
Iv·
is a uxiul invariant under the group induced by G on
v
{~}
the 8811ple space of {ty }' then
transitive sequence with some
T
is an invariantly sufficient snd
2 which is a aaximal invariant for the
Iroup induced by G on the parameter space.
Thus an invariant test of
(5.2.3) viii be based upon the sequence {Iv}.
{tv}' the joint density of (gl' •• ,
Py(gl' •• ,
Where
Iv) factorizes into
8v1~2) • fy(avIT2>hv(81'
fu(8v1~2)
Alao, by transitivity of
··t
Iv), for every v ~
i . the density function of
(5.2.3) based upon the observations {Iu}
Cu.
1
Thus any S.P.R.T. of
will be specified by decision
rule. of the form:
(liO) :
1<
C\,(av) • !y<lyl.2 U + d 1»/fy (lvl. 2 u + dO»
< A :
dill
(5.3.1)
Continuation amount. to taking a further sample of nV+l - Dv
observations. We shall take A· (l - 6)/0, :B. 6/U - 0) (we may, at
the cost of larger .ample size. take A. a-I,
s~nl1ng
each stage.
a;
to reduce
We shall consider the case Dv • vI,
probabilities of wrong decisions).
that is.
B·
which takes a constant number N of observations at
In special cases.
~
shall verify
85
(C)
and find out the
sequence . (I) •
5.4.
Definitil.m of Sequential, x8-Teat
'We suppose that
Ivlt 2
follows, for every v
~
2
1, a central X
distribution with q degrees of freedoawhere q is an integral multiple
of v.
q.", say,
2
Thus
f
1
Gv (~) • (1
being a positive integer.
k
v (Iv 11' ) • •
+
+
_~/2T2 9-1
a:
1
cc5 0
Iv
~..) exp{-~l
1
2.
S
g
I (2T 2)"1' (2). Consequently
1
1
+ C4S .. 1 + c6 )}. Thus the decision rule
1
0
of our test is
~BO)
: In B <
1 + cc5 0
~ In i + ~ ~..
&y
1
2s2 (1 + c4
1
.. 1
l
+ c~ 0)
< In A : (B l )
In all practical cases we ahall have c > 0 and without
generality we may asaume
~1
> 6
0
•
10S8 of
In G (~) .(aa a function of
v
Thus
.
1 + c4 0
Iv) strictly increases from In Gv(O) - lIn 1 + cOl to
1n Gv(~) - • •
We ..y thus express the decision rule of the test as
(80 ) :
.Iv
<
Iv
< ~ : (H )
1
wbere
_
Iv •
(5.4.1)
l
+
c~O].
1
1
~. (In B - ~ In 1 + C0 )!28 2 (1 + coO .. 1 + COl) and
1
1 +
(In A .. , In 1 +
ca o
1
1
1
c~ ~/--j'<t" + ·c6 - 1 + cIS )
1 28
0
1
If c 1.. independent of v, we can write
1
1+c6»)
o
(5.4.2)
2 2
where Vits are independent and are distributed as T Xk(O) for each
v
i • I, •• , v. Thus (5.4.2) may be written as E Zi where, conditionally
i-I
86
on s2 fixed. Zi'8 are i.i.d.
We shall find expressions for the operating characteristic and A.S.H.
2 and E (v IT 2 • (0)
2 respectively).
functions (to be denoted by L(T 2 , (0)
We see that. for 8 2 fixed, E ehZ eXists for
and equals
are satisfied for these values of h.
2 2
h(T laO' t) may be Obtained fra.
1
(1
+
+
ca o h
c4)
1
hT2
• 1
1
+ 2(1 + c c\.
tao
""1.
2
2
Thus h· h(T • t, (0) •
1
- 1
(5.4.3)
+ C5 )
0
With h determined from (5.4.3), we have. 8ssumins excess of
In Gv(Sv)
over the decision boundaries is negligible,
Lh 2 • ( 02 ) • E L(T 2 /0 02 ,
where
t)
h
b
h
. L(T2Ia~, t) • (A - l)/(A - B )
for E Z ~ 0
• 1n AI(ln A - In B)
for E Z • 0
(5.4.4)
where E Z • B (Zit).
Also, UDder the . . . aasumptlo;'"lt
E (\11",2, a~) • I E ("IT2/a~. t) where
B(\lIT2/0~, t)
.0 {In leAh • -(In
1)
+ (1 - Bh )
h - Bh)E Z
1n IJI(A
A In I)/E Z2 for E Z • 0
for I Z ~ 0
(5.4.5)
where
(5.4.6)
and
87
___
1_\
1 + c6 '
0
x2-Test
foT' Te..~-;;in~l !Jypo:hescl3 About the
Ratio of Val'ianaes of Two !/()i"":aZ ropuLations
5.5.
App'ticai1:on
Let
and
III
or
the
n2 be two normal populations with variances '[2 and
o~ respectively, We want to discriminate between Hi : T2/o~ =
1 - 0, 1.
A first stage sample
{x
i2 , i • 1, 2, ,., nO}
x-
determines
Corresponding to (5.2.3), we have,
sample.
(6
0
from TI
6 ,
i
2
being the mean of this
2
Hi
T
2
•
8
2
ei
,
i·
0, 1;
< 6 ),
1
Procedure I:
Start by taking sample of two
(XII' x 21 )
from fi l ,
At each subsequent stage (if required) a further individual is taken
from III'
From the joint density function of
•
t n_1
-
n
(x1 (n), i:l(Xil -
sufficient sequence for
-
2
x1(n»
2
{XiI' i • I, '"
x1 (n)
) (where
a
n
-1 n
E XiI) forms a
i-I
The
l is the mean of fi 1 ,
G of location trannfcrmations leaves the problem invariant with
group
as a maximal invariant on
(,11,
T
)
where
0::z {U 1 '
induced transformation (P1 + a, T).
(x1 (n) + a,
tion
TI-<»
lJ
< lJ
1
the group
n
2
L (XiI - Xl (n» )
<
co
,
-
-0-1 -
on the sample space of
i=1
_
2
t (x11 - x1 (n»
as a maximal invariaIlt.
1.=1
_riting
n x1(n + 1) • n + 1xl(n) +
1
-
88
T > O}
tl
T
under the
G induces the transforma-
n
2
n}, we see that
+ lXn+l' 1
t
n- 1 with
2
~l
2
I (x~l - xl(n + 1»
and
i .. 1
~
n
..
2
t (x,! - x1(n»
J .... 1
n
2
+ --~(xn+l
1- xJ(n» •
(n + 1)
1
40
we have, from Theorem 4.3 of [15] that
{tn_I}
•
•
is a transit1\y e sequence.
Thus invariant S.P.R.T. is given by the rule
~
2
2
(HO) : B < fn_l(8n_lITw .. s 61)/fn_l(gn_lIT •
Since
g
11-
60 ) < A
(HI)
1 - T2X2n- 1(0), the decision rule may be expres~ed as
(H O) : oWn2
1
where
2
S
gn- 1 <
<
gn- 1:
n - 1
g 1" (In B ~-
2
(5.5.1)
(HI)
60
1 1
1
In - ) / - ( - - - )
2
61 28 00
01
and
80
1 1
1
(1n A - n - 1 In - ) / - ( - - - ) .
2
61
2s2 eO
01
In the notation of section (5.4)
0-1
n-1 1
80
Vi 1
I
In G l(g 1)· t Zi· r [-2 1n -- - --2(- - - ) ]
nn1-1
i=l
61
2ta 8 1
60
2
2
t · s leO
where
Thus
eO h
(-)
and Vi
= h(T 2 ,
b
is distributed as
T
2 2
X1(0)
for each
i.
2
00' t) is siven by
bT 2 1
1
ta~ 81
eo
• 1 + - ( - - -)
61
Since
t
(5.5.3)
2
is distributed independently of
only through the ratio
t
2
/0 02 (h
operating characteristic will depend upon
A similar conclusion will follow for !.S.N.
00
and
2 2
= h(. 1°0,
(.
2
, 0
2
0)
2
E (\) lor I 0 ~}
b
t)
t
depends upon
say), the
only through
We shall,
from notation of section 5.4 and shall denote them by
sections).
(5.5.2)
o
0
T
tberefore~
have~
89
2
laO'
deviate
and
respectively in this section (and in all the following
flo, we
2
o-3) 12 exp
(n
t)t
-1)/2)t dt
O
-{(n
(5.5.4)
where
E
(vl~
2
1
2
laO'
t)
60
1s determined from (5.4.5) with
T
2
1
1
(5.5.6)
E Z • - In - - - 2( - - - )
2
61
2a t 8 1
eO
o
and
e
al
E z2 ... l(ln.J!)
4
Procedure II:
2
+ 3T 4 (-1.. _ .1...) 2
4a~t2 61
call it
2
At each stage take a sample of
Calculate
gr·
_
1
1
61 2to~ 61
eO
r
from n1 •
9
0
_ (In _)...L-(
2
t (xiI - xl)
_
(xl· r
~l
r
)
(5.5.7)
60
(~2)
-1 r
t xi)
individuals
and
~1
r, j if based upon sample at the jth stage.
At the nth stage, the test procedure to be followed is:
g
(HO) : B <
where
pCg
r,
n
n [peg
j=l
j1T 2)
r,
j
IT2i l l S 2el'/peg
r,
jlT
= {(2t2)-<r-l)/2g(I-j3)/2exp
r,
2
= S 2eO)]
< A : (HI)
-(g j/2T 2)}/f«r - 1)/2)
r,
After simplifying, the decision rule roay be written as
(5.5.1) II
and
-gn •
An expression corresponding to the right side of (5.4.2) is
90 .
(5.5.2) II
A comparison between (5.4.2) and (5.5.2) II implies that formulae
2
giving 10" h(T 2laO'
t),
L(-r 21(20 )
the same as for Procedure 1 where
and E
122
1(0)
(v - n here)
(\I T
2 2
t · s laO
are
and with
(5.5.6) II
and
Procedure III:
Start by taking sample of two
(xlI' x2l )
from nl •
At each subsequent stage (1f required) one further individual is chosen
from B •
l
At the (n-l)st stage, c.alcUlate
8n-1 • (xII + x 21 + ••. + xn-l,l - (n - 1)Xn ,l)2 /n (n - 1)
follow the decision rule
where ~-l
and
8n-l
are the same as for Procedure I.
91
and
Since corresponding to (5.4.2), we have,
n
n
00
t Zj '" t [In -- j .. 2
j~2
a1
S"_l 1
1
(_. - -)],
2
28
til
eO
a comparison implies that the
2
It 210 0)'
E. (v
(5.5.2) III
-.1..
fo~ulae
,
giving
2 2
h(t 100' t),
are exactly the same as for Procedure
E (Z t)
I.
Similar procedures may be developed by interchanging ill
5.6.
and il •
2
OM flay Classification
We consider a one way classification by groups and denote the
internal (within group) variance by o~
2
x
denotes the jth observation (j. 1, •• , uV in
ij
(i" 1, •. , n) in a randomly chosen set of n > 1 groups,
variance by au.
tile ith group
and the external (between group)
If
then
x
ij
• P
+
u
i
+
(1· 1, •• , n.
Zij
j'" 1, .. , m)
(5.6.0)
We have a special case of (5.2.1) where· {c } have values
1j
t •
KO
nm, s • n, p • 1.
2
2
au'" 6 0°0 ve.
A problem corresponding to (5.2.2) is
HI
2
2
{x~j;
i · I, •• , nO;
au· .s10 0·
from a preliminary sample
nO(mO - l)t
is distributed as
If
s
2
2
-, 0
2'
+
0
•
2
j •
where
Xno(mo-l) (0)
corresponding to (5.2.3), we have, -Hi:'c 2 •
T
I,
0 and
8
1, •• , mOl, then
2 2
t · s laO. Thus
2 (l + 116 i) ,
i • 0, 1 with
2
~u.
There can be two possible kinds of sampling schemes:
stage selecting (at
random~
taking a fixed number
m of
a further group (or a set of
ob~ervntion8
92
from each group.
(a) At each
r
gIoUpS) and
(b) n
>
1
g~oups
may be selected at once and
set of m
observ~tions)
fo~
be drawn from
all, and an
ca~h
obse~vation
(or a
at successive stages.
Procedures corresponding to II aud III 0f section 5.5 are not applicable
in scheme (b) aa the successive set!:: of observations are not independent
of each other.
Scheme (a):
(To be preferred if availability of observations inside
a group is limited.)
Procedure I:
each group.
chosen and
Start by taking two groups and m observations in
At each subsequent stage (if required) one further group 1s
m observations taken in it.
{Xij ; i · 1, •• t n; j . 1, •• , a}.
'For s 2 fixed, the joint density of {x ; i . 1, •• , nj j a 1,
ij
m} may be written as
Total data at stage (n - 1) is
"t
f({x ij }t ~.
t
2
,
2
S ) •
(l2i)-nm(s2)-n(m-1)/2(t2)-n/2exp{_(V/2s2) _ (U + nm(x _ ~)2)/2t2}
n
II
_
2
where V · !
I (X
- xi) ,
ij
i"'l j-1
n
and i '"
n
- 2
U • m I (Xi
- x) ,
i-I
til
t ! xij/nm.
i-I j-1
Thus the set of sufficient statistics for
(i, U) (to be denoted by (x(n), U(n»
group
G of
tran8for~Ulations
e-
(~, T)
is
t n-1 -
to indicate (n-l)st stage).
on the aample space of
{x
ij
}
The
involving
any cammon change of location in {xij } leaves the problem invariant
with t 2 as a maximal invariant on e· {~t
< ~ < ., t > O} under
TI--
the induced transformation
transformation
(x + a,
n
In_l(m)
=U •
U)
_
(~+
a) T).
The group
on the sample space of
_ 2
m I (Xi - x)
1-1
93
G induces the
{t - }
n l
with
as a maximal invariant.
t
n-
It remains to estahlish the transitivity of
e·
l' i.e., to shaw that for every
of
t
n
given {x
-(i(n+l),U(n.f.l»
depends on {x
ij
(u, t), the conditional density
} only through
ij
and
x(n}
i-1., •• ,n;
'
U{n).
j - l , •• ,m}
Writing
II
x(n + 1) • nx(n)/(n + 1) + E xn+1' j!m(n + 1) and U(n + 1) •
j-l
II
2
UCn) + nm E (xn+1' jIm - x(n» I(n + 1) and using Theorem 4.3 in [15],
j-l
n } 1s a transitive sequence.
Thus the decision rule at stage (n-l) w111 be
we see that
•
{t
-
(5.6.1)
(HO) : .in-I < 8 n- 1 em) < 8 n-1 : (HI)
n - 1
where .in-I· (In B-
2
1 + moO
In 1 +
m(
1
1
1
n - 1
1 + m6 0
1
1
gn-1 • (In A 2
In 1 + 1D6 ) '''2( 1 + 1110
1 2s
0
0-1
0-1
1 + m6 0 Vi
I
t Z - t [1 1n _ _
+
1-1 1
1-1 2
1 + m6 1
2s 2 I + m6 0
-c.
2 2
where Vi· T x1(O)
With
for each
1
1 + m6 )
1
)'2S 2(1 + moO
and
1
1 + 1110 ).
1
1
1
+ m6 1
)]
(5.6.2)
i.
b determined froll
1
1 + m6)'
o
where L(, 2/0 2 , t)
0
is obtained from (5.4.4)
94
2 2
t · s 100 ;
Bnd with
v· n - 1
(5.6.3)
I
(\1 T
2 2
-not(mO-l)/2 (nO(mO-1)/2)-1
loOp t)e
t
dt
(5.6.5)
as in (5.4.5) with
(5.6.6)
2
1
E Z
1
• -4(.1n 1
+ m50 2 +
+ m6 )
1
3T 4 (
2 4 1
4t
0
o
1
+ mu~1
_
--!._) 2
1
~
+ muO
1 + .60
1
1
~(ln 1 + m15 ) (1 + mIi.. - I + mIS )
2ta
I
-1
0
O
'[2
Procedure II:
(5.6.7)
At each stage take a sample of r
groups and take
r
m observations in each group.
it
g
r,
Calculate
g • m I (x - x)2
r
tal t
and call
if based upon the observations at the jth stage.
j
.
At the nth stage, the test procedure is
B
<
n
n [peg
j-1
22
1T 2• 2
S (lfmol),/p(g
j 1~ • s (1+m6 )) < A : (HI)
0
r,
r,
j
I 2
2 -(r-l)/2 (r-3)/2
2
where pegr, j IT ) • {(2T )
g r, j
exp -(8 r, j/2T) l/r«r-I)/2)
After simplifying, we may write the decision rule as
2
-n
<
n
t 8
j-l r,j
<
in
(5.6.1.) II
nCr - 1)
I + m6 0 1
1
2
In 1 + m6 l!-:2(l + ml5
1 28
0
1)
1
+
m6
1
1
1
Q)1-(
{In A _ n ( r In
.. _. _
) Also, since
2
1 + mOl 2s 2 1 + m6
1 + mlS •
0
l
where
.In"
{In B -
95
-8n •
n
t Z
jal j
tbe formulae giving
h,
L('r2/cr~) and
E
(\llt2/0~)
(with
are
v . u)
the same os for Procedure 1 with
(5.6.6) II
1
1 + ~ )
2
(5.6.7) II
o
Procedure III:
each group.
Start by taking two groups and
m observations in
At each subsequent stage (if required) one further group is
chosen and
At the (n-1}st stage calculate
m observations taken in it.
the decision rule
B
22
n
n [P(gj-1 11 • S (1+m5
<
j-2
P(8 _
where
j
l
IT 2)
•
1»!p(Sj_l
1T 2• 2
S (1+m6 })] < A : (H )
0
l
2 Tn
8-1/2
_ exp(-&j_ /2T )/v2Tr(1/2).
1
j l
After simplifying,
the decision rule is
(5.6.1) III
where
.!n-l
and
g l a r e the same
n-
8S
In the notation
for Procedure 1.
of section 5.4,
n
1
n
+
fIl6
0
8j _
1
1
j:2Zj "" In Gn _ 1 (Sn-l) • j:2 [1n 1 + m<\ - 292 (1 + m6
1
1
- 1 +
moo»)
(5.6.2) III
where
8 _
j
l
1s distributed as
1
2 2
96
Xl(O).
Thus the formulae giving
h,
as for Procedure 1.
Scheme (b):
(To be preferred when the cost or waiting time for
drawing a group is much higher than that of' a single observation.)
A fixed
Procedure:
n~her
n > 1 of groups 1s selected in the
beginning and an observation is drawn from each at each stage.
n
proceed as in Scheme (a) (Procedure I), we find that
If we
_
_ 2
U - m t (Xi - x)
i-1
(which will now be denoted by
sample space of
to
(~,
tm •
8m(n»
is a maximal invariant . on the
U) (which will now be denoted by (xCm), U(m»
indicate mth stage) under the group
C of location transformations.
To show the transitivity of the sequence {tm}, we write
xCm + 1)
mx(m)!(m + 1) + vl!n(m + 1),
2
U(m + 1) ... roU(o)!(m + 1) - 2mx(m)v1 !(m + 1) - v 1!n(m + 1) +
v 2 /(m + 1) + 2mv !(m + 1)
3
n
n 2
where VI a E Xi m+l; v 2 • t Xi
1;
i=1 ,IIJT'_
i-I '
a
-.L
m
ii(m). E xij!m.
and
j
o
It has been proved in [10] that the character-
l
istic function of the conditional joint distribution of (VI' v 2 , v )
3
given' {X ; ji: 1, '" n} depend~ only on i(m) and U(m). Thus the
1, •• , m
ij
conditional joint density of
{X
ij
}
only through
t
m
•
sequence and consequently
tm+l
= (x(m + 1),
(i(m), U{m».
{8m(n)}
Thus
U(m + 1»
{t }
m
depends on
is a transitive
is an invariant1y sufficient and
transitive sequence.
The decision rule at stage m 1s
where
8m(n) < 8m : (HI)
(H O) :
!m
<
.8m
and
gm
are the sanle as .En-I
97
(S.6.1)(b)
and
gn-l
of Procedure 1,
(5.6.2)(b)
1s not expressible as Slun of
m i.1.d. random variables, the properties
of the test cannot be studied by the above method.
The conjectural
formula (first introduced ill [8] and later used in [2], [lOJ and [24))
E (mIR!, t);
finds
! . 0, 1
E On Gm(~) IU > = a In A
o
+
as values of
m given by the relations
IH l )
E (In Gm(gm)
(l - a) In Band
l:'
B In B + (1 - B) In A respectively where the approximation is due to
the excess of the cumulative sum over the boundaries and the deViation
of the actual error probabilities from the desired ones.
E (millo'
and
t)
E (minI'
t)
may be obtained from
n _ 1
1 + md
m(6 l - 60>
O
2 {In 1 + m6 + tel + ms )}
1
1
n _ 1
1
2 (In 1
5.7.
+
+
m~
m\
Thus
=a
m(6 l - 60 )
mIS
O
>) = B In
+ t(l +
(5.6.3)(b)
In A + (1 - a) In B,
B
+
(5.6.4)(b)
(l - B) In A
PM Randomized Btock Design
We have two factors 'variety' (V) and 'block' (8), each of which
ean have infinitely many levels.
and
m > 1 blocks, x
and jth block;
X
ij
• P
+
u
ij
1
a
1
+ wj +
In a random sample of
is observed under the influence of
I, •• , nj
Z1j;
j =
1, •. , m.
The model
i·
1, .. , nj
j
is a special cnse of (5.2.1) with
n + m,
S ...
2
Ow ~
s
2
•
O.
« nO
p •
1) (
= 1,
'"
(5.7.0)
m
2
2 and with c..·,nponents of variance "0 > 0,
~~
-
1) )-1
~O ~O(X<i·j
1:1 j:l
98
tth variety
{c1j } suitably chosen, with
The preliminary sample of
-
n > 1 varieties
nO
0
t · nm,
2
> 0,
u-
varieties and mO blocks gives
- Xi' -
•.
)2
x' j - x . . .
On
hI
e pro
1
em s
to test
2
2
He: 0u/Oo • 60
VS. Hl :
2
2
Hi: -r • 8 (1 +
(5.2.3L we helve,
2
2
0uloo •
~1)
tr. ...
~l'
Corresponding to
i '" 0,1;
1
2
= mcr~ + o~,
A set of m > 1 blocks is chosen in
Procedure under Scheme (a):
the beginning and a new variety is inr.1uded in each block at each stage
(two in the first stage),
Total data at stage
n-1
{Xij ;
is
i . 1, '"
n;
j
= 1,
."
m}.
The reduced parameter space is 0· {~, T, T11~ < p < • , T > 0, T l > 0)
222
where T 1 • maw + cO' The group G of location transformations leaves
the problem invariant and a maximal invariant on
transformation
a·
for
(~, i t
(p
+
ll)
a, T, T )
1
is
T
e
under the induced
2 • A set of sufficient statistics
t n- 1 • (x •• , variety sum of squares (V.S.S.),
is
block sum of squares (D.S.S.»,
(x,. + a, V.8.S., B.S.S.)
The group
G induces the transformation
on the sample space of
n
2 as a maximal invariant,
V.S.S. - m E (Xi' - x •• )
Transitivity of
t-l
{t _ }
n l
in this case as well as in the following cases is verified
similarly as in the one way classification,
n-1
The test procedure at stage
is
where
(5.7.1)
where In-1
and
-&0-1
are the same as in (5.6.1).
Formulae giving
In Gn_ 1 (gn_1(m») and h are the same as 1n (5.6.2) and (5.6.3) respectlvely (s 2 being as in this section) and
L(T 2/002)
2 100'
2
= k[L(T
2 2
a
I-
t)t 1-1exp(-lt)dt,
(5,7.4)
2 2
(5.7.5)
E (v IT 100) • k oE (vlt 100' t)t
99
1-1
exp(-tt)dt
2 t) are obtained from (5.4.4) and
L(T 2 /0 2 • t) and E (v IT2100.
0
2
(5.4.5) respectively; E (ZIt), E (Z It) being as in ~~ctlon 5.6 n~d
where
where
k. 1 t /ft.
t · (n -l)(m -1)/2,
O
O
A block containing the same f1xed
Procedure under Scheme (b):
number n
t
(P.
••
of varieties is chosen at each stage.
> 1
(x••• ,
I)
V.S.S., 8.S.S.) is a sufficient statistic for
2
n
t (xi' - x •• )
ID.
t-l
the transformation on the sample space of
T, T
and
invariant on
g (n) •
II.
is a maximal invariant for
t. with
T
2
as a maximal
e.
The decision t'ule at stage
is
II.
(S.7.1)(b)
where
.am
In (G.(&m»
E (aIH )
l
and
-8m
are the same as .in-l
-gn- 1
is the same as right side of (5.6.2)(b),
Since
of (5.7.1).
E (mIHo)
and
may be obtained from (5.6.3)(b) and (S.6.4)(b) respectively.
Similar hypotheses about
interchanging V and
5.8.
and
2 2
0w/oO
are tested exactly similarly by
8.
Testing Interaction Hypotheses in a 2\)o-Way Classification
With Balanced Replications
In case of interaction between varieties and blocks, we modify the
model (S.1.0) by adding an additional random component
assume availability of
om (variety - block)
the observation
r > 1
I
ij
• We also
replications corresponding to each of
co~bination.
Thus if
x
ijk
(k· 1, ••• r) denotes
from kth replication, ith variety and jth block. the
model is, x ijk • P + ui + wj + I ij + Zijk;
k • 1••• , r (a special case of (5.2.1) with
p - 3 and components of variance
o~
100
> 0,
1· 1••• , n; j • 1, .•• 11.;
t · nmr.
s· n +
0; ~ 0, 0; ~ 0,
II.
+ nm.
a~ ~
0).
Hypothese~
is of detecting the presence of
int~raction
between varieties and blocks.
(if any)
A first
replications
that
ProcE:dure (1):
We consider a sampling scheme where an additional
variety is introduced at subsequent stages (two in the first stage) for
each of a fixed number
of (replication - block) combinations.
rm
At the (n-l)st stage, a set of Bufficient statistics for
e•
{1I, T, T l' T 2 1-
2'
2
2
< J.l <
2
GO
2
,
T > 0,
2
1:
2
(where T1 •
1 > 0, T 2 > O}
2
2 • n~w + ra l + cO) is t n_l • (x .•• , V.S.S., B.S.S.,
Interaction sume of squares (I.S.S.». A maximal invariant on 9 under
mra u + ra I + 00'
T
the induced transformation
(ll + a, T, T , l2)
l
is
t
2
with correspond-
ing maxim31 invariant
n
a
t (xij • - xi"
1-1 j-1
gn_l(rm) • r t
2
- x' j
' -
x••• )
1J under the induced transformation
n(x ••• + a, V.S.S., B.S.S •• I.S.S.)
on the sample space of
{t
Thus' {gn_l(rm)}
sequence and
and Sn_l(rm)
is an invariant!y sufficient and transitive
2 2
8n-1 (m) - T • X(n-I) (m-l) (0). The decision rule is
is obtained by replacing
101
B by A.
n-l
1 + r6 0
Vi
1
1n G _ (gn_l.(rm». t lm-I In
. - -(-....;;;......,-n 1
1.-1 2
1 + ." ~1
28 2 1 + rt\
h
were
Vi -
T
2 ·X 2m-l (0').
Thus with h determined from
(5.8 •.3)
(5.8.4),
(5.8.5)
where
L(T
2 2
laO'
respectively;
22
and E (v 1T laO' t)
t)
E (Z2 It )
E (Zit),
(5.4.7) by changing k
ko
•
to
are as in (5.4.4) and (5.4.5)
being detendned from (5.4.6) -
m-l, c
to
r
and where
va· nomo(r o-l),
v /2
(v '2)
a
O
Ir(v /2).
O
Procedure (i1): Here the sampling scheme consists of introducing an
add:ftional block at subsequent stages (two in the first stage) for each
of a fixed number
rn of (replication-variety) combination.
All formulae are obtained from those of Procedure (1) by interchanging m and
n.
Procedure (ili):
If we let the number of replications vary with
each stage, the test statistic (which we now denote by
same
8S
for Procedure (1).
The decision rule at the
8 (nm»
r
i8 the
rth stage is
(5.8.1) (iii)
where ~(nm)
and
8r (nm)
are the same as
(5.8.1).
102
~\_l(rm)
and
i u- l(rm)
of
(5.8.2) (iii)
where k 2 n (n-l)(m-l). The equations givinC E (rIH ) t) and
O
E (rIH I , t) are obtained from (5.6.3)(b) and (5.6.4)(b) respectively
by changing
m to
rand (n-I)
to
k •
2
The Two-Stage Nested Design
5.9.
We get this design if the blocks ('subgroups') in the model of
section 5.8 are nested within varieties ('main groups').
The general
model (5.2.1) now becomes
Xijk
- P + u i + wij + Zijk
(1· I, •• ,
D;
j.
1, '"
m;
(S.9.0)
k • 1, •• , r)
with t · mar, 8
2
2
(J
> O. 0 > 0).
uw-
•
n(m+l), p • 2 and components of variar:ce a~ > 0,
O;/a ·
;/0 •
One hypothesis of interes t is HO :
~ 60 VB. HI: a ~ 6 r
2
-1 nO ·0 r O .
.
2
s • (nOmO(r O-1»
t
t
t (X
be given by a first
ijk - Xij .)
Let
1-1 j-l k-l
stasc &smple of size DOmOrO'
8i
:
T
2
• s2(1 + rlS i );
Then, corresponding to (5.2.3). we have
i - 0, 1.
0
We use a procedure which chooses
~
6
0
< 6 ;
1
T
...
1'0; + a~.
n > I main groups and
subgroups within each main group in the beginning.
observations irom each of the
2
m> 1
We start with two
om subgroups and an additional observa-
tion (if required) is drawn from each subgroup at subsequent stages.
Under the group
G of location transformations, the problem remains
invariant. with maximal invclriant
n m
2
gr_1(nm) • r t
t (xij • - Xi")
on the sample space of
i-1 j-1
(x•••• main group sum of squares (M.S.S.).
103
t r- l •
sub-group sum of squares
(s.s.s.»
under the induced transformation (x ••• +a, M.S.S., 8.S.S.).
222
+ raw + Ci O•
u
A maximal invariant on
< ~ < . , T > 0, T > O}
l
2
is t
under the induced transformation. The decision rule at (r-l)st
t
r _l
is sufficient for
e· (~, 't, 't'l)
e.' {~, T, Tll~
wh~rc
2
T1
•
Illru
stage may be expressed as
(HO' : ~r_l(nm)
where Br_1(nm)
8r _l(nm)
<
and Sr_l(nm)
(5.4.1) by replacing
q
and
<
8r _l(mn)
(5.9.1)
: (H l )
are obtained from ~
c by
n(m - 1)
and
r
-Iv
and
of
respectively.
Also, in the notation of section 5.4,
gr_l(nm)
1
2s 2
(1 + r6
1
1
- 1 + r6 )
0
(5.9.2)
A comparison shows that for
and E (~tHl' t)
equations giving
E (~IHO'
t)
are obtained from (5.6.3)(b) and (S.6.4)(b) respec-
tively by replacing m by
T(1(;f.8
5.10.
r · ~ + 1,
rand
0-1 by
n(~I).
Unclep Mixed Models
Randomised Block Design
We consider the model (5.7.0) where wj's
are nov m constants
m
satisfying
r.
w • O.
j-l j
2 2
.22
HO : 0ulaO ~ 60 against H1 • ou'aO • 41 ,
The procedures under schemes (a) and (b) of section 5.7 may be used
The problem 1s to test
without
and
~ny
gm(n)
The derivation of the test statistics
g I(m)
nunder the respective schemes and the proof of their being
change.
invariant1y sufficient and transitive are easily obtained from that
section.
104
5.11.
T#Jo-Way CZacnification L>!th Bal.anc.:d JlopUcations
m
The model of section 5.8 Is further restrIcted to
I w
j-1 j
m
r I
j-l 1j
.. 0 (i • 1, .• , n)
- 0,
where Wj's are. constants.
For testing HO :
Procedures (1) - (iii) of section 5.8 give exactly the same testo.
In this case, we can also test HO : O~/O~ • &0 vs. HI: o~/O'~ - 6 1 "
2
2
2
2
2
Writing T • 0 0 + mroui (5.2.3) now becomes Hi: T a S (1 + mr5 i )i
1 • 0, 1 where s2 is as in section 5.8.
We consider the sampling scheme when an additional replication is
introduced at successive stages for each of a fixed number of
nm
(variety-block) comb1,nations.
Under the group
on the sample space of
G of location transfor-mations, a maximal invariant
t
r • (x'l' •• , x. 1I , V.S.S., I.S.S.) under the
induced transformation is
n
8 r (nm) • mr t (xt •• - x ••• )
i-I
2
with maximal invariant
.. ,
T
> 0,
T I :>
-
(Ha) : ~r(nm)
where ~r(nm)
q
2 on
i .. 1, ,., IIi
O}
The test procedure at
changing
< Wi < .. ;
T
and
to
< gr(r~) <
is
8r (nm)
: (H1)
are obtained from Sv and
8r (nm)
n-1 and
r
stag~
c
to
mr.
n - 1 In 1 + mr6 0
1n Gr(Sr)·
2
1 + mr~O
'.)
(5.11.1)
iv
of (5.4.1) by
Also, since
8 r eum )
1
?
(1 + mr~
2s"
u1
1
1 + ar6 ) (S.11.2)
0
and E (rIB 1 , t)
E (rIHO' t)
are obtained from (5.6.3)(b) and (5.6.4)(b) respectively by replacing
a comparison shows that equations giVing
by
111'.
105
11
5.12. The
7\)0
StagG N.. :;!'od Design
We consider the following two cases of model (5.9.0).
n
(1)
u's
i
are
2 2
Hi : 0w/oO •
n
°1 ,
constants and
t u • O.
~l i
In this case. test for
i · O. 1 is the same as in section 5.9.
The test
statistic is derived by an argument similar to that section under the
same sampling scheme.
m
(i1)
Wtj'S
are nm constants satisfying
t wij • 0 (1 • 1•••• n)
j-1
and ut's constitute a random sample of n main group effects. The null
2 2
2 2
and the alternative hypotheses are HO : 0u/oO • 60 and HI : out~o • al •
Let 8 2 be as in section 5.11; so Hi (1 • 1. 2) are the game 8S in
that seetion.
Procedure:
Firat choose n > 1 main groups and m > 1 aubgroups
within each main group,
the om subgroups
~len
start with two observations from each of
and draw an additional observation (if required) at
each subsequent stage.
Proceeding as in the preceding section. we derive
n
2
the same test statistic mr t (Xi.' - x ••• } •
For v· r - 1.
we
i-I
observe that
E
(vlui •
t);
i · 1. 2;
106
are the same as in section 5.11.
5.13.
NW'l6rica Z Eva lua tion and Ccmp..:zz-ieon
Table
_~... 13. ~
OC Function of the Sequential
X2-Test in the One Way Layout
Under Scheme (a) (the number (m • 5) of observations per
group is fixed;
preliminary sample consists of 5 groups with
11 observations in each;
2'
T '/0
0
0.5
1
1.25
1.5
1.75
2
2.25
2.5
2.75
3
60 • 0,
OC
61 • 2;
T
2
2
lao
0.9965
4
0.9~52
4.5
5
0.875
0.803
0.728
0.654
0.586
0.524
0.468
0.419
6
7
8
9
10
11
11.5
107
a· 0.05 •
a>
OC
0.278
0.232
0.196
0.145
0.111
0.0885
0.0723
0.06011
0.0512
0.0475
-
e
e
Table
5.13.2
2
.
Compari8on of the A.S.N. of thP. Sequential X -test with the A.S.N. of the Sequential
F-te8t and the Sample Size of the Fixed Sample F-test in the One-Way Layout (initial
sample consists of 4 groups and 9 observations from each: 60 • 0)
Scheme (a); Procedure I.
The number (m) of observations per group is fixed.
61 • 1
a • 0.05 • a
2
Sequ-x
Sequ-F
I
.....
o
CN
.."
Yixed-F
16
--
3
57
4
52
55
5
6
7
I 39.8
- -
Ho
24.64
25.03
26.17
27.58
29.1
30.65
34.1
3'•• 1
35.6
37.5
39.1
54
56
0. •
0.01·· S
Sequ-x "
,L
Sequ-F
,Fixed-F
•
150
I
111
100
95
96
105
He
65.6
55.4
54.8
56.6
59.3
45.56
45.46
47.02
49.21
51.66
54.23
62.~
6.1 • 2
a • 0.05 •
2
3
4
5
6
7
38- .
-
33
32
35
36
42
I
Hn
20.9
20.5
22.2
24.3
26.6
29
a • 0.01 • B
2
Sequ-x
Sequ-F
Fixed-F
a
Sequ-F
Pixed-F
13.09
14.55
16.11
17.6B
19.22
20.73
72
57
56
55
60
63
I
Sequ-x
2
HO
33.6
32.4
34.7
3" .6
'.0.9
44.2
23.53
25.83
28.43
31.07
33.69
36.27
e
e
e
Scheme (b).
•
~-~--......:l
[
(J
·
n
Fixed-F
6
7
60
56
8
56
9
10
11
12
13
14
54
...$
The number (u) of groups is fixed
o .. 1
SO
S5
48
52
56
•
:Tt
0.05 .. :8
2
Sequ-l(H )
O
Sequ-X (H ) lixed-F
O
38.1J
35.7
34.5
33.9
33.8
34.06
30.64
29
27.81
27.05
26.62
26.31
26.12
26.05
34
34.3
34.8
35.4
~
..
0.01 .. S
2
162
140
120
108
110
99
96
91
98
Sequ-F(H )
O
- ---=:
6
Fixed-l
30
7
28
8
32
27
30
33
36
39
9
10
11
12
13
14
28
n
20
20.4
20.2
20.3
20.7
21.2
21.8
22.5
23.2
~l • 1
..
86.14
70.3
61.53
56.3S
52.7
50.12
48.16
46.93
45.92
----::J
2
Sequ-x (H ) Fb:ed-F
O
17
84
I
15.32
14.52
13.95
13.58
13.47
13.22
13.16
13.1
..
Sequ-F(H )
47 O
70
64
63
60
55
60
52
56
61 .. 2
0.05 ·Ila~~(f.-or---;--Bra 0.05 • afa-" o~oI ~·S
2
2
2
Sequ-X (H ) Sequ-x (H )
SeqU-lCR ) Sequ-x (H )
O
O
O
44.14
13.16
22.15 O
26.14
22
43.86
13.2
26.25
21.89
43.55
13.25
26.39
21. 93
43.42
13.33
26.55
(J
17
18
19
Sequ-F(H )
21.4 O
Sequ-X (H )
O
90.2
75.2.
67
62.1
59.1
57.1
55.9
55.1
-54.7
0 .. 2
1
rL---.-~-"--:O=--.O~l-.-§
( - - - - ( 1-.--:0~.0~5-"-e
n
I
39.9
36.2
34.1
32.9
:)2.3
32.1
32.1
32.4
2
Sequ-x (R )
43.07 a
35.15
30.76
28.19
26.37
25.1
24.19
23.52
23.01
CHAPTER VI
SOME EXACT RESULTS
. 6.1.
Introduction
Bxact properties of sequential tests of composite hypotheses have
been derived in some spacial cases of probability models.
Inferences on
the location parameter of the exponential distribution, with the scale
parameter unknown, are made in section 6.2.
give a streugth at least
(~,
0).
The teet may be chosen to
An analogous test for testing a para-
meter of Pareto distribution in the presence of nuisance shape parameter
is discussed in section 6.3.
Section 6.4 contains a procedure for
testing the mean of the rectangular distribution with unknown variance.
A description of a procedure to choose one of the three hypotheses about
the mean of this distribution is in section 6.5.
6.2.
Pesting Location Parametel't of the E:cponentiat Distribution
X has probability density function g(x, el , 8 2)
The null and
HO : 6 1 • 6 10
tb~
given by (4.9.1).
alternative hypotheses are
vs.
HI: 6 1 • 811 ;
6 11 > ala;
82
unknown.
...
The maximum likelihood estimatcr 8 of 8 given by the initial
2
2
sample·
and i '
DO
{Xi•.. ,
x~o}
_.
1s x
no
. (1"0)
-
x(l)
1s the assn of this sample.
where
(r)
xC!) • min(~l' •• ,
The density function of
...
xr )
82/a 2 1.
0-1 DO-2
11
pet) • DO
t
exp(-not)/r(nO - 1);
t >
A
.
Writing Zj(02)
Zj(02) • -- for
xj
D
1n (g(xj
<
010;
,
j.
0
(6.2,1)
A
811 , 8 2)/B(Xj
_
810 , 02»
1, 2, ,., ;
we have,
and defining
- • for
(6.2.2)
For a constant C > 0
strength of the test);
(to be determined later to achieve a desired
..
we define 8(02' 62)
to be a test which follows
the decision rule:
accept BO if
accept HI
(n)
x(l) < 811 ,
..
if n ~ C 62/(°11-°10)'
(6.2.3)
otherwise continue by observing xn+l
Pztopsl'ties of the Test
...
Let n(t)· [Ct0 2/(Oll-OlO)] + 1 where t · 82/8 2 and where
denotes the largest integer less than or equal to q (> 0).
L(ol'
82 • 8 2)
• Pr{S(8 2 , 8 2)
accepts
[q]
HOI0l' 8 2 • 02} ,.,
Pr{at least one of the first net) observations falls below 81lI81.e2,82}
•
(6,2.4)
III
(6.2.5)
•
Thus L(e l , 82 .le 1 < 811) . I (1 - exp{-(Oll-8 1)J/6 2})(G(j) - G(j-l»
j-l
which, after integration 8ftd a rearrangement, gives
L(e l , 8 2 18 1 < 8 11) •
nO-l
r
(-1lo!1)
Ill'<
nO+r-~
nO+r-l.
1 - ~r~(n--o---l~)--r:O rl(n +r-l)j:l{j
- (j-1)
}exp{-(8 11-8 l )j/e 2)
(nO'l)
•
o
(6.2.6) .
Now for
61 < 811 , we aee that the following relations bold:
a (6 1-8 11)j/e 2
8 (s)
(811-81)/8 2
-1
•
J: j e
• 8 2De (e
1
.1-1
•
(0 1-9 11)j/e 2
8
t (j-l) e
j-l
consequently,
•
,.J
•
t U - (.1-1) }e
• (8 1-9 11) Ie 2.- (s)
(s)
-xl, 1 (e (6 11-8 1) Ie 2 -
• 8 2e
(8 1-8 11H/8 2
.1-1
where De
- 1)
a
• 8 (l - e
2
and
(9 1-6 11 )/8 2 (s) (8 11-8 1 )/9 2
-1
)D e (e
-1)
1
(6.2.7)
s
( ••• ) .
-1
1)
Cl
--SC
.•• )
partial derivative (v.r.t.01)
of the expression viti,Ln the parentheses. Thus (6.2.5) - (6.2.7) imply
1
1s the
8th
aS1
L(GI' 8 21° 1 !. ell) - 0 and L(el' E}2101 < 811) 1-(1-e
(8 1-8 11 )/6 2 (DO/l)
)
The A.S.N. of
nO-1
r (n -1)
o
..
r
•
I
1"""0
S(8 , 8 )
2
2
(-nO'l) nO+r-1 (DO+r-l) (811-oi/e2 -1
rHo +r-l) 82
De
(e
-l)
0
1
(6.2.8)
is liven by
112
net)
. ( (j-l)
E (nt ol , 62 , 8 2) . t j Pr,x(l)
j .. l
(n(t»
net) Pr{X(l)
~ Bl1 }·
Thus
~ 811 , Xj
<
ell} +
E (n Ie1 , 8•21 82 , 81 < 011) •
-(8 11-8 1)/e 2 net)
-«8 11-8 1)/8 2) (j-l)
) I j e
j-l
(after simplification) n~t) e-«6 l1-0 l )/E2Xj-l) •
j-1
-(8 -91)n(t)/e
2)/(1 - e -(8 11-91)/6 2). E (nle , 8 , 8 , 8 J:: 8 ) 11
(1 - e
2
2
1
1
11
net). Multiplying by the density of net), summing and proceeding as in
n(t)e
-«Oll-8 1)/8 2)n(t)
+ (1 - e
the case of OC function, we obtain,
(l-e
(6.2.9)
•
•
where E net) • j:lj{G(j) - G(j-l)} - j~lj(rp(no-l) - rq(no-l»/r(no-l)
(6.2.10)
where
p - n
031t, q - p - (nO/t)
and where
rx(a)
is an incomplete
..... function defined by rx(n). I:tU-1e-tdt.
ChoiCfJ of C to Control 'First }~ind of E:t'l'Or' P1tobability
Since Pr{HO is acceptedle1 • OIl} • 0; the test baa power one. To
ensure tbat probability of accepting HI (when 8 • e ) is controlled
1
lO
by a. 8~; we need to find C which satisfies L(8 , 8 ) ~ 1-0.
10 2
Using net) > 1t and 811 > 810 , we have, -(611-610)n(t)/82 < -Ct. Thus
(6.2.4) i1llpl1es
.
.
Pr{S(8 2, 92) accepts HllolO,82,62}
113
<
exp{-(8l1-810)n(t)/e2}< exp(-Ct)
Multiplying by the density of
integrating and simpHfying, we have,
t,
-(n -1)
O
»
1 - J,(O 10' 8 2 ) < (l + (C/nO
nO(a
-l/(n -1)
0
- 1)
.
Thus the choice C·
ensures that tfirst kind of error probability' ia
always less than a.
A lower bound on the error probability will be
found below.
B(lunds on the OC and A.S.N. Functions
Using the inequality
£t < net) 5:. Rot + I, we have, for :'8
< 8 ,
11
1
exp{-(6 l1-6 l )lt/9 2 ) > exp(-(el1-~)n(t)/82} ~ exp{-(811-81) (tt+l)/9 2 },
E exp{-(8 11-9 1)lt/8 2} > 1-L(01,8 218 1 < 911) ~
thus,
E exp{-(9 11-9 1)(It+l)/8 2}, which simplifies to
-( \1-81)/9 2
C(SleSl)
-(DO-I)
(1 + nO(ell-e1~»
-lIen -1)
The choice C· DO(a
0
- 1)
(6.2.11)
1 - e
gives the bounds
a exp{-(911-9l0)/62} ~ 1 - L(8 10 , 82)
<
(6.2.12)
a
on the probability of wrong decision when HO is true.
Using
E t • (n -1)/nO' we similarly obtain the following bounds on the A.S.N.,
O
1 -8
-1
)
C8 2(1-nO )(8 11-8 10)
< E (n e ,o2,8 >8 ) ~ 1+C8 (1-n-)(8
1
2
1 l1
11 10
O
-1
-1,
(6.2.13)
(nle 1 ,
82t 91 ~ 911 ) will either be equal to its
upper bound or to its lower bound. Also
It follows that E
114
(1 - e
(6.2.14)
For C· n <a-
o
l/
(no-l) - I), we have from (6.2.14), when H is true;
O
(6.2.15)
6.3. !testing a Pal'alnetel" of Pal"eto Distnbution
Xl' X2 , ••• are independent with a common p.d.f. g(x, a, k)
given by (4.8.1). The hypotheses to be discriminated are
k ;
l
•
•
The initial sample {xl' •• , x } gives
DO
8
l\>
A
8 •
being unknown
D
(6.3.1)
' (nO)-l
O{ t In xi/xC!)}
i-I
the test
.
Sea,
,
a)
is defined by the decision rule:
(11i)
(n)
,.
kl (11) accept Hi 1f n
otherwise continue by observing xn+l.
(1)
accept HO if
x(l)
<
~
CIa 1n(kl /kO)
(6.3.2)
A
Writing t · Cia In(kl/kO)'
t . ala,
115
n(t)· [lIt] + 1, we see that,
Pr{n(t) • j) - Pr{j-l ~ tIt < j} ~ C(j) - G(j-l)
where G(j)
has
been defined in section 6.2.
Properties of the Test
a, a) • Pr{S(;, a)
Holk,;.
accepts
a} • 1 - Pr{x(n(t»
(1)
...
kIlt, a. aI,
so we easily obaerve that.
A
L(k.
a, af k ~ kl)
Since L(k, alk
•
I {I
j-l .
- (k/kl )
>
-
aj
an(t)
'"
• 0
and L(k,
a. al k <
k ) • 1 - (k/k )
l
l
.
<
}
(6.3.3)
k1) - t {I - (k/k1)aj}{G(j) - G(j-l)} •
j-l
(nolt)
n O-1
ren -1)
j
I
0
t
0.0-2
e
-(nolt) t
dt, we obtain, after
j-l
eimplifyins,
L(k. alk ~ kl ) • 0 and t(k, alk
1 -
.
l
j-l
<
k1) •
(k/kt)ja Cr (nO-I) - rq(nO~l»/r(nO-l)
where p and q are obtained from (6.2.10),
and where rx(a)
net)
t
t
being
4S
in this section
is as defined before.
The expected sample elze of SCa, a)
j-l
(6.3.4)
p.
j pr{X(j-l) > k
X < k }
(1)
I' J
1
...
1s E (nlk. a. a) •
+ net) rdx(n(t»
(1)
> k}
-
I-
which 8ives
E (nlk, ;, a, k ~ k1) • net) and E (nlk, ;, a, k < kt ) •
t j(k/k )a(j-l){l -(k/kl)a} + n(t) (k/kl)an(t) • After simplifying. we
1
j-l
have,
...
E (nlk, ;, a, k ~ kl ) • n(t) and E (nlk, a. a, k < k ) •
1
{I - (k/k )an(t)}/{1 _ (k/k )8}
(6.3.5)
1
1
net)
Multiplying by the probability function of
116
net)
and summing,
E (ntk,a,k:>kl)·
•
t j(f (nO-I) - rq(nO-l»/f(nO-n and E (nlk,a,k<ki
j-l P
r
• (1 -Oc/k1)a)-1{1 (k/k 1)j3(r (nO-I) - rq(nO-l»/r(nO-l)}
j-l
P
(6.3.6)
Choiee of C to Achieve a Stren(!th At Least p., 0)
Pr{Ba is aeceptcdlk1, a}
is acceptedl Ito' a} ~ a
pr{H l
L(k, a)
is
O.
a
We need
C such that
L(kO' a) ~ 1 - a
which implies
where
Solving L(kO' a) • 1 - a for C is quite
complicated. Since n(t):> (tIt), k < k • we observe tht:t
O
1
I
'
"
Pr{S( •• a) accepts HI kO' at a} < (kO/kl >81ft • e-CIt • Multiplying by
8S
in (6.3.4).
A
the p.d.f. of
and integrating. we obtain,
t
-(n -1)
is aceeptedlkO' a} • (1 + (C/nO»
0 • Equating the right
side to a, we obtain,
-1/nO-1
C • DO(a
- 1). A lower bound on the probability of first
Pr{Hl
kind of error will be obtained below.
Bounds on
ac
and A. S. N. Function8
The inequality (tIt) < net)
(k/k )8(k/k )a!/t
1
1
~ (k/kl)an(t)
~ (tIt)
<
(k/k
>Az
1
at/t
DO
1
this together with (6.3.3)
(k/kl)at/t.
(k/k )8t/t ~ 1 - L(k,
1
a1
k -(n O-1)
E (k/~)
• (1 - In -k)
,
DO
I
the OC function,
a"
k -(nO-I)
1- (1 - - 1n "k)
< L(k,alk<
gives
+ 1 implies, for k < k ,
1
alk
< k ) < E
l
(k/k )at/t.
1
Since
we obtain the following bound on
it.
a1
k l ) ,~ 1 -<'k) (1- I
DO
It -(nO-I)
In "k)
1
(6.3.7)
-1/(°0-1)
If HO 1s true, tbe choice C· nO(a
- 1)
a(kO/k l )· ~ 1 - L(tO' .)
< U
111
gives,
(6.3.8)
which provides a bound on the first kind of error probability. Since
-1
E (tIt) ". 1 (1 - nO ), (6.3.5) gives the following bounds on the A.S·.N.
function:
I
-1
1(1 - nO ) < E (n k,
which implies that for k
.
-1
k ~ k l ) ~ 1 + t(l - nO )
8,
~
k ,
l
upper bound or its lower bound.
E (nlk, a)
(6.3.9)
will be equal to either its
Also,
6.4. Testing the Mean of tM Rectangular Dismbution
Let
{Xl; 1 • 1, 2, ••• }
be independent with a common probab:Uity
denaity function
a(x, a,
a) _0-
1
for
e--<J/2~x~e +0/2
(6.4.1)
The Dull and the alternative hypotheses are
BO : e ~ eO
HI: 8 ~ 8 1 t
va.
-
< eO < e 1 < • t
The first stage sample of size nO
gives
estimator of
•• J
0
as
. ruge
..
(xl'
(J
•
.
-nO
D O-2
p(u) • n (oO-I)o
(au
O
no-I)
U
t
X
.
a,
no )
0 ~
U
<
unknown •
(7
the llaximum likelihood
and t~ p.el.f. of
(J.
Let
(f.
...
pa
..
(J
i.
where p
I(p). PrIX < 90 - o/2} + Pr{X > 81 + a/2} <
- + Pr{X > 6 + a12} and I(p) Is a strictly decreasing
Pr{X < 8 - a/2}
1
0
function of p.
18
areater than 1
80
that
..
118
In the usual notation, we have, for
-- if
eo-0/2 -<
0
if
6 -0/2 ~ x j ~ 6 +0 /2
1
0
co
if
6 +0/2 < x ~ 8 +0/2
1
j
0
Zj (0).
.
x
<
j
.
(2
and
xj
j.
1, 2, •••
1
0-0/2 and Zj(O)..
be the teat with the decision rule:
Defining Zj(O). -- if
let 8(0, 0)
9 -6 <
1 0
0 -0/2
if
< 9
xj > 91+0/2;
Accept BO without taking further observations if 61 ~ 60+C,
otherwise accept or reject H according as the lower or th~ upper
O
inequality in 9 -0/2 < xn
1
Since Pr(u > i19, a,
> 6
a
< 8
+Q/2
is first violated for n!.l
0
1-e O} • pr(8 1-O/2 <
x < 90+0/219,
a!. 91-6 0+0, the
continues
.. procedure
..
0
a a
follows that if
probability one for
6
£
[6 1 -
(2
'2 + 2'
60
~6.4.2)
a, a)D;
it
indefinitely with
+ 2 - 2'], Thu. we
assume
80 + (p-l)o/2 < 81 - (p-l)a/2
(6.4.3)
r1uJ Pl'obabi Ute of Accepting the NuZZ Hypothesis
In the usual notation. we have,
(1)
for
e~ 80 +
.
,ala
E:
L(8,a
..
t (
n-1
L(O,
(i1)
.
a
eO + 2';
-
6 0 - 8 1 + a n-1 6 -9 1
a
a,
)
(-
j.
-
1, •• , n-1;
a
2'a + 'f
a
010£ (2(8 - 80) +0,
for
a, ola& [0, elat O))
L(9,
• 1,
a» •
(8 -8 ,2(8-6 )+
1 0
0
a
:'lPr{e1 - 2' < Xj <
..
(p-l)o/2;
)
po».
•
1.
8 1 -9 -
Xn < 81 o a
..
2" + 2'
-r---:::-----::~-
e1
-6 0 - iJ + a '
..
,-
8 0 + (p-1)a/2 < 6 < 01 - (p-l)a/2; L(O,a,ala
119
a
2'} •
E:
[0,8 -9 ]) • 1,
1 0
integrating and observing that for a positive integer
n
we have, after e1mpl1fy-
{1Ilpo
{or - (8
-+e -26)1) + (nO(nO-l)}1 O
1
ft
{(pi')
O-nop' (2(0-8 )iO) ftO-1 +
0
(no-1) (2(e-t O) + 0) 11o}]
where
1,.
(11)
nO(na-1)(pO)
for
(6.4.4)
-DO
,Il •
eO + (p-1)o /2
<
61 - e +
e
<
120
t:J /2
and
e1 - (p-1)0 /2
A· 8 1 - 8 0 + o.
(6.4.5)
fts B:cpectsd Samp'Le Sia6
In the usual notation, we have.
(i)
for
K (n18,
0,
e~
0,
81 - (P+l)0/2; E (D18,
a e (8 1-80 , po» • 1
a. a, a& [0, 81-80 ]) •
for 81 - (p+l)a/2 < 8 < eO + (p-l)a/2;
E (nle,
o. G £ [0, 8 1-8 0 ]) • 0, E (nle. 0. a,
0,
(il)
-1,
a,
if a >
2(8 1-6> a
i.e., 81 - 0/2 <
e + 0/2
a
t
then in the present case
<
eO + 0/2 and
121
80,
£
(8 -8 , 2(6 -8)-0»
1 0
1
a> 2(9-8 0) +
0.
•
t n(Pr{8
n-l
1 - 0/2
pr{'l - 0/2
Xj
<
80 + ~/2;
<
Xj
<
80 + 0/2;
<
...
•
t n(
8+~-8
+2
8
2)n-l~ 1
2 G 1
..
j •
.t,
j . 1,
.. , n-l;
1,
n-l.
-~-O+~
2 a
2) -
n-l
81 -
(iii)
for
80 + (p-l)o/2
E (nl'. ~, a,
a ~ [0,
present case
Pr{~
we have, E (nI8.
~
8
~
<
a
aG
2' -
80
+ o/2}] •
a
8 + '2
if 81-8 0 < a then ainee in the
~ 2(e-e ) + a} • 1 aDd Pr{~ ~ 2(8 -8) + a} • 1,
O
1
a, ~ (8 -8 0 ,
a, a
a+ a -
•
81 ';'1D(
(iv)
Xj
xn->
a12} +
'1-'0]) • 0,
pa» •
1
t n[Pr{el - ; /2 < X.. < 8 + 0/2;
0
n-l
.I
<
n- 1 -
81 - (p-l)a/2;
•
Pr{8 1 - 0/2
X < 8
eo + 0/2;
'0 80 +
)(
.. ,
e<
)
n-l;
Xn< 8 - a12} +
1
xn-> 80 + a12}] •
j . 1, ••• 0-1;
a- 91 n-l
0
for 81 - (p-l)aI2 <
1,
j •
g
•
'I -
~ +a -
80 + (p+l)a/2;
E
80
(nle,a,a,a
~
[0,81-80)
• 0, E (DI8,a.o,a ~ (8 1-80 , 2(8-80 ) -0) • 1. a
a
a ~
8
..
..
•t n( '0+2+ '2 0-1 e- 60 + '2- '2
! (n I" a, o. a £ (2(e-e O) - a.
.G
)
(0
)
n-1
pes».
•
fl
.....
lJ
G
, + I - eo - I
(v)
!
for
e ~ 80 + (p+l)aI2; £(nle, 0, a,
(nle, 0, a,
a£
a~
(0, '1-'0]) • 0,
(81-8 0 , pal) - 1.
Writing E (nI8, a) -
f E (nle, a,
a,
a• t)P1(t)dt
(Pl(t) being the
B· 2(81-8) + a, ~. 2(8-8 0) + a, integrating and
observing by (6.4.3) that po - B < 0 in case (ii),
po - A < 0 in
the p.d.f. of a),
case (ii1),
pfl - C < 0
in CBse (tv);
122
ve have, after simplifying,
for
(1)
e ~ ....
w.
(p+1)a /2
<nla.o). 1 (DO(nO-l»
E
-1
nO
{(PJ)
+ (8 1-6 0 )
nO-l
«n -l)(8
O
1
-eO)
- nOpa)}
(6.4.7)
for 80 + (p-1)a/2 ~ 8 ~ 81 - (p-1)a/2
8 -2
0
B (nle.a) • lo[A
{1n a - In((01-80) - (p-l)a)}{pa - A} +
D -1 D -1
r DO-1-r
~ (0 ) (-1) A
-{ (8 -8 _ (p_l)o)r _ or) +
(11i)
r
p1
po
1
r
0
n -2 n -2
r nO-2-r
(0 )(-1) A
[or _ (8 -8 _(p_1)a)r})
2
r-1
r
(v)
1
r
for
e~
80 + (P+l)0/2
123
0
(6.4.9)
DO
1
! <nl8,o) •
I,
<no (nO-l»- [p:7)
DO-1
{(nO-l)(&l-GO)-llot:O})
+(81"'8 0 )
(6.4.11)
6.5.
Choosi.ng 0n6 of ths ThJtec Hypotheses About the Mean of the
RetJ1XmgUZazt Dlstnbution IIi th UnknOf.Jn VaJ'imsae
The probability density function of X is as in (6.4.1).
We consider the problem of choosing one of the three mutually
exclusive and eXhaustive hypotheses
BO : 8 <
4
< aO < a 1 < •
-
HI: a O !. 8 !. a l ;
0 t
1i1)
80 <
o
<
0
: e • '0 va.
8 2 <0, 0)
e1
u
and
(8 , 8 )
2 3
~ 8 2 < 'a1 < 8 3
i1)
satisfying
81 - 80 • 83 - 82
a 1 • (8 2 + 8 3 )/2.
in .eetion 6.4, let S I (a, (1)
S<O,
be the test of
0)
be the te.t of
H : e • 8 1 with decision rule (6.4.2)
81
Be
: e • 8
2
2
va.
Be
: 8 • 8
3
rule obtained froll (6.4.2) by replacing 8 0 by
Let
1 ;
(6.5.1)
(8 0 , 8 1 )
aO • (8 1 + 8 0 )/2,
Delioins
Be
-0
8
a > 0
j
We choose iutaxva1s
i)
12 : 8 >
82
and let
3 with decision
aDd 8 1 by 83.
be defined by:
S(o, 0)
acc.apts 8
accept. 'I
accepts 82
accept. HI
accepta 81
accepts 8
3
accepta 82
2
accepts
It followa from the definitions of 8 1 (a, a);
S(o,
a)
Bo
accepts eO
i · I, 2;
that
is veIl defined aince
acceptance of both '0 a,1Kl
'3 1. ll1poaaible
124
(6.5.2)
Pros (6.4.3), a necessary and aufficient condition for St(o, a) (1-1,2)
to terlldnate witb probability one is that
0j_l + (p-l)0/2
<
ej - (p-l)a/2;
(6.5.)
j . 1, 3
Thus we assume (6.5.3).
DC F1InJJ tions
Let L. (8,
i
0) •
Pr{Si(o, a) accepts
a)
8jl8, 0, a}; (i,j) • (1,0),
a,
and L, (8, a) • E L (8,
(2,2)
S(o,
0,
a); i · 1, 2.
i
~
and by (6.5.2). ve obtain (denoting pr{B
i
. by L(RiI8, a)i
i·
Iy the definition of
.
ia acceptedle, a)
0, I, 2),
L(Bole, a) • L. (e, a); L(u1le, a) • L, (e, a) - L. (e, a).
1 2 1
L(82Ie, 0) • 1 - L <e, 0)
(6.5.4)
'2
If.e denote tbe risht lides of (6.4.4) - (6.4.6) by L{l'(e, a).
a1
i • 1, 2, 3 respectively and 1f we obtain L~i)(e, 0) fxc.
L~~)(8. a)
( t . 1 - 3)
by -aiDa 80
""d 281
to 8 2
aDd
83
respectively, we have (ueing (6.5.4» the following formulae for the OC
functions:
L(Bole, 0) • L~l)(&. 0)
1
• L~2)(e,
a)
if e ~ 80 + (p-l)0/2
80 + (p-l)a/2 < e < 81 - (p-l)0/2
if
1
• L~3)(8. 0) if
e ~ 01 -
1
(p-1)0/2
(6.5.5)
L(Rlle. 0) • L~l)(e, a) - L(1)C8, a) if 8 ~ 8 + (p-l)a/2
0
2
81
• L(1)(8, a) - L(2)(8. 0) if eo+(p-l)o,/2<8<9 1-(p-1)a/2
'2
sl
• L(1)(8, a) - L(3)(8. a) if 81-(p-l)oI2~e<82+(p-l)0/2
8
2
a
1
• L(2){8, 0) - t(3) (8, a)
8
2
8
1
• L.(3)(&. a) - L(3)(8, 0)
2
II
125
if
82+(p.-l)aI2<8<8 3-(p-1)0/2
if '~'3 - (p-l)0/2 (6.5.6)
L(H2,e,
0) •
1 - L(1)(6,
8
0)
• 1 - L(2)(6, a)
8
if
2
2
• 1 - L(3)C8.
8
2
a)
e2
82 + (p-1)a/2
if 82 + (p-l)0/2
if 8
126
~
<
e
83 - (p-l)a/2
< 8) -
(p-l)o/2
(6.5.7)
BIBLIOGlAlHY
(1)
Antle, C. E. and Bain, L. J. A property of maxi1D\l.Dl likelihood
estimatora of location and 8cale parameters. Siam Revi~
Z~ (1970), 251-253.
[2]
Appleby, R. H. and Freund, R. J. An empirical evaluation of
multivariate sequential procedures for testing Mans.
Annals of Mathematical. Statistic8 J 33" (1962), 1413-20.
(3)
Armitage, P. Sequential ana1ysia with IDOre than two alternative
hypotheses and its relation to discriminant function
analysis. Journal of ths Royal, Statist·;'aa't SOciBty"
SsM-BS B~ tz" (1950), 137-44.
(4) Bain, L. J. and Antle, C. E.
Inferential procedures for the
Weibul1 and generalized g8llllDll distributions. AeztOspaDe
R"«U"ah Labozoa1;()zei,ss, '!schnical Repol't. (November 1970).
(5)
Baker, A. G.
Properties of some tests in sequential analysis.
31. (1950), 334-46.
Biomet~kaJ
[6]
Barnard, G. A. Sequential tests in industrial statistics (with
discussion)• .TOu:l"lf4'L of t1w Royal Statistical Sooietyl
Saries B, 8. (1946), 1-26.
(7)
Barnard, G. A. Statistical inference. Jouma'L of t1"fS ROlJal
Statisti«d Soci..ty. S"nes BJ U., (1950), 115-43.
(8) Bhate, D. B. Sequential analyst. with special reference to
di.tribution of sample .ize. Ph.D. Thesis (1955). University
of LoIldon.
[9]
Francis, V. J. On the distribution of the SUID of n values drawn
fr01l a truncated normal population. Jouma1, of thB Royal.
Statistica't Society. Ssritls B, 8~ (1946), 223-32.
(10) Ghosh, B. I..
Sequential analysis of variance under rando. and
1Dlxed models. IOU%"n(l1 of the Amerioan Statistica't AsBociation~
. 8S, (1967). 1401-17.
[11] Ghosb, J. K. On some properties of sequential t-test. Catoutta
StatiatioaZ Association Bu'Lutin, 9, (1960), 77-86.
(12) Ghosh, J. It. On the monotonicity of the OC of a clalis of sequential probability ratio testa. Ca'tautta Statistical A8soaiation
Bu'tZBtin~ 9, (1960). 142.
(13) Hajnal, J. A tvo sa1Dple sequential t-test.
(1961), 65-75.
(14) Hall, W. J.
Biometrika~ 48~
Some sequential analo.. of Stein's two-stage test.
Biom8t~ka.
49,
(1~62),
367-78.
(IS] Ball, W. J., Wijsaan, R. A. and Ghosh, J. K.
The relationship
between aufficiency aDd invariance with applications in
aequential analysis. Annau. of NathsmaticaZ StatiStiC8~
38, (1965), 375-614.
(16) JacuOD, J. E. and Bradley, It. A. Multivariate sequential
procedures for tatins aeans. rtlchnlcaZ Reporl 10, (1959),
Vir&in1a Polytechnic Institute.
[17) Johuon, N. L.
80M notes on applications of sequential analysis
in the analysis of variance. Annals cf Nathsmaticat Statistics,
24, (19S3). 614-23.
(l8) Johnson, N. L. Sequential procedures in certain component of
variance probl... AnnaZs of Mat1umtatica't Stati.stics, 2S,
(19'4), 357-66.
(19) Johnson, N. L. and Kat., S. Continuous Univariate Distributions"
Votumes t and 2, Houahton Mifflin, Boston, 1970.
(20) larst, O. J. and Polowy. H. Samplina properties ,of a median of
a Laplace distribution. .Anrfatioan Mathematical, Monthll/, ?O~
(1963), 628-36.
[21] Mosh_n, J. A -ethod for a.lecting the aize of the initial aample
u Steinle two-atage proeedure. AnnaZs of Mathematical
Statuti.c.~ U. (1958), 1271-75.
(22) Myera, M. H., Schneiderman, M. A. and Andtag8, P. Boundaries for
cloaed (vedae) sequential't-test plans. Biometrika, ~3,
(1966), 431-37.
(23) Paul.OIl, I. A aequential procedure comparing several experimental
categories with a standard or control. Annals of Mathematical.
Statistics" aa.. (1962), 438-43.
(24) Ray, W. D. Sequential analysis applied to certain experimental
designs in the analyeis of variance. Bi.ometrika, 43, (1956),
388-403.
(25) Rushton, S. On a aequential t-teat.
326-33.
Biomstl"Uu:%.1 31" (1950),
(26] Ruahton, S. On a two-aided sequential t-test. Biorrtetl'ika, U,
(1952), 302-08. (Correction in BiometPika" 4l, (1954), 286.)
128
[27] Sacks, J.
A note on the sequential t-test. Annat. of
StatiBtiC8~ 36~ (1965), 1867-69.
HathematicaZ
[28] Sobel, M. and Wald, A.
A sequential decision procedure for
choosing ODe of three hypotheses concerniDa the unknawu
mean of a normal population. Annals of Nathf1fl'lfJtical
Statistics. 20, (1949). 502-22.
(29) Stein, C. A two sample test for a linear bypothesis whose pQWer
is independent of the variance. Annals of Math8matiaa't
~tistic., Z6. (1945), 243-58.
(30] Tweedie, M. C. X.
Statistical properties of' inverse Caussian
cllstrlbutiona, I. Annals of Mathematioat Statistics, 28,
(1957), 362-77.
(31) Walde A.
On cUIIlUlative BUllIS of random variablea.
Mathematical Stati,ati(JB. lS" (1944). 283-96.
(32) Wald, A.
8llqu.ntiaZ Analysis.
119
Annals of
John Wiley, New York, 1947.
© Copyright 2026 Paperzz