Puri, M.L. and P.K. Sen (1966). "On some optimum non-parametric procedures in two-way layouts."

ON SOME OPTIMUM NONPARAMETRIC PROCEDURES IN TWO-WAY LAYOUT*

by

Madan Lal Puri 1) and Pranab Kumar Sen 2)
University of North Carolina, Chapel Hill

Institute of Statistics Mimeo Series No. 485
July 1966

* Work supported in part by the Army Research Office, Durham, Grant DA-31-124-G432.
1) After August 31, 1966, at the Courant Institute of Mathematical Sciences, New York University, New York.
2) On leave of absence from Calcutta University.

DEPARTMENT OF BIOSTATISTICS
UNIVERSITY OF NORTH CAROLINA
Chapel Hill, N. C.
ON SOME OPTIMUM NONPARAMETRIC PROCEDURES IN TWO-WAY LAYOUT*

Madan Lal Puri and Pranab Kumar Sen
University of North Carolina, Chapel Hill
For the estimation and testing of contrasts in a two-way layout, some optimum nonparametric procedures based on Chernoff-Savage [2] type rank order statistics are considered here. The asymptotic properties of the proposed methods are studied and compared with those of the least squares method.
1. INTRODUCTION
In a two-way layout with one observation per cell, the observable random variables $X_{i\alpha}$ ($i=1,\ldots,c$; $\alpha=1,\ldots,N$) are of the form
\[
X_{i\alpha} = \mu + \beta_\alpha + \tau_i + e_{i\alpha}, \qquad \sum_{\alpha=1}^{N}\beta_\alpha = 0, \quad \sum_{i=1}^{c}\tau_i = 0, \tag{1.1}
\]
where $\mu$ is the mean effect, the $\beta$'s are the block effects, the $\tau$'s are the treatment effects, and the $e_{i\alpha}$'s are the residual error components.
It is assumed that the $e_{i\alpha}$ ($i=1,\ldots,c$; $\alpha=1,\ldots,N$) are independent and identically distributed random variables (i.i.d.r.v.) having a continuous cumulative distribution function (cdf) $F(e)$. Let $\theta = \sum_{i=1}^{c}\ell_i\tau_i$ (where $\sum_{i=1}^{c}\ell_i = 0$) be any contrast in the $\tau$'s, and let
\[
V_\alpha = \sum_{i=1}^{c}\ell_i X_{i\alpha} = \theta + \sum_{i=1}^{c}\ell_i e_{i\alpha}, \qquad \alpha=1,\ldots,N. \tag{1.2}
\]
We denote the cdf of $\sum_{i=1}^{c}\ell_i e_{i\alpha}$ by $G_c(e)$ and assume it to be symmetric about $e=0$. Then $V_1,\ldots,V_N$ are $N$ i.i.d.r.v. having the cdf $G_c(x-\theta)$.
The least squares ($\ell$.s.) estimate of $\theta$ is given by
\[
\hat\theta_N = N^{-1}\sum_{\alpha=1}^{N} V_\alpha = \bar V_N \quad (\text{say}), \tag{1.3}
\]
and if $\sigma_c^2$ is the variance of $G_c$, the variance of $\hat\theta_N$ is equal to $\sigma_c^2/N$. Further, if $F$ (or equivalently $G_c$) is normal, $\hat\theta_N$ is the minimum variance unbiased (MVU) estimator of $\theta$. The object of the present investigation is to consider some nonparametric estimators of $\theta$ and to compare their performances with that of $\hat\theta_N$. These estimates are based on a celebrated class of rank order statistics due to Chernoff and Savage [2], which we may pose as follows.
Let $Z_{N,\alpha} = 1$ if the $\alpha$-th smallest observation among $|V_1|,\ldots,|V_N|$ is from a positive $V$, and otherwise let $Z_{N,\alpha} = 0$, for $\alpha=1,\ldots,N$. Then the desired rank order statistic may be expressed as
\[
h_N = h_N(V_1,\ldots,V_N) = \frac{1}{N}\sum_{\alpha=1}^{N} E_{N,\alpha}\, Z_{N,\alpha}, \tag{1.4}
\]
where $E_{N,\alpha}$ is the expected value of the $\alpha$-th order statistic of a sample of size $N$ drawn from a distribution
\[
\psi^*(x) = \begin{cases}\psi(x)-\psi(-x), & \text{if } x \ge 0,\\ 0, & \text{if } x < 0,\end{cases} \tag{1.5}
\]
$\psi(x)$ being symmetric about $0$, and it is assumed that $\psi$ satisfies all the regularity conditions of Theorem 1 of Chernoff and Savage [2]. In particular, if $\psi(x)$ is the standardized normal cdf, $h_N$ in (1.4) will be termed a normal score statistic.
Along the same fashion as in [3], these rank order statistics will be used to derive suitable translation invariant robust estimates of $\theta$. Further, it will be shown that the use of the normal score statistic will lead to an estimator which is (asymptotically) at least as efficient as the $\ell$.s. estimate, for all $G_c$ (and hence $F$). In this sense, the proposed method can be regarded as an optimum one.
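The statistic (1.4)-(1.5) is computed directly from the $V$'s. The following minimal Python sketch illustrates the normal score case; the expected order statistics $E_{N,\alpha}$ are approximated here by the quantiles $\psi^{*-1}(\alpha/(N+1))$, and the function names are ours, not the paper's.

```python
import numpy as np
from scipy.stats import norm

def normal_score_statistic(v):
    """h_N of (1.4) for the normal score case, computed from V_1,...,V_N.

    E_{N,alpha} is approximated by psi*^{-1}(alpha/(N+1)); since
    psi*(x) = 2*Phi(x) - 1 for x >= 0 (cf. (1.5)), psi*^{-1}(u) = Phi^{-1}((1+u)/2).
    """
    v = np.asarray(v, dtype=float)
    n = len(v)
    order = np.argsort(np.abs(v))                 # alpha-th smallest |V| is v[order[alpha-1]]
    z = (v[order] > 0).astype(float)              # Z_{N,alpha}: 1 if that observation is a positive V
    e_scores = norm.ppf(0.5 * (1.0 + np.arange(1, n + 1) / (n + 1.0)))   # approximate E_{N,alpha}
    return np.mean(e_scores * z)                  # h_N = (1/N) sum_alpha E_{N,alpha} Z_{N,alpha}

# illustration with V_alpha = theta + error, as in (1.2)
rng = np.random.default_rng(0)
print(normal_score_statistic(0.5 + rng.standard_normal(50)))
```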
2. RAW AND ADJUSTED ESTIMATORS OF CONTRASTS IN THE $\tau$'s

By definition, $h_N(V_1-t,\ldots,V_N-t)$ is nonincreasing in $t$, and further it follows from (1.2), (1.4) and (1.5) that the distribution of $h_N(V_1-\theta,\ldots,V_N-\theta)$ is symmetric about some known origin, which we denote by $\mu_N$. Let us then denote (following [3])
\[
\theta^*_N = \sup\{t:\ h_N(V_1-t,\ldots,V_N-t) > \mu_N\}, \qquad \theta^{**}_N = \inf\{t:\ h_N(V_1-t,\ldots,V_N-t) < \mu_N\}, \tag{2.1}
\]
and let
\[
\tilde\theta_N = \tfrac12\bigl(\theta^*_N + \theta^{**}_N\bigr). \tag{2.2}
\]
Proceeding then precisely on the same line as in [3], it follows that $\tilde\theta_N$ is a translation invariant robust estimator of $\theta$ and its distribution is symmetric about $\theta$. Also, the asymptotic relative efficiency (A.R.E.) of $\tilde\theta_N$ with respect to the least squares estimator $\hat\theta_N$ is
\[
\mathrm{A.R.E.}_{\tilde\theta_N,\,\hat\theta_N} = \sigma_c^2\Bigl[\int_{-\infty}^{\infty}\frac{d}{dx}J[G_c(x)]\,dG_c(x)\Bigr]^2 \Big/ \int_0^1 J^2(u)\,du, \tag{2.3}
\]
where $J(u) = \psi^{-1}(u)$ is the inverse of the cdf $\psi(x)$, $0 \le u \le 1$. It is well known (cf. [2]) that when $\psi(x)$ is the standardized normal cdf, (2.3) is always greater than or equal to one (uniformly in $G_c$). Thus, as in the case of the one-way layout, the use of normal scores leads to asymptotically optimum estimators of contrasts. For reasons to be explained below, we shall term $\tilde\theta_N$ the raw or unadjusted estimator of $\theta$.
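For concreteness, the inversion in (2.1)-(2.2) can be carried out numerically by exploiting the monotonicity of $h_N(V_1-t,\ldots,V_N-t)$ in $t$. The Python sketch below does this by bisection for the normal score statistic; the quantile approximation to the scores, the use of the exact null expectation of $h_N$ for $\mu_N$, and the function names are our assumptions.

```python
import numpy as np
from scipy.stats import norm

def raw_estimate(v):
    """Raw estimator theta~_N of (2.1)-(2.2): invert the normal-score statistic
    h_N(V_1-t,...,V_N-t), which is nonincreasing in t, at its null expectation."""
    v = np.asarray(v, dtype=float)
    n = len(v)
    e_scores = norm.ppf(0.5 * (1.0 + np.arange(1, n + 1) / (n + 1.0)))  # approximate E_{N,alpha}
    mu_N = 0.5 * e_scores.mean()          # E h_N when V - theta is symmetric about zero

    def h(t):
        d = v - t
        return np.mean(e_scores * (d[np.argsort(np.abs(d))] > 0))

    def invert(strict):
        a, b = v.min() - 1.0, v.max() + 1.0
        for _ in range(60):               # bisection on the monotone step function h
            m = 0.5 * (a + b)
            above = h(m) > mu_N if strict else h(m) >= mu_N
            a, b = (m, b) if above else (a, m)
        return 0.5 * (a + b)

    theta_star = invert(True)             # approximates sup{t : h_N(V-t) > mu_N}
    theta_star2 = invert(False)           # approximates inf{t : h_N(V-t) < mu_N}
    return 0.5 * (theta_star + theta_star2)

rng = np.random.default_rng(1)
print(raw_estimate(0.5 + rng.standard_normal(40)))   # should be close to 0.5
```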
Now the estimators in (2.2) for different contrasts are incompatible in the sense that they do not satisfy the linear relations satisfied by the contrasts they estimate. So, as in ([1], [4]), it might be desirable to replace them by a mutually compatible system. With this end in view, we first obtain estimates of $\tau_i - \tau_j$ for $i\neq j=1,\ldots,c$. Let us denote
\[
X^*_{ij,\alpha} = X_{i\alpha} - X_{j\alpha}, \qquad \Delta_{ij} = \tau_i - \tau_j, \qquad e^*_{ij,\alpha} = e_{i\alpha} - e_{j\alpha}, \qquad \text{for } \alpha=1,\ldots,N. \tag{2.4}
\]
We denote the cdf of $e^*_{ij,\alpha}$ by $G(e)$ and note that, as the $e_{i\alpha}$'s are i.i.d.r.v., $G(e)$ is symmetric about zero for all $F(e)$. Thus, from (1.1) and (2.4), the cdf of $X^*_{ij,\alpha}$ becomes equal to $G(x-\Delta_{ij})$, for $i\neq j=1,\ldots,c$. Thus, on defining $\mathbf X^*_{ij} = (X^*_{ij,1},\ldots,X^*_{ij,N})$, we may use the same rank order statistic $h_N$, defined in (1.4), and proceeding as in (2.1) and (2.2) arrive at the estimate
\[
Y_{ij} = \tfrac12\bigl(\Delta^*_{N,ij} + \Delta^{**}_{N,ij}\bigr) \ \text{ of } \Delta_{ij}, \qquad i < j = 1,\ldots,c. \tag{2.5}
\]
Now, as in [1] and [4], we define
\[
Y_{i\cdot} = \frac{1}{c}\sum_{j=1}^{c} Y_{ij}, \qquad Y_{ii} = 0, \quad Y_{ji} = -Y_{ij}, \qquad \text{for } i=1,\ldots,c, \tag{2.6}
\]
and define the compatible or adjusted estimators as
\[
Z_{ij} = Y_{i\cdot} - Y_{j\cdot}, \qquad \text{for } i\neq j=1,\ldots,c. \tag{2.7}
\]
These estimators satisfy the same linear relations which are satisfied by the $\Delta_{ij}$'s. We can then write $\theta = \sum_{i=1}^{c}\ell_i\tau_i = \sum_{i=1}^{c}\sum_{j=1}^{c} d_{ij}\Delta_{ij}$, and as a compatible estimator of it we may consider
\[
\sum_{i=1}^{c}\sum_{j=1}^{c} d_{ij} Z_{ij} = \tilde\theta_{cN} \quad (\text{say}). \tag{2.8}
\]
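The construction (2.4)-(2.7) is easy to mechanize: each $Y_{ij}$ is obtained by applying the same inversion to the paired differences $X^*_{ij,\alpha}$, and the adjusted estimates follow by averaging. A hedged Python sketch follows; the helper names are ours, a single bisection is used in place of the sup/inf pair of (2.1), and the quantile approximation to the scores is an assumption.

```python
import numpy as np
from scipy.stats import norm

def _hl_estimate(w):
    """Hodges-Lehmann-type inversion of the normal-score statistic (cf. (2.1)-(2.2)),
    applied here to the paired differences X*_{ij,alpha} of (2.4)."""
    w = np.asarray(w, dtype=float)
    n = len(w)
    scores = norm.ppf(0.5 * (1.0 + np.arange(1, n + 1) / (n + 1.0)))
    mu_N = 0.5 * scores.mean()

    def h(t):
        d = w - t
        return np.mean(scores * (d[np.argsort(np.abs(d))] > 0))

    a, b = w.min() - 1.0, w.max() + 1.0
    for _ in range(60):                     # h(t) is nonincreasing in t
        m = 0.5 * (a + b)
        a, b = (m, b) if h(m) > mu_N else (a, m)
    return 0.5 * (a + b)

def compatible_estimates(x):
    """x has shape (c, N): treatment i, block alpha.  Returns the matrix of
    adjusted (compatible) estimates Z_ij = Y_i. - Y_j. of (2.6)-(2.7)."""
    c = x.shape[0]
    y = np.zeros((c, c))
    for i in range(c):
        for j in range(i + 1, c):
            y[i, j] = _hl_estimate(x[i] - x[j])      # Y_ij of (2.5)
            y[j, i] = -y[i, j]                       # Y_ji = -Y_ij, Y_ii = 0
    y_dot = y.mean(axis=1)                           # Y_i. of (2.6)
    return y_dot[:, None] - y_dot[None, :]           # Z_ij of (2.7)

rng = np.random.default_rng(2)
tau = np.array([0.0, 0.6, 1.2, -1.8])                # treatment effects, sum = 0
x = tau[:, None] + rng.normal(size=(1, 30)) + rng.normal(size=(4, 30))
print(np.round(compatible_estimates(x), 2))          # Z_ij should be near tau_i - tau_j
```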
It may be noted that similar studies have been made by Lehmann [4], using, in particular, the Wilcoxon signed rank statistic, and our proposed methods generalize his findings to a wider class of rank order statistics. To make the large sample comparison of the properties of $\hat\theta_N$, $\tilde\theta_N$ and $\tilde\theta_{cN}$, we shall require certain limit theorems, which we shall consider in the next section.
3. ASYMPTOTIC NORMALITY OF $\tilde\theta_N$ AND $\tilde\theta_{cN}$
As a basis for this study, we first consider the following.
THEOREM 3.1. The joint limiting distribution of the random variables $N^{1/2}(Y_{ij}-\Delta_{ij})$, $1 \le i < j \le c$, is a $\binom{c}{2}$-variate normal distribution with zero means and a covariance matrix $\Sigma = ((\sigma_{ij,rs}))$, where
\[
\sigma_{ij,rs} = \begin{cases}
A^2/B^2, & \text{if } i=r,\ j=s,\ i\neq j,\\
A_J(G)/B^2, & \text{if } i=r,\ j\neq s,\ i\neq j,\\
-A_J(G)/B^2, & \text{if } i=s,\ j\neq r,\ i\neq j,\\
0, & \text{otherwise},
\end{cases} \tag{3.1}
\]
where
\[
A^2 = \int_0^1 J^2(u)\,du, \qquad B = \int_{-\infty}^{\infty}\frac{d}{dx}J[G(x)]\,dG(x), \tag{3.2}
\]
and
\[
A_J(G) = \int_{-\infty}^{\infty}\!\!\int_{-\infty}^{\infty} J[G(x)]\,J[G(y)]\,dG^*(x,y), \tag{3.3}
\]
and where $J(u) = \psi^{-1}(u)$ $(0 \le u \le 1)$, $\psi(x)$ being symmetric about zero, and $G^*(x,y)$ is the joint cdf of $e^*_{ij,\alpha}$ and $e^*_{ik,\alpha}$ ($i\neq j\neq k$), whose marginal cdf's are $G(x)$ and $G(y)$, respectively.

The proof of the theorem rests on the following.
LEMMA 3.2. Suppose that the random variables $X^*_{ij,\alpha}$ have the distribution specified by the cdf $G(x + N^{-1/2}a_{ij})$, $1 \le i < j \le c$ (i.e., $G$ is fixed but $\Delta_{ij}$, defined by (2.4), is equal to $-N^{-1/2}a_{ij}$, where the $a_{ij}$'s are real and finite), and let $h_{N,ij}$ be defined as in (1.4) (with $X^*_{ij,\alpha}$, $\alpha=1,\ldots,N$) and $\psi(x)$ in (1.5) satisfying the assumptions of theorem 1 of Chernoff and Savage [2]. Then the random variables $[N^{1/2}(h_{N,ij}-\alpha_{ij}),\ 1 \le i < j \le c]$, where $\alpha_{ij}$ is defined in (5.4), have a $\binom{c}{2}$-variate limiting normal distribution with a null mean vector and a covariance matrix $\Gamma = ((\nu_{ij,rs}))$, where
\[
\nu_{ij,rs} = \begin{cases}
A^2/4, & \text{if } i=r,\ j=s,\ i\neq j,\\
\tfrac14 A_J(G), & \text{if } i=r,\ j\neq s,\ i\neq j,\ r\neq s,\\
-\tfrac14 A_J(G), & \text{if } i=s,\ j\neq r,\ i\neq j,\ r\neq s,\\
0, & \text{otherwise}.
\end{cases} \tag{3.4}
\]
The proof of this lemma is given in the appendix.
Proof of theorem 3.1. By (9.2) of [3],
\[
\lim_{N\to\infty} P\{N^{1/2}(Y_{ij}-\Delta_{ij}) \le a_{ij} \ \text{for all } 1 \le i < j \le c\}
= \lim_{N\to\infty} P_N\{N^{1/2}(h_{N,ij}-\mu_N) \le 0 \ \text{for all } 1 \le i < j \le c\}, \tag{3.5}
\]
where
\[
\mu_N = \int_0^\infty \psi^{*-1}[G(x)-G(-x)]\,dG(x), \tag{3.6}
\]
$\psi^*$ being defined in (1.5) and $P_N$ indicating the probability which is computed for the sequence of shifts $\Delta_{ij} = -N^{-1/2}a_{ij}$, $1 \le i < j \le c$. Furthermore, for these sequences, it is easy to show that as $N\to\infty$,
\[
N^{1/2}(\mu_N - \alpha_{ij}) \to \tfrac12 a_{ij}B, \qquad \text{for } 1 \le i < j \le c, \tag{3.7}
\]
where $B$ is defined in (3.2). Hence, it follows that
\[
\lim_{N\to\infty} P\{N^{1/2}(Y_{ij}-\Delta_{ij}) \le a_{ij} \ \text{for all } 1 \le i < j \le c\}
= \lim_{N\to\infty} P_N\{N^{1/2}(h_{N,ij}-\alpha_{ij}) \le \tfrac12 B a_{ij}, \ \text{for } 1 \le i < j \le c\}. \tag{3.8}
\]
Now, by lemma 3.2, the right hand side of (3.8) is equal to $Q(\tfrac12 a_{12}B,\ldots,\tfrac12 a_{c-1,c}B)$, where $Q(\cdot)$ is the $\binom{c}{2}$-variate multinormal cdf having a null mean vector and a covariance matrix $\Gamma$, defined by (3.4). The rest of the proof of the theorem is straightforward and is omitted. Hence, the theorem.
THEOREM 3.3. The joint distribution of $[N^{1/2}(Z_{ic}-\Delta_{ic}),\ i=1,\ldots,c-1]$ is asymptotically normal with a null mean vector and a covariance matrix $\tilde\Gamma = ((\gamma_{ij}))$, $i,j=1,\ldots,c-1$, where
\[
\gamma_{ij} = \begin{cases} 2\sigma_0^2(1-\rho), & \text{if } i=j=1,\ldots,c-1,\\ \sigma_0^2(1-\rho), & \text{if } i\neq j=1,\ldots,c-1,\end{cases} \tag{3.9}
\]
and where
\[
\sigma_0^2 = \frac{(c-1)\bigl[A^2 + (c-2)A_J(G)\bigr]}{c^2B^2}, \qquad \rho = -\frac{1}{c-1}, \tag{3.10}
\]
$A^2$, $B$ and $A_J(G)$ being defined in (3.2) and (3.3), respectively.

Proof. It follows from (2.5), (2.6) and (2.7) that the $Z_{ij}$'s are linear functions of the $Y_{ij}$'s. Further, from theorem 3.1, it follows that $\operatorname{Var}\{N^{1/2}(Y_{i\cdot}-\Delta_{i\cdot})\} = \sigma_0^2$ and $\operatorname{Cov}\{N^{1/2}(Y_{i\cdot}-\Delta_{i\cdot}),\ N^{1/2}(Y_{j\cdot}-\Delta_{j\cdot})\} = \rho\sigma_0^2$, where $\sigma_0^2$ and $\rho$ are defined by (3.10). The rest of the proof of the theorem follows directly from theorem 3.1. Hence the theorem.
It follows from theorem 3.1 and theorem 3.3 that the A.R.E. of $Z_{ij}$ with respect to $Y_{ij}$ (in the sense of the reciprocal of the ratio of their asymptotic variances) is equal to
\[
e_{Z_{ij},Y_{ij}} = \frac{cA^2}{2\bigl[A^2 + (c-2)A_J(G)\bigr]}, \tag{3.11}
\]
where $A^2$ and $A_J(G)$ are defined by (3.2) and (3.3). Incidentally, (3.11) is independent of $1 \le i < j \le c$. We have then the following theorem.

THEOREM 3.4. If $J(u) = \psi^{-1}(u)$, $0 \le u \le 1$, and $\psi(x)$ is symmetric about $x=0$, then under the conditions of theorem 1 of Chernoff and Savage [2],
\[
e_{Z_{ij},Y_{ij}} \ge 1, \qquad \text{for all } G \in \mathcal{G}, \tag{3.12}
\]
$\mathcal{G}$ being the class of all absolutely continuous cdf's which are symmetric about $x=0$.

Proof. It follows from (3.11) that if we can show that
\[
A^2 \ge 2A_J(G), \tag{3.13}
\]
(3.12) will follow immediately. To prove this, let us write
\[
Z^*_{i\alpha} = X_{i\alpha} - X_{i+1,\alpha} \ \text{ for } i=1,\ldots,c-1, \qquad Z^*_{c\alpha} = X_{c\alpha} - X_{1\alpha}, \tag{3.14}
\]
and let
\[
Y^*_{i\alpha} = J[G(Z^*_{i\alpha})], \quad i=1,\ldots,c, \qquad Y^*_{\cdot\alpha} = \sum_{i=1}^{c} Y^*_{i\alpha}. \tag{3.15}
\]
Then from (3.2), (3.3) and some simple algebraic manipulations it follows that
\[
E[Y^*_{\cdot\alpha}]^2 = cA^2\bigl[1 - 2A_J(G)/A^2\bigr] \ge 0. \tag{3.16}
\]
(3.13) readily follows from (3.16). Hence, the theorem.
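The inequality behind (3.13) and (3.16) can also be checked numerically for any particular error distribution. The sketch below estimates $A^2$ and $A_J(G)$ of (3.2)-(3.3) by Monte Carlo for normal scores and, as an assumed example, double-exponential errors; the sampling scheme, sample sizes and names are ours.

```python
import numpy as np
from scipy.stats import norm

# Monte Carlo illustration of A^2 >= 2 A_J(G), cf. (3.13)/(3.16), for normal
# scores J = Phi^{-1} and (assumed here) double-exponential errors e_{i,alpha}.
rng = np.random.default_rng(3)
M = 200_000

J = norm.ppf                                     # J(u) = Phi^{-1}(u)
A2 = np.mean(J(rng.uniform(size=M)) ** 2)        # A^2 = int_0^1 J^2(u) du, about 1

e = rng.laplace(size=(3, M))                     # independent errors e_i, e_j, e_k
d1, d2 = e[0] - e[1], e[0] - e[2]                # differences sharing e_i (cf. G*)

ref = np.sort(rng.laplace(size=M) - rng.laplace(size=M))    # sample from G
def G_hat(x):                                    # smoothed empirical cdf of G
    return (np.searchsorted(ref, x) + 0.5) / (len(ref) + 1.0)

A_J = np.mean(J(G_hat(d1)) * J(G_hat(d2)))       # A_J(G) of (3.3)
print(A2, A_J, A2 - 2.0 * A_J)                   # last value should be >= 0
```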
In general, if $\theta = \sum_{i=1}^{c}\ell_i\tau_i$ be any contrast, we can rewrite it as $\sum_{i=1}^{c}\sum_{j=1}^{c} d_{ij}\Delta_{ij}$ and consider the estimates
\[
\sum_{i=1}^{c}\sum_{j=1}^{c} d_{ij} Y_{ij} = \tilde\theta_{UN} \qquad \text{and} \qquad \sum_{i=1}^{c}\sum_{j=1}^{c} d_{ij} Z_{ij} = \tilde\theta_{cN}, \tag{3.17}
\]
and then, with the aid of theorems 3.1 and 3.3, we readily arrive at the following.

THEOREM 3.4. The A.R.E. of $\tilde\theta_{cN}$ with respect to $\tilde\theta_{UN}$ is given by (3.11) and (3.12), for all $(\ell_1,\ldots,\ell_c)$: $\sum_{i=1}^{c}\ell_i = 0$.
Let us next compare the compatible estimate $\tilde\theta_{cN}$ and the least squares estimate $\hat\theta_N$ in (1.3). Rewriting $\tilde\theta_{cN}$ as $\sum_{i=1}^{c}\ell_i Y_{i\cdot}$, we get from theorems 3.1 and 3.3 that the asymptotic variance of $N^{1/2}(\tilde\theta_{cN}-\theta)$ is equal to
\[
\Bigl(\sum_{i=1}^{c}\ell_i^2\Bigr)\,\sigma_0^2\,(1-\rho), \tag{3.18}
\]
where $\sigma_0^2$ and $\rho$ are defined in (3.10). Also, $\sigma_c^2$, defined in (2.3), is nothing but $(\sum_{i=1}^{c}\ell_i^2)\sigma^2$, where $\sigma^2$ is the variance of $e_{i\alpha}$ in (1.1). Thus,
\[
\mathrm{A.R.E.}_{\tilde\theta_{cN},\,\hat\theta_N} = \frac{\sigma^2}{\sigma_0^2(1-\rho)} = \Bigl(\frac{2\sigma^2B^2}{A^2}\Bigr)\, e_{Z_{ij},Y_{ij}}. \tag{3.19}
\]
Now, the variance of the cdf $G(x)$ of $X_{i\alpha}-X_{j\alpha}$ is nothing but $2\sigma^2$, and hence, the first factor on the right hand side of (3.19) is the usual efficiency-factor of the allied one-sample rank order test with respect to Student's $t$-test, while the second factor, by theorem 3.4, is always at least as large as 1. Hence,
\[
\mathrm{A.R.E.}_{\tilde\theta_{cN},\,\hat\theta_N} \ge 2B^2\sigma^2/A^2. \tag{3.20}
\]
In particular, for normal scores, (3.20) is greater than or equal to one for all $G$, and hence, the compatible normal score estimator $\tilde\theta_{cN}$ (i.e., with $J(u)$ as the inverse of the normal cdf) is asymptotically at least as efficient as the least squares estimator $\hat\theta_N$, for arbitrary $\ell_1,\ldots,\ell_c$ with $\sum_{i=1}^{c}\ell_i = 0$. Finally, from (2.3) and (3.19),
\[
\mathrm{A.R.E.}_{\tilde\theta_{cN},\,\tilde\theta_N}
= \frac{(2\sigma^2B^2/A^2)\, e_{Z_{ij},Y_{ij}}}
{\sigma_c^2\Bigl[\int_{-\infty}^{\infty}\frac{d}{dx}J[G_c(x)]\,dG_c(x)\Bigr]^2\big/A^2}, \tag{3.21}
\]
where $G_c$ is the cdf of $V_\alpha$, defined by (1.2), whose variance is $\sigma_c^2$. Now, $(2\sigma^2B^2/A^2)$ is the A.R.E. of the test based on $h_N$ in (1.4) with respect to the Student's $t$-test when the parent cdf is $G(x)$, while $\sigma_c^2[\int_{-\infty}^{\infty}\frac{d}{dx}J[G_c(x)]\,dG_c(x)]^2/A^2$ is the same A.R.E. when the parent cdf is $G_c(x)$. Thus, (3.21) will in general depend on both the cdf's $G(x)$ and $G_c(x)$, unless the two are identical. However, if the parent cdf $F(x)$ of $e_{i\alpha}$ is normal, then both $G(x)$ and $G_c(x)$ will also be normal, and hence (3.21) will be equal to
(3.22)
4. CONFIDENCE INTERVALS AND TESTS FOR CONTRASTS

We have so far considered the problem of estimation of contrasts using the rank order statistics of the type (1.4). The corresponding problems of testing and confidence intervals for contrasts are briefly sketched below.
Using $V_\alpha$, $\alpha=1,\ldots,N$, defined by (1.2), the problem of testing $H_0$: $\theta = \sum_{i=1}^{c}\ell_i\tau_i = \theta_0$ reduces to that of testing the symmetry of the distribution of $V_\alpha - \theta_0$ around zero, and hence, we may use the rank order test based on the statistic $h_N(V_1-\theta_0,\ldots,V_N-\theta_0)$, defined in (1.4). For large $N$, $N^{1/2}[h_N(V_1-\theta_0,\ldots,V_N-\theta_0) - \mu_N]$ has a normal distribution with zero mean and variance $\tfrac14 A^2$, where $A^2$ is defined by (3.2) and $\mu_N = \tfrac12\int_0^1 \psi^{*-1}(u)\,du$, $\psi^*$ being defined in (1.5). Thus, the test can be carried out using the standard normal tables. Again, by virtue of this result,
\[
\lim_{N\to\infty} P\bigl\{\,\bigl|N^{1/2}[h_N(V_1-\theta,\ldots,V_N-\theta) - \mu_N]\bigr| \le \tfrac12 A\,\tau_{\alpha/2}\,\bigr\} = 1-\alpha, \tag{4.1}
\]
where $\tau_{\alpha/2}$ is the upper $100(\alpha/2)\%$ point of the standardized normal distribution. Thus, if we let
\[
\theta_{L,N} = \inf\{\theta:\ h_N(V_1-\theta,\ldots,V_N-\theta) = \mu_N + \tfrac12 A\,\tau_{\alpha/2}\,N^{-1/2}\}, \qquad
\theta_{U,N} = \sup\{\theta:\ h_N(V_1-\theta,\ldots,V_N-\theta) = \mu_N - \tfrac12 A\,\tau_{\alpha/2}\,N^{-1/2}\}, \tag{4.2}
\]
then, proceeding on the same line as in Sen [7], we get that
\[
\lim_{N\to\infty} P\{\theta_{L,N} \le \theta \le \theta_{U,N}\} = 1-\alpha, \tag{4.3}
\]
which is our desired confidence limit for $\theta$.
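The limits (4.2) can again be computed by bisection, exactly as for the point estimate. A Python sketch follows, for normal scores (for which $A^2 = \int_0^1[\Phi^{-1}(u)]^2\,du = 1$); the score approximation and the function names are our assumptions.

```python
import numpy as np
from scipy.stats import norm

def normal_score_ci(v, alpha=0.05):
    """Confidence interval (theta_{L,N}, theta_{U,N}) of (4.2)-(4.3), obtained by
    inverting the normal-score statistic h_N by bisection; A^2 = 1 for normal scores."""
    v = np.asarray(v, dtype=float)
    n = len(v)
    scores = norm.ppf(0.5 * (1.0 + np.arange(1, n + 1) / (n + 1.0)))
    mu_N = 0.5 * scores.mean()                    # exact null expectation of h_N

    def h(t):                                     # h_N(V_1-t,...,V_N-t), nonincreasing in t
        d = v - t
        return np.mean(scores * (d[np.argsort(np.abs(d))] > 0))

    def invert(level):                            # value of theta at which h_N crosses `level`
        a, b = v.min() - 1.0, v.max() + 1.0
        for _ in range(60):
            m = 0.5 * (a + b)
            a, b = (m, b) if h(m) > level else (a, m)
        return 0.5 * (a + b)

    half_width = 0.5 * 1.0 * norm.ppf(1 - alpha / 2) / np.sqrt(n)   # (1/2) A tau_{alpha/2} N^{-1/2}
    return invert(mu_N + half_width), invert(mu_N - half_width)     # (theta_L, theta_U)

rng = np.random.default_rng(4)
print(normal_score_ci(0.5 + rng.standard_normal(60)))
```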
Now the above test and confidence interval are subject to the same criticisms as $\tilde\theta_N$ in section 2. We may overcome these to some extent by the following procedure. On defining $\tilde\theta_{cN}$ as in (2.8), we have $N^{1/2}(\tilde\theta_{cN}-\theta)$ asymptotically normally distributed with zero mean and variance $(\sum_{i=1}^{c}\ell_i^2)\sigma_0^2(1-\rho)$ (cf. theorem 3.3 and (3.18)),
where $\sigma_0^2$ and $\rho$ are defined in (3.10). Thus, the only problem remaining is to estimate $\sigma_0^2(1-\rho)$, i.e., $A_J(G)$ and $B$, defined by (3.2) and (3.3). Now, defining $X^*_{ij,\alpha}$ as in (2.4) and working with the statistic $h_N(X^*_{ij,1}-\Delta_{ij},\ldots,X^*_{ij,N}-\Delta_{ij})$ in (1.4), we get, on proceeding precisely on the same line as in (4.1) through (4.3), that
\[
P\{\Delta_{ij,L,N} \le \Delta_{ij} \le \Delta_{ij,U,N}\} = 1-\alpha, \tag{4.4}
\]
where $\Delta_{ij,L,N}$ and $\Delta_{ij,U,N}$ are defined as in (4.2), for $1 \le i < j \le c$.
Further, it follows from the results of Sen [7] that
\[
\hat B_{ij,N} = 2A\,\tau_{\alpha/2}\big/\bigl[N^{1/2}(\Delta_{ij,U,N} - \Delta_{ij,L,N})\bigr]
\]
is a consistent estimator of $B$, in (3.2), for all $i < j = 1,\ldots,c$. Consequently, we may consider the pooled estimate
\[
\hat B_N = \binom{c}{2}^{-1}\sum_{i<j=1}^{c}\hat B_{ij,N}, \tag{4.5}
\]
which will be a translation-invariant consistent estimator of $B$.
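In the same spirit, each $\hat B_{ij,N}$ of (4.4)-(4.5) can be read off the length of the inverted confidence interval for the pair $(i,j)$. A hedged Python sketch, again for normal scores and with helper names that are ours:

```python
import numpy as np
from scipy.stats import norm
from itertools import combinations

def estimate_B(x, alpha=0.05):
    """Pooled estimator of B, cf. (4.4)-(4.5): invert h_N on the differences
    X*_{ij,alpha} to get (Delta_L, Delta_U) for each pair, then
    B_hat_{ij,N} = 2 A tau_{alpha/2} / [N^{1/2} (Delta_U - Delta_L)], with A = 1."""
    c, n = x.shape
    scores = norm.ppf(0.5 * (1.0 + np.arange(1, n + 1) / (n + 1.0)))
    mu_N = 0.5 * scores.mean()
    tau = norm.ppf(1 - alpha / 2)

    def h(w, t):
        d = w - t
        return np.mean(scores * (d[np.argsort(np.abs(d))] > 0))

    def invert(w, level):
        a, b = w.min() - 1.0, w.max() + 1.0
        for _ in range(60):
            m = 0.5 * (a + b)
            a, b = (m, b) if h(w, m) > level else (a, m)
        return 0.5 * (a + b)

    half = 0.5 * tau / np.sqrt(n)
    b_hats = []
    for i, j in combinations(range(c), 2):
        w = x[i] - x[j]
        lower, upper = invert(w, mu_N + half), invert(w, mu_N - half)
        b_hats.append(2.0 * tau / (np.sqrt(n) * (upper - lower)))
    return np.mean(b_hats)                       # pooled over all pairs, cf. (4.5)

rng = np.random.default_rng(5)
x = rng.normal(size=(4, 80)) + rng.normal(size=(1, 80))
print(estimate_B(x))         # for N(0,1) errors, B = 1/sqrt(2), about 0.71
```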
Finally, the following is a convenient and consistent estimator of $A_J(G)$. Suppose that for any distinct $(i,j,k)$ we consider the $N$ paired variables $(X^*_{ij,\alpha}, X^*_{ik,\alpha})$, for $\alpha=1,\ldots,N$. Let then
\[
\hat G_N^{(i,j)}(x) = \frac{1}{N}\bigl[\text{Number of } (X^*_{ij,\alpha} - Y_{ij}) \le x\bigr], \tag{4.6}
\]
where $Y_{ij}$ is defined by (2.5), for $i < j=1,\ldots,c$, and let
\[
\hat G_N^{*(i;j,k)}(x,y) = \frac{1}{N}\bigl[\text{Number of } \{(X^*_{ij,\alpha} - Y_{ij}),\,(X^*_{ik,\alpha} - Y_{ik})\} \le \{x,y\}\bigr], \tag{4.7}
\]
for $i\neq j\neq k=1,\ldots,c$.
Let also $a_{N,\alpha} = J_N(\alpha/(N+1))$ be the expected value of the $\alpha$-th order statistic in a sample of size $N$ drawn from the distribution $\psi(x)$ (cf. (1.5)), where $\psi(x)$ satisfies the conditions of theorem 1 of [2]. We then consider the statistic
\[
L_{ij\cdot k,N} = \frac{1}{N}\sum_{\alpha=1}^{N} a_{N,R_{ij,\alpha}}\, a_{N,R_{ik,\alpha}}, \tag{4.8}
\]
where $R_{ij,\alpha}$ is the rank of $X^*_{ij,\alpha}$ among all $(X^*_{ij,1},\ldots,X^*_{ij,N})$, for $\alpha=1,\ldots,N$ and $i\neq j=1,\ldots,c$. Since the ranks remain invariant under change of origin, if we define $Z_{ij,\alpha} = X^*_{ij,\alpha} - \Delta_{ij}$ for $\alpha=1,\ldots,N$, the rank of $Z_{ij,\alpha}$ among all $Z_{ij,1},\ldots,Z_{ij,N}$ will also be $R_{ij,\alpha}$, for all $\alpha=1,\ldots,N$, $i\neq j=1,\ldots,c$. Let then
\[
H_N^{(i,j)}(x) = \frac{1}{N}\bigl[\text{Number of } Z_{ij,\alpha} \le x\bigr], \qquad
H_N^{*(i;j,k)}(x,y) = \frac{1}{N}\bigl[\text{Number of } (Z_{ij,\alpha}, Z_{ik,\alpha}) \le (x,y)\bigr], \tag{4.9}
\]
for $i\neq j\neq k=1,\ldots,c$. We can then write (4.8) equivalently as
\[
L_{ij\cdot k,N} = \int_{-\infty}^{\infty}\!\!\int_{-\infty}^{\infty}
J_N\Bigl[\tfrac{N}{N+1}H_N^{(i,j)}(x)\Bigr]\, J_N\Bigl[\tfrac{N}{N+1}H_N^{(i,k)}(y)\Bigr]\, dH_N^{*(i;j,k)}(x,y). \tag{4.10}
\]
Now, $Z_{ij,\alpha}$ has the cdf $G(x)$, for all $i\neq j=1,\ldots,c$, and $(Z_{ij,\alpha}, Z_{ik,\alpha})$ has jointly the bivariate cdf $G^*(x,y)$; it can be shown by routine analysis that under the conditions of theorem 1 of Chernoff and Savage [2], (4.10) converges in probability to $A_J(G)$, defined in (3.3). Thus, we may pool the estimators $\{L_{ij\cdot k,N},\ i\neq j\neq k=1,\ldots,c\}$ into a single measure (by unweighted arithmetic mean), and use the same as an estimate of $A_J(G)$. Once $B$ and $A_J(G)$ are estimated, we have no difficulty in providing a confidence interval for $\theta = \sum_{i=1}^{c}\ell_i\tau_i$ or in testing any hypothetical value of the same.
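The statistics $L_{ij\cdot k,N}$ of (4.8) and their pooled mean are straightforward to compute from the within-pair ranks. A sketch follows; the quantile approximation to $a_{N,\alpha}$ and the function names are our assumptions.

```python
import numpy as np
from scipy.stats import norm
from itertools import permutations

def estimate_AJ(x):
    """Pooled estimator of A_J(G) built from the statistics L_{ij.k,N} of (4.8):
    within-pair ranks of X*_{ij,alpha} are scored with a_{N,alpha} ~ Phi^{-1}(alpha/(N+1)),
    the products averaged over alpha, and the L's pooled over distinct triples (i,j,k)."""
    c, n = x.shape
    a_scores = norm.ppf(np.arange(1, n + 1) / (n + 1.0))    # approximate a_{N,alpha}

    def scored_ranks(w):                     # a_{N, R_{..,alpha}} for one pair of treatments
        ranks = np.argsort(np.argsort(w))    # 0-based ranks of X*_{ij,alpha}
        return a_scores[ranks]

    l_stats = []
    for i, j, k in permutations(range(c), 3):
        l_stats.append(np.mean(scored_ranks(x[i] - x[j]) * scored_ranks(x[i] - x[k])))
    return np.mean(l_stats)

rng = np.random.default_rng(6)
x = rng.normal(size=(4, 200))
print(estimate_AJ(x))        # for normal errors and normal scores, A_J(G) = 1/2
```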
We have so far considered the case of a single observation per cell. Along the same line as in Lehmann [4], the results derived in this paper can be readily extended to the case of several observations per cell. For intended brevity, the details are omitted.
5. APPENDIX

Proof of lemma 3.2. Let $G_{N,ij}(x)$ be the sample cdf of the $N$ observations $X^*_{ij,1},\ldots,X^*_{ij,N}$, of which the population cdf is $G_{ij}(x) = G(x-\Delta_{ij})$. Denote
\[
H_{ij}(x) = G_{ij}(x) - G_{ij}(-x), \tag{5.1}
\]
\[
H_{N,ij}(x) = G_{N,ij}(x) - G_{N,ij}(-x). \tag{5.2}
\]
Then, proceeding as in [6], we can write
\[
h_{N,ij} = \alpha_{ij} + B_{1N,ij} + B_{2N,ij} + \sum_{r=1}^{4} C_{rN,ij}, \tag{5.3}
\]
where
\[
\alpha_{ij} = \int_0^\infty J^*[H_{ij}(x)]\,dG_{ij}(x), \tag{5.4}
\]
\[
B_{1N,ij} = \int_0^\infty J^*[H_{ij}(x)]\,d\bigl(G_{N,ij}(x) - G_{ij}(x)\bigr), \tag{5.5}
\]
\[
B_{2N,ij} = \int_0^\infty \bigl[H_{N,ij}(x) - H_{ij}(x)\bigr]\,J^{*\prime}[H_{ij}(x)]\,dG_{ij}(x), \tag{5.6}
\]
and the $C$-terms are all $o_p(N^{-1/2})$ (cf. [5]).
The difference $N^{1/2}(h_{N,ij} - \alpha_{ij}) - N^{1/2}(B_{1N,ij} + B_{2N,ij})$ tends to zero, in probability, and so the vectors
\[
[N^{1/2}(h_{N,ij} - \alpha_{ij});\ 1 \le i < j \le c] \qquad \text{and} \qquad [N^{1/2}(B_{1N,ij} + B_{2N,ij});\ 1 \le i < j \le c]
\]
have the same limiting distributions. Thus, to prove this lemma, it suffices to show that for any real $d_{ij}$, $1 \le i < j \le c$, not all zero, $N^{1/2}\sum\sum_{i<j} d_{ij}(B_{1N,ij} + B_{2N,ij})$ has a normal distribution in the limit. Now, proceeding as in [6], we can express $N^{1/2}\sum\sum_{i<j} d_{ij}(B_{1N,ij} + B_{2N,ij})$ as the sum of independent and identically distributed random variables having finite first two moments. The proof follows.
To compute the variance-covariance terms of $B_{1N,ij} + B_{2N,ij}$, we note (by integrating $B_{1N,ij}$ by parts, adding $B_{2N,ij}$ to it, and using (5.1) and (5.2)) that
\[
B_{1N,ij} + B_{2N,ij} = \frac{1}{N}\sum_{\alpha=1}^{N} B_{ij}(X^*_{ij,\alpha}), \tag{5.7}
\]
where
\[
B_{ij}(x) = \int_0^\infty [G_{I,ij}(x) - G_{ij}(x)]\,J^{*\prime}[H_{ij}(x)]\,dG_{ij}(-x)
- \int_0^\infty [G_{I,ij}(-x) - G_{ij}(-x)]\,J^{*\prime}[H_{ij}(x)]\,dG_{ij}(x). \tag{5.8}
\]
Since $E\,B_{ij}(X^*_{ij,\alpha}) = 0$, therefore
\[
\operatorname{Var}(B_{1N,ij} + B_{2N,ij}) = \frac{1}{N}\operatorname{Var} B_{ij}(X^*_{ij,\alpha})
= \frac{2}{N}\iint_{0<x<y<\infty} G_{ij}(x)[1-G_{ij}(y)]\,J^{*\prime}[H_{ij}(x)]\,J^{*\prime}[H_{ij}(y)]\,dG_{ij}(-x)\,dG_{ij}(-y)
\]
\[
\qquad + \frac{2}{N}\iint_{0<x<y<\infty} G_{ij}(-y)[1-G_{ij}(-x)]\,J^{*\prime}[H_{ij}(x)]\,J^{*\prime}[H_{ij}(y)]\,dG_{ij}(x)\,dG_{ij}(y)
\]
\[
\qquad - \frac{2}{N}\int_0^\infty\!\!\int_0^\infty G_{ij}(-y)[1-G_{ij}(x)]\,J^{*\prime}[H_{ij}(x)]\,J^{*\prime}[H_{ij}(y)]\,dG_{ij}(-x)\,dG_{ij}(y). \tag{5.9}
\]
Similarly,
\[
\operatorname{Cov}(B_{1N,ij}+B_{2N,ij},\ B_{1N,rs}+B_{2N,rs})
= \frac{1}{N^2}\sum_{\alpha=1}^{N}\sum_{\beta=1}^{N} E\bigl[B_{ij}(X^*_{ij,\alpha})\,B_{rs}(X^*_{rs,\beta})\bigr]
= \frac{1}{N^2}\sum_{\alpha=1}^{N} E\bigl[B_{ij}(X^*_{ij,\alpha})\,B_{rs}(X^*_{rs,\alpha})\bigr]
= \frac{1}{N}\,E\bigl[B_{ij}(X^*_{ij,\alpha})\,B_{rs}(X^*_{rs,\alpha})\bigr]. \tag{5.10}
\]
The covariance matrix $\Gamma = ((\nu_{ij,rs}))$, defined in (3.4), (3.2) and (3.3), is obtained by taking limits of $N\operatorname{Var}(B_{1N,ij} + B_{2N,ij})$ and $N\operatorname{Cov}(B_{1N,ij} + B_{2N,ij},\ B_{1N,rs} + B_{2N,rs})$ as $N\to\infty$.
REFERENCES
[1] Bhuchongkul, Subha and Puri, Madan Lal, "On the Estimation of Contrasts in Linear Models," The Annals of Mathematical Statistics, vol. 36, 1965, pp. 198-202.

[2] Chernoff, Herman and Savage, I. Richard, "Asymptotic Normality and Efficiency of Certain Nonparametric Tests," The Annals of Mathematical Statistics, vol. 29, 1958, pp. 972-994.

[3] Hodges, J. L., Jr., and Lehmann, E. L., "Estimates of Location Based on Rank Tests," The Annals of Mathematical Statistics, vol. 34, 1963, pp. 598-611.

[4] Lehmann, E. L., "Asymptotically Nonparametric Inference in Some Linear Models with One Observation Per Cell," The Annals of Mathematical Statistics, vol. 35, 1964, pp. 726-734.

[5] Hollander, Myles, "An Asymptotically Distribution-Free Multiple Comparison Procedure-Treatments vs. Control," The Annals of Mathematical Statistics, vol. 37, 1966, pp. 735-738.
[6] Puri, Madan L., and Sen, Pranab K., "On the Asymptotic Normality of One Sample Chernoff-Savage Test Statistics," Journal of the American Statistical Association (submitted).

[7] Sen, Pranab Kumar, "On a Distribution-free Method of Estimating Asymptotic Efficiency of a Class of Nonparametric Tests," The Annals of Mathematical Statistics, vol. 37 (December issue).