SIMULTANEOUS CONFIDENCE BOUNDS ON A SET OF LINEAR
FUNCTIONS OF LOCATION PARAMETERS FOR DEPENDENT AND
INDEPENDENT NORMAL VARIATES

by

C. G. Khatri
University of North Carolina and Gujarat University

Institute of Statistics Mimeo Series No. 417
December 1964
This research was supported by the Mathematics Division of the Air Force
Office of Scientific Research, Contract No. AF-AFOSR-84-63.

DEPARTMENT OF STATISTICS
UNIVERSITY OF NORTH CAROLINA
Chapel Hill, N. C.
Simultaneous Confidence Bounds on a Set of Linear Functions
of Location Parameters for Dependent and Independent Normal Variates
By
C. G. Khatri
University of North Carolina and Gujarat University
1. Introduction and Summary. In practical situations, one is generally faced with multivariate problems in the form of testing hypotheses or obtaining a set of simultaneous confidence bounds on certain parameters of interest. We shall consider here the variates under study to be normally distributed. A good deal of work has been done on simultaneous confidence bounds on the location parameters of univariate and multivariate normal populations (see the references, not necessarily exhaustive). Our aim in this paper is to give shorter confidence bounds on a given set of linear functions of location parameters when this set is chosen in advance for study. For the univariate case, Dunn [6,8], using the Bonferroni inequality, obtained shorter confidence bounds when the number of linear functions is not too large. We may note that Nair [11], David [5], Dunn [6,7,8] and Siotani [22,24] have studied the closeness of the Bonferroni inequality while deriving the percentage points of certain statistics in univariate and multivariate normal cases. In this paper, we improve the Bonferroni inequality in all the situations considered by Siotani [22,23,24] and Dunn [6,7,8], and point out various uses of these results in obtaining simultaneous confidence bounds on linear functions of means (or location parameters) with confidence greater than or equal to $1-\alpha$, where $\alpha$ is the size of the test. Since our results are extensions of Dunn [6,8], Siotani [22,24] and Banerjee [2,3], their comments on the shortness of the confidence bounds apply to our cases too. We may note that we require the percentage points of the $t$, $\chi^2$ or $F$ distributions, and in some cases of the maximum studentized $t$-statistic [12] and the maximum Hotelling's $T^2$ [22,23,24], when their tables are available. In all other cases, we use the sharper inequalities developed here; for all practical purposes, they can be used when no better ones are available.
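For orientation (an illustration added here, not part of the original argument, and stated for the most favorable case of independent events), the gain over the Bonferroni inequality is of the following size. If $E_1,\dots,E_p$ are independent events with $\Pr[E_i]=1-\alpha_i$, then

\[ \Pr\Big[\bigcap_{i=1}^{p}E_i\Big]=\prod_{i=1}^{p}(1-\alpha_i)\ \ge\ 1-\sum_{i=1}^{p}\alpha_i ; \]

for $p=5$ and $\alpha_i=0.01$, the product gives $(0.99)^5\approx 0.95099$ against the Bonferroni value $1-0.05=0.95$. The inequalities of this paper assert that the product remains a lower bound under the dependence structures treated below.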
2. Some probability inequalities.

Lemma 1. Let $D$ be a convex set symmetric about the origin. Then

(1) \[ \int_D \exp\{-\tfrac12 (x - y_0 b)' \Sigma^{-1} (x - y_0 b)\}\, dx \]

is a monotonic decreasing function of $|y_0|$ if $\Sigma\colon p \times p$ is positive definite (p.d.), $x' = (x_1, \dots, x_p)$ and $b' = (b_1, \dots, b_p)$.

This follows from the following result proved by Anderson [1]: Let $D$ be a convex set symmetric about the origin and let $f(x)$ be a function of $x' = (x_1, \dots, x_p)$ such that (i) $f(x) \ge 0$, (ii) $f(x) = f(-x)$, (iii) $\{x \mid f(x) \ge u\}$ is a convex set for every $u$, and (iv) $\int_D f(x)\, dx < \infty$. Then

(2) \[ \int_D f(x + \gamma y_0 b)\, dx \ \ge\ \int_D f(x + y_0 b)\, dx \qquad\text{for } 0 \le \gamma \le 1. \]

For lemma 1, take $f(x) = \exp\{-\tfrac12 x' \Sigma^{-1} x\}$, and see that all the conditions for (2) are satisfied.
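A minimal univariate instance (added here for illustration; $p = 1$, $D = [-d, d]$, $\Sigma = \sigma^2$) makes the monotonicity visible:

\[ g(y_0)=\int_{-d}^{d}\exp\{-(x-y_0 b)^2/2\sigma^2\}\,dx = \sigma\sqrt{2\pi}\,\big\{\Phi\big((d-y_0 b)/\sigma\big)-\Phi\big((-d-y_0 b)/\sigma\big)\big\}, \]

which is symmetric in $y_0$ and decreases as $|y_0|$ grows, since the fixed interval moves away from the center of the normal density.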
Lemma 2. If $g(x)$ and $h(x)$ are functions of a real vector variable $x$ such that for any two points $x_1$ and $x_2$ either $g(x_1) \ge g(x_2)$ and $h(x_1) \ge h(x_2)$, or $g(x_1) \le g(x_2)$ and $h(x_1) \le h(x_2)$, then

(3) \[ E\{g(x)\,h(x)\} \ \ge\ E\{g(x)\}\, E\{h(x)\}, \]

where $E$ stands for the expectation with respect to $x$.

Proof. Let $x_1$ and $x_2$ be any two independent and identically distributed vector variates, each distributed as $x$. Then the condition of lemma 2 gives us $\{g(x_1) - g(x_2)\}\{h(x_1) - h(x_2)\} \ge 0$. Hence, taking expectations, we get

\[ E\, g(x_1) h(x_1) + E\, g(x_2) h(x_2) \ \ge\ E\, g(x_2) h(x_1) + E\, g(x_1) h(x_2), \]

and this proves lemma 2.
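Lemma 2 is a Chebyshev-type correlation inequality. As a quick special case (added for illustration), take $g(x) = x$ and $h(x) = x^3$ for a real variate $x$: the two functions rise and fall together at every pair of points, so the lemma gives

\[ E(x^4) \ \ge\ E(x)\, E(x^3), \]

and with $g = h = x$ it reduces to $E(x^2) \ge \{E(x)\}^2$, i.e., the nonnegativity of the variance.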
Corollary 1. Let $f = f(x)$ be a function of the random variables $x' = (x_1, \dots, x_p)$ and let $E$ stand for the expectation with respect to $x_1, \dots, x_p$. Then, if $s + t = r$,

(4) \[ E f^{2r} \ \ge\ (E f^{2s})(E f^{2t}), \]

and

(5) \[ E f^{r} \ \ge\ (E f^{s})(E f^{t}) \qquad\text{if } f \ge 0 \text{ for all } x. \]

Proof of (4) can be derived from lemma 2 by using $g(x) = f^{2s}(x)$ and $h(x) = f^{2t}(x)$ and using $E(f^2) \ge (E f)^2$. (5) is the consequence of (4) on renaming $f^2$ as $f$. This can also be derived from the result (15.4.6) given by Cramér [4, p. 176].
Corollary 2. Let $x_i\colon p \times 1$ ($i = 1, 2, \dots, r$) be independent and identically distributed vector variates, distributed independently of $y\colon q \times 1$. Then

(6) \[ \Pr[h(x_i, y) \le c_i;\ i = 1, 2, \dots, r] \ \ge\ \prod_{i=1}^{r} \Pr[h(x_i, y) \le c_i] \]

and

(7) \[ \Pr[h(x_i, y) \ge c_i;\ i = 1, 2, \dots, r] \ \ge\ \prod_{i=1}^{r} \Pr[h(x_i, y) \ge c_i]. \]

Proof. Note that when $y$ is fixed,

\[ \Pr[h(x_i, y) \le c_i;\ i = 1, 2, \dots, r \mid y] = \prod_{i=1}^{r} f(y, c_i), \]

where $f(y, c_i) = \Pr[h(x_i, y) \le c_i \mid y] \ge 0$ for all $y$; the functional form is the same for each $i$, the $x_i$ being independent and identically distributed vector variates. Hence the condition of lemma 2 is satisfied, i.e., if $f(y_1, c_i) \ge f(y_2, c_i)$ for two points $y_1 \ne y_2$, then $f(y_1, c_j) \ge f(y_2, c_j)$ for $i \ne j$. Hence, using lemma 2, we get

\[ E\Big[\prod_{i=1}^{r} f(y, c_i)\Big] \ \ge\ \prod_{i=1}^{r} E f(y, c_i), \]

and this proves (6); (7) follows in the same way.
Theorem I. Let $x' = (x_1, \dots, x_p)$ be distributed as multivariate normal with zero mean vector, and let $D_1 = D_1(x_1)$ and $D_2 = D_2(x_2, \dots, x_p)$ be convex sets symmetric about the origin. Then

\[ \Pr[x_1 \in D_1,\ (x_2, \dots, x_p) \in D_2] \ \ge\ \Pr[x_1 \in D_1]\ \Pr[(x_2, \dots, x_p) \in D_2]. \]

Proof. First of all, let us assume that $\Sigma = V(x)\colon p \times p$ is p.d., and let us partition it as

\[ \Sigma = \begin{pmatrix} \sigma_{11} & \sigma_{21}' \\ \sigma_{21} & \Sigma_{22} \end{pmatrix}. \]

Now, let us write

(8) \[ x_i = a_i Y_0 + b_i^{1/2} Y_i \qquad (i = 1, 2, \dots, p), \]

where $Y_0$, $Y_1$ and $(Y_2, \dots, Y_p)$ are independently distributed as normals with zero means, with $V(Y_0) = V(Y_1) = V(Y_j) = 1$ ($j = 2, \dots, p$) and $\mathrm{cov}(Y_j, Y_{j'}) = \delta_{jj'}$ for $j \ne j'$, $\delta_{jj} = 1$ ($j = 2, \dots, p$). Let $\Delta = (\delta_{jj'})\colon (p-1) \times (p-1)$. Then we can determine the $a_i$ and $b_i$ ($i = 1, 2, \dots, p$) from

(9) \[ a_i^2 + b_i = \sigma_{ii}\ (i = 1, 2, \dots, p), \qquad a_1 a_j = \sigma_{1j}, \qquad a_j a_{j'} + \delta_{jj'} (b_j b_{j'})^{1/2} = \sigma_{jj'} \]

for $j \ne j'$; $j, j' = 2, 3, \dots, p$. The solution of (9) is

(10) \[ a_j = \sigma_{1j}/a_1, \qquad b_j = (\sigma_{jj} a_1^2 - \sigma_{1j}^2)/a_1^2, \qquad \delta_{jj'} = \frac{\sigma_{jj'} a_1^2 - \sigma_{1j}\sigma_{1j'}}{\{(\sigma_{jj} a_1^2 - \sigma_{1j}^2)(\sigma_{j'j'} a_1^2 - \sigma_{1j'}^2)\}^{1/2}} \]

for $j, j' = 2, 3, \dots, p$ and for any $a_1$ such that $\sigma_{21}' \Sigma_{22}^{-1} \sigma_{21} < a_1^2 < \sigma_{11}$; the condition on $a_1$ ensures that $\Delta$ is p.d. Now, using (10), it is easy to see that

(11) \[ \Pr[x_1 \in D_1,\ (x_2, \dots, x_p) \in D_2] = E\{g(Y_0)\, h(Y_0)\}, \]

where

\[ g(y_0) = (2\pi b_1)^{-1/2} \int_{D_1} \exp\{-\tfrac12 (x_1 - a_1 y_0)^2/b_1\}\, dx_1 \]

and

\[ h(y_0) = \{(2\pi)^{p-1} |\Delta_1|\}^{-1/2} \int_{D_2} \exp\{-\tfrac12 (x - a y_0)' \Delta_1^{-1} (x - a y_0)\}\, dx, \]

in which $x' = (x_2, \dots, x_p)$, $a' = (a_2, \dots, a_p)$, $\Delta_1 = (\delta_{1,jj'})$ and $\delta_{1,jj'} = \delta_{jj'} (b_j b_{j'})^{1/2}$ for $j, j' = 2, 3, \dots, p$. Lemma 1 shows that $g(y_0)$ and $h(y_0)$ are monotonically decreasing functions of $|y_0|$, and so, using lemma 2 in (11), we get

\[ \Pr[x_1 \in D_1,\ (x_2, \dots, x_p) \in D_2] \ \ge\ E\{g(Y_0)\}\, E\{h(Y_0)\}. \]

Now, it is easy to verify that $E\{g(Y_0)\} = \Pr[x_1 \in D_1]$ and $E\{h(Y_0)\} = \Pr[(x_2, \dots, x_p) \in D_2]$. This proves the theorem when $\Sigma$ is nonsingular. When $\Sigma$ is singular, let $r$ be the rank of $\Sigma$ and, without loss of generality, let us assume that $\Sigma_{11}\colon r \times r$ is non-singular in the partition

\[ \Sigma = \begin{pmatrix} \Sigma_{11} & \Sigma_{12} \\ \Sigma_{12}' & \Sigma_{22} \end{pmatrix} \begin{matrix} r \\ p-r \end{matrix}, \qquad \Sigma_{22} = \Sigma_{12}' \Sigma_{11}^{-1} \Sigma_{12}. \]

We have the probability inequality

(15) \[ \Pr[w_1 \in D_1,\ (w_2, \dots, w_p) \in D_2] \ \ge\ \Pr[w_1 \in D_1]\ \Pr[(w_2, \dots, w_p) \in D_2], \]

where $w_1, w_2, \dots, w_p$ are normally distributed with zero means and the nonsingular covariance matrix $\Sigma + q I_p$ for very small $q > 0$. Since (15) is true for all positive values of $q$, taking the limit as $q \to 0^{+}$, we get the same inequality for the singular $\Sigma$. This proves theorem I completely.
Corollary 3. Let $x_i$ ($i = 1, 2, \dots, n$) be $n$ independent observations from the multivariate normal with zero means, and let $D_1(x_{11}, \dots, x_{1n}) = D_1$ and $D_2(x_{ij};\ i = 2, 3, \dots, p \text{ and } j = 1, 2, \dots, n) = D_2$ be sectionwise convex regions symmetric about the origin. Then, writing $D_1$ and $D_2$ also for the events that the observations fall in these regions,

\[ \Pr[D_1 \cap D_2] \ \ge\ \Pr[D_1]\ \Pr[D_2]. \]

Proof. Let $g(x_i)$ be the density function of $x_i$ ($i = 1, 2, \dots, n$), and let $g_1(i) = g_1(x_{1i})$ and $g_2(i) = g_2(x_{2i}, \dots, x_{pi})$ be the marginal density functions of $x_{1i}$ and of $(x_{2i}, \dots, x_{pi})$, $i = 1, 2, \dots, n$. Then

\[ \Pr[D_1 \cap D_2] = \int \cdots \int_{D_1 \cap D_2} \prod_{i=1}^{n} g(x_i)\, dx_1 \cdots dx_n . \]

Now, since $D_1$ is sectionwise convex and symmetric in $x_{1i}$ ($i = 1, 2, \dots, n$), it is convex and symmetric in $x_{11}$ when the others are fixed. Similarly using the argument for $D_2$, we use theorem I for the integration over $x_1$; now using the same argument for $x_2$, then $x_3$, $\dots$, we get finally

\[ \Pr[D_1 \cap D_2] \ \ge\ \Pr[D_1]\ \Pr[D_2]. \]

Corollary 4. If $x_1, \dots, x_p$ are distributed as multivariate normal with zero means, then

\[ \Pr[\,|x_i| \le c_i;\ i = 1, 2, \dots, p\,] \ \ge\ \prod_{i=1}^{p} \Pr[\,|x_i| \le c_i\,]. \]

This follows immediately from theorem I. This was conjectured by Dunn [6].
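As a numerical illustration (ours, not in the original): for $p = 2$ and $c_1 = c_2 = c$ with $\Pr[\,|x_i| \le c\,] = 0.975$ for each coordinate, corollary 4 guarantees

\[ \Pr[\,|x_1| \le c,\ |x_2| \le c\,] \ \ge\ (0.975)^2 = 0.950625 \ >\ 1 - 2(0.025) = 0.95, \]

whatever the correlation between $x_1$ and $x_2$; the last member is the Bonferroni bound.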
Corollary 5. If $x_j' = (x_{1j}, \dots, x_{pj})$, $j = 1, 2, \dots, n$, are $n$ independent observations from a multivariate normal with zero means and $\lambda_j \ge 0$ ($j = 1, 2, \dots, n$), then

\[ \Pr\Big[\sum_{j=1}^{n} \lambda_j x_{ij}^2 \le c_i;\ i = 1, 2, \dots, p\Big] \ \ge\ \prod_{i=1}^{p} \Pr\Big[\sum_{j=1}^{n} \lambda_j x_{ij}^2 \le c_i\Big]. \]

This follows from corollary 3 or from theorem I.
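One way to read this corollary (an added note): with $\lambda_j = 1$, the quantity $\sum_{j=1}^{n} x_{ij}^2/\sigma_{ii}$ is a $\chi_n^2$ variate for each $i$, so

\[ \Pr\Big[\sum_{j=1}^{n} x_{ij}^2 \le c_i;\ i = 1, \dots, p\Big] \ \ge\ \prod_{i=1}^{p} \Pr[\chi_n^2 \le c_i/\sigma_{ii}], \]

which is the device behind the one-sided simultaneous bounds on variances mentioned in Remark 1 of this section.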
Theorem II. Let $(x_1, \dots, x_r)$ be distributed as normal with zero means and $\mathrm{cov}(x_i, x_j) = \beta_{ij} \Sigma$, where $\Sigma\colon p \times p$ is an unknown p.d. matrix and the $\beta_{ii}$ ($i = 1, 2, \dots, r$) are known; and let $S\colon p \times p$ be distributed as Wishart $(p, n, \Sigma)$ and be independent of $(x_1, \dots, x_r)$. Then

(16) \[ \Pr[x_i' S^{-1} x_i/\beta_{ii} \le c_i;\ i = 1, 2, \dots, r] \ \ge\ \prod_{i=1}^{r} \Pr[x_i' S^{-1} x_i/\beta_{ii} \le c_i] \]

and

(17) \[ \Pr[x_i' S^{-1} x_i/\beta_{ii} \le c_i;\ i = 1, 2, \dots, r] \ \ge\ w \prod_{i=1}^{r} \Big[\{B(\tfrac12 p, \tfrac12(n+r-p))\}^{-1} \int_0^{c_i} y^{\frac12 p - 1} (1+y)^{-\frac12(n+r)}\, dy\Big], \]

where

\[ w = \prod_{i=1}^{p} \frac{\Gamma(\tfrac12(n+r-i+1))}{\Gamma(\tfrac12(n-i+1))} \Big\{\frac{\Gamma(\tfrac12(n+r-p))}{\Gamma(\tfrac12(n+r))}\Big\}^{r}. \]

Proof. When $p = 1$ and $\beta_{ij} = (\beta_{ii}\beta_{jj})^{1/2} a_i a_j$ for $-1 \le a_i \le 1$, (16) was proved by Dunnett and Sobel [9]. (16) is a sharper inequality than the Bonferroni inequality, whose closeness in some particular cases was considered by Siotani [22,23,24]. For the proof of (16) and (17), we may consider $\Sigma = I$, for the statistic under consideration is invariant under the transformations $\Sigma^{-1/2} S \Sigma^{-1/2} \to S$ and $\Sigma^{-1/2} x_i \to x_i$ ($i = 1, 2, \dots, r$). Now let us consider the conditional probability

\[ \Pr[x_i' S^{-1} x_i/\beta_{ii} \le c_i;\ i = 1, 2, \dots, r \mid S] = \Pr\Big[\sum_{j=1}^{p} \lambda_j y_{ij}^2 \big/ \beta_{ii} \le c_i;\ i = 1, 2, \dots, r \,\Big|\, S\Big], \]

where the $\lambda_j$ are the characteristic roots of $S^{-1}$, and $(y_{1j}, y_{2j}, \dots, y_{rj})'$ ($j = 1, 2, \dots, p$) are independently distributed as normal with zero means and covariance matrix $(\beta_{ii'})$. Hence, applying corollary 5, we get

(18) \[ \Pr[x_i' S^{-1} x_i/\beta_{ii} \le c_i;\ i = 1, 2, \dots, r] \ \ge\ E\Big[\prod_{i=1}^{r} \Pr\{z_i' S^{-1} z_i \le c_i \mid S\}\Big], \]

where $z_i\colon p \times 1$ ($i = 1, 2, \dots, r$) are independent normals with zero means and covariance matrix $I_p$. Now, the right hand side of (18) can be written in two ways as follows:

(19) \[ E\Big[\prod_{i=1}^{r} f(S, c_i)\Big], \qquad\text{where } f(S, c_i) = (2\pi)^{-\frac12 p} \int_{z' S^{-1} z \le c_i} \exp\{-\tfrac12 z'z\}\, dz \ \ge\ 0 \ \text{for all } S \text{ p.d.}, \]

or

(20) \[ E\Big[|S|^{\frac12 r} \prod_{i=1}^{r} g(S, c_i)\Big], \qquad\text{where } g(S, c_i) = (2\pi)^{-\frac12 p} \int_{z'z \le c_i} \exp\{-\tfrac12 z' S z\}\, dz \ \ge\ 0 \ \text{for all } S \text{ p.d.} \]

Using (19) or (20) in (18), we get

(21) \[ \Pr[x_i' S^{-1} x_i/\beta_{ii} \le c_i;\ i = 1, 2, \dots, r] \ \ge\ E\Big[\prod_{i=1}^{r} f(S, c_i)\Big] \quad\text{or}\quad \ge\ w_1\, E\Big[\prod_{i=1}^{r} g(S_1, c_i)\Big], \]

where $w_1 = 2^{\frac12 rp} \prod_{i=1}^{p} \Gamma(\tfrac12(n+r-i+1))/\Gamma(\tfrac12(n-i+1))$, $S_1$ is Wishart $(p, n+r, I)$ while $S$ is Wishart $(p, n, I)$. Now, using corollary 2 or lemma 2, we get the results as mentioned in (16) and (17) after some simplifications. We conjecture that (17) will be slightly better than (16), but it involves more computations.
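For use of (16), the marginal probabilities can be taken from $F$ tables. The following standard distributional fact (added here as a note) is consistent with the beta-type integrals above: for $x \sim N_p(0, \Sigma)$ independent of $S \sim$ Wishart $(p, n, \Sigma)$,

\[ \Pr[x' S^{-1} x \le c] = \{B(\tfrac12 p, \tfrac12(n-p+1))\}^{-1} \int_0^{c} y^{\frac12 p - 1} (1+y)^{-\frac12(n+1)}\, dy = \Pr\Big[F_{p,\,n-p+1} \le \frac{(n-p+1)\,c}{p}\Big], \]

so each factor on the right of (16) is an ordinary $F$ probability.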
Theorem III. If $(x_1, \dots, x_p)$ are distributed as multivariate normal with zero means, then

\[ \Pr[\,|x_i| \le c_i;\ i = 1, 2, \dots, p\,] \ \ge\ \prod_{i=1}^{p} \Pr[\,|x_i| \le c_i\,] \]

if the correlations $\rho_{ij} = a_i a_j$ for $i \ne j$; $i, j = 1, 2, \dots, p$ and $-1 \le a_i \le 1$.

Proof. Without loss of generality, we shall assume that $V(x_i) = 1$, $i = 1, 2, \dots, p$. Let $Y_0, Y_1, \dots, Y_p$ be independent normal variates with zero means and unit variances. Then we may write

\[ x_i = a_i Y_0 + (1 - a_i^2)^{1/2} Y_i \qquad\text{for } i = 1, 2, \dots, p \text{ and all } a_i^2 < 1. \]

Then it is easy to see that

(22) \[ \Pr[\,|x_i| \le c_i;\ i = 1, 2, \dots, p\,] = E\Big[\prod_{i=1}^{p} g_i(Y_0)\Big], \]

where

(23) \[ g_i(y_0) = \Pr[\,|a_i y_0 + (1 - a_i^2)^{1/2} Y_i| \le c_i\,]. \]

We note that $g_i(y_0) = g_i(-y_0)$, and when $a_i < 0$ we can change the sign of $x_i$ and get the same function $g_i(y_0)$; hence, we may consider $a_i \ge 0$. It is easy to see that $g_i(y_0)$, $i = 1, 2, \dots, p$, are monotonically decreasing functions of $|y_0|$. Then, using lemma 2 in (22), we get theorem III when $|a_i| < 1$. Now, when some $|a_i|$ are equal to one, we can argue in the same way as done for the singular case of theorem I.
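The correlation structure $\rho_{ij} = a_i a_j$ is exactly the one-factor representation used in the proof. As a particular case (noted here for illustration), taking $a_1 = \dots = a_p = a$ gives the equicorrelated matrix with $\rho_{ij} = a^2$ for $i \ne j$, realized by

\[ x_i = a\, Y_0 + (1 - a^2)^{1/2}\, Y_i, \qquad i = 1, \dots, p, \]

so theorem III covers in particular every exchangeable correlation $\rho \in [0, 1)$.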
Corollary 6. Let $(x_{1j}, \dots, x_{pj})'$, $j = 1, 2, \dots, n$, be independent observations from a normal distribution with zero means and correlations $\rho_{ii'} = a_i a_{i'}$ for $-1 \le a_i \le 1$, $i \ne i'$. Then, if $\lambda_j \ge 0$ ($j = 1, 2, \dots, n$), we have

\[ \Pr\Big[\sum_{j=1}^{n} \lambda_j x_{ij}^2 \le c_i;\ i = 1, 2, \dots, p\Big] \ \ge\ \prod_{i=1}^{p} \Pr\Big[\sum_{j=1}^{n} \lambda_j x_{ij}^2 \le c_i\Big]. \]

The proof is similar to that given in corollary 3.
Corollary 7. Let $x_j = (x_{1j}, \dots, x_{pj})'$, $j = 1, 2, \dots, n_1$, and $y_{j'} = (y_{1j'}, \dots, y_{pj'})'$, $j' = 1, 2, \dots, n_2$, be $n_1$ and $n_2$ independent observations on $x$ and $y$, respectively, which are independently distributed as multivariate normals with zero means. If the correlations are $\rho_{ii'} = a_i a_{i'}$ for $-1 \le a_i \le 1$, $i \ne i'$, then

\[ \Pr\Big[\sum_{j=1}^{n_1} x_{ij}^2 \le c_i \sum_{j'=1}^{n_2} y_{ij'}^2;\ i = 1, 2, \dots, p\Big] \ \ge\ \prod_{i=1}^{p} \Pr\Big[\sum_{j=1}^{n_1} x_{ij}^2 \le c_i \sum_{j'=1}^{n_2} y_{ij'}^2\Big]. \]

This follows with the help of corollary 5 and corollary 6.
The following theorem will not be applied to the construction of confidence bounds, but for completeness, and as an application of theorem III, it is given here.

Theorem IV. Let $(x_1, \dots, x_r)$ be normal variates with zero means and $\mathrm{cov}(x_i, x_j) = \beta_{ij} \Sigma$, where $\Sigma\colon p \times p$ is p.d. and $\beta_{ij}/(\beta_{ii}\beta_{jj})^{1/2} = a_i a_j$ for $-1 \le a_i \le 1$, and let $S$ be distributed as Wishart $(p, n, \Sigma)$ and be independent of $(x_1, \dots, x_r)$. Then

(24) \[ \Pr[x_i' S^{-1} x_i \ge c_i \beta_{ii};\ i = 1, 2, \dots, r] \ \ge\ \prod_{i=1}^{r} \Big[\{B(\tfrac12 p, \tfrac12(n+i-p))\}^{-1} \int_{c_i}^{\infty} y^{\frac12 p - 1} (1+y)^{-\frac12(n+i)}\, dy\Big], \]

and, for $r = 2$ and $p \ge 2$,

(25) \[ \Pr[x_i' S^{-1} x_i \ge c_i \beta_{ii};\ i = 1, 2] \ \ge\ \prod_{i=1}^{2} \Big[\{B(\tfrac12 p, \tfrac12(n+2-p))\}^{-1} \int_{c_i}^{\infty} y^{\frac12 p - 1} (1+y)^{-\frac12(n+2)}\, dy\Big]. \]

Proof. As remarked in the proof of theorem II, we may consider $\Sigma = I$, and, conditionally on $S$,

\[ \Pr[x_i' S^{-1} x_i \ge c_i \beta_{ii};\ i = 1, 2, \dots, r \mid S] = \Pr\Big[\sum_{j=1}^{p} \lambda_j y_{ij}^2 \ge c_i \beta_{ii};\ i = 1, 2, \dots, r \,\Big|\, S\Big], \]

where $\lambda_j \ge 0$ are the characteristic roots of $S^{-1}$, and $(y_{1j}, \dots, y_{rj})'$ ($j = 1, 2, \dots, p$) are independently distributed as normals with zero means and covariances $\beta_{ij} = a_i a_j (\beta_{ii}\beta_{jj})^{1/2}$ for $i \ne j$ and $-1 \le a_i \le 1$. Then, using corollary 6, we get

(26) \[ \Pr[x_i' S^{-1} x_i \ge c_i \beta_{ii};\ i = 1, 2, \dots, r] \ \ge\ E\Big[\prod_{i=1}^{r} \Pr\{u_i' S^{-1} u_i \ge c_i \beta_{ii} \mid S\}\Big], \]

where $E$ stands for the expectation over $S$, and $u_1, \dots, u_r$ are independent normals with zero means and covariance matrices $\beta_{ii} I$ ($i = 1, 2, \dots, r$), independent of $S$, which is $W(p, n, I)$. Denoting the right hand side of (26) by $A$ and writing it as a multiple integral with respect to the Wishart density of $S$, the variates $u_r, u_{r-1}, \dots, u_1$ can be integrated out one at a time; the $i$-th step splits off the beta-type factor with exponent $\frac12(n+i)$ appearing in (24). This procedure is continued and we get (24) after some simplifications. Now, for proving (25), we assume $p \ge 2$; after some simplifications, we can write $A$ in a form to which the following result applies:

\[ \int_{c}^{\infty} y^{m} (1+y)^{-n}\, dy \ >\ m(n-1)^{-1} \int_{c}^{\infty} y^{m-1} (1+y)^{-n+1}\, dy \qquad\text{if } n - 1 > m. \]

With the help of this result, it can be verified that the right hand side of (24) for $r = 2$ can be replaced by the symmetric form in (25), and this proves (25).
We give some passing remarks on the results of this section:

Remark 1. Corollaries 5, 6 and 7 can be used to obtain one-sided confidence bounds on variances and on ratios of variances.

Remark 2. For any non-singular correlation matrix in theorem III, it has been shown that $\Pr[\,|x_i| \le c_i;\ i = 1, 2, \dots, p\,]$ is locally minimum at $\rho_{ij} = 0$ for $i \ne j$; $i, j = 1, 2, \dots, p$. This gives some hope that theorem III may be true for any covariance matrix.

Remark 3. On account of the particular result in (25), we conjecture, in place of (24), the following result for any $r$:

\[ \Pr[x_i' S^{-1} x_i \ge c_i \beta_{ii};\ i = 1, 2, \dots, r] \ \ge\ \prod_{i=1}^{r} \Big[\{B(\tfrac12 p, \tfrac12(n+r-p))\}^{-1} \int_{c_i}^{\infty} y^{\frac12 p - 1} (1+y)^{-\frac12(n+r)}\, dy\Big]. \]
3. Means of k independent univariate normal distributions with unequal variances.

Let $x_{ij}$ ($j = 1, 2, \dots, n_i$; $i = 1, 2, \dots, k$) be independently distributed as normals with means $\mu_i$ and variances $\sigma_i^2$ ($i = 1, 2, \dots, k$). We obtain the simultaneous confidence bounds on $\nu_j = \sum_{i=1}^{k} a_{ji} \mu_i$ ($j = 1, 2, \dots, p$) in a form similar to that given by Banerjee [2,3] for $p = 1$. Let $\bar{x}_i = \sum_{j=1}^{n_i} x_{ij}/n_i$, $s_i^2 = \sum_{j=1}^{n_i} (x_{ij} - \bar{x}_i)^2$ and $Y_j = \sum_{i=1}^{k} a_{ji} \bar{x}_i$ ($j = 1, 2, \dots, p$). Then $s_i^2/\sigma_i^2$ ($i = 1, 2, \dots, k$) and $(Y_1, \dots, Y_p)$ are independently distributed, and their respective distributions are $\chi^2$ with $(n_i - 1)$ degrees of freedom (d.f.) ($i = 1, 2, \dots, k$), and multivariate normal with means $(\nu_1, \dots, \nu_p)$ and covariances $\sigma_{jj'} = \sum_{i=1}^{k} a_{ji} a_{j'i}\, \sigma_i^2/n_i$, $j, j' = 1, 2, \dots, p$. Let $E$ stand for the expectations over $s_1, s_2, \dots, s_k$. Then, by corollary 4, we get

(31) \[ \Pr\Big[|Y_j - \nu_j|^2 \le \sum_{i=1}^{k} a_{ji}^2 t_i^2 s_i^2/n_i(n_i - 1);\ j = 1, 2, \dots, p\Big] \ \ge\ E\Big[\prod_{j=1}^{p} \Pr\Big\{|Y_j - \nu_j|^2 \le \sum_{i=1}^{k} a_{ji}^2 t_i^2 s_i^2/n_i(n_i - 1) \ \text{for fixed } s_1, \dots, s_k\Big\}\Big], \]

where $t_1, \dots, t_k$ are some fixed quantities to be determined. Now, referring to Banerjee [2,3], we can write

(32) \[ \Pr\Big[|Y_j - \nu_j|^2 \le \sum_{i=1}^{k} a_{ji}^2 t_i^2 s_i^2/n_i(n_i - 1) \ \text{for fixed } s_1, \dots, s_k\Big] \ \ge\ \sum_{i=1}^{k} w_{i,j} \Pr\big[\,|X_i| \le t_i s_i/\sigma_i (n_i - 1)^{1/2} \ \text{for fixed } s_i\,\big], \]

where $X_1, \dots, X_k$ are independent normals with zero means and unit variances, and the weights (in Banerjee's form) are

\[ w_{i,j} = \frac{a_{ji}^2 \sigma_i^2/n_i}{\sum_{l=1}^{k} a_{jl}^2 \sigma_l^2/n_l}, \qquad \sum_{i=1}^{k} w_{i,j} = 1 \quad (j = 1, 2, \dots, p). \]

Moreover, after some adjustments, with the help of lemma 2 or corollary 3, it is easy to show that, for $1 \le i_j \le k$ ($j = 1, 2, \dots, p$; some of $i_1, \dots, i_p$ may be equal),

(33) \[ E\Big[\prod_{j=1}^{p} \Pr\big\{|X_{i_j}| \le t_{i_j} s_{i_j}/\sigma_{i_j}(n_{i_j} - 1)^{1/2} \ \text{for fixed } s_{i_j}\big\}\Big] \ \ge\ \prod_{j=1}^{p} \delta_{i_j}, \]

where, for $i = 1, 2, \dots, k$,

(34) \[ \delta_i = \{B(\tfrac12, \tfrac12(n_i - 1))\}^{-1} \int_{-c_i}^{c_i} (1 + t^2)^{-\frac12 n_i}\, dt \qquad\text{and}\qquad c_i = t_i/(n_i - 1)^{1/2}. \]

Using (32) and (33) in (31), it is easy to show that

(35) \[ \Pr\Big[|Y_j - \nu_j|^2 \le \sum_{i=1}^{k} a_{ji}^2 t_i^2 s_i^2/n_i(n_i - 1);\ j = 1, 2, \dots, p\Big] \ \ge\ \prod_{j=1}^{p} \Big[\sum_{i=1}^{k} w_{i,j}\, \delta_i\Big]. \]

Now, choosing $t_1, \dots, t_k$ from

(36) \[ \delta_i = (1 - \alpha)^{1/p} = \Pr[F_{1,\, n_i - 1} \le t_i^2], \]

where $F_{m,n}$ is an $F$-statistic with $(m, n)$ d.f., we get

\[ \Pr\Big[|Y_j - \nu_j|^2 \le \sum_{i=1}^{k} a_{ji}^2 t_i^2 s_i^2/n_i(n_i - 1);\ j = 1, 2, \dots, p\Big] \ \ge\ (1 - \alpha). \]

Hence, simultaneous confidence bounds on $\nu_j$ ($j = 1, 2, \dots, p$) with confidence coefficient greater than or equal to $(1 - \alpha)$ are

(37) \[ Y_j - \Big\{\sum_{i=1}^{k} a_{ji}^2 t_i^2 s_i^2/n_i(n_i - 1)\Big\}^{1/2} \ \le\ \nu_j \ \le\ Y_j + \Big\{\sum_{i=1}^{k} a_{ji}^2 t_i^2 s_i^2/n_i(n_i - 1)\Big\}^{1/2}, \]

where the $t_i$'s are to be determined from (36).
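For instance (a numerical note added here): with $p = 2$ linear functions and $\alpha = 0.05$, (36) requires

\[ \delta_i = (0.95)^{1/2} \approx 0.97468, \qquad \Pr[F_{1,\, n_i - 1} \le t_i^2] = 0.97468, \]

i.e., each $t_i$ is the two-sided $97.468\%$ Student-$t$ point with $n_i - 1$ d.f.; these points need be computed once for each distinct sample size $n_i$.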
4. Means of k independent univariate normal distributions with the same variance.

Let $x_{ij}$ ($j = 1, 2, \dots, n_i$; $i = 1, 2, \dots, k$) be independently distributed as normals with means $\mu_i$ ($i = 1, 2, \dots, k$) and variance $\sigma^2$, which is unknown. Let $\bar{x}_i = \sum_{j=1}^{n_i} x_{ij}/n_i$, $s^2 = \sum_{i=1}^{k} \sum_{j=1}^{n_i} (x_{ij} - \bar{x}_i)^2$ and $Y_j = \sum_{i=1}^{k} a_{ji} \bar{x}_i$ ($j = 1, 2, \dots, p$). Then $(Y_1, \dots, Y_p)$ is distributed as normal with means $\nu_j = \sum_{i=1}^{k} a_{ji} \mu_i$ ($j = 1, 2, \dots, p$), and is distributed independently of $s^2/\sigma^2$, which is distributed as $\chi^2$ with $\sum_{i=1}^{k} n_i - k$ or $(n - k)$ d.f. Now, with the help of corollary 4, we have

(39) \[ \Pr\Big[|Y_j - \nu_j| \le c_j s \Big\{\sum_{i=1}^{k} (a_{ji}^2/n_i)\Big\}^{1/2};\ j = 1, 2, \dots, p\Big] \ \ge\ (1 - \alpha), \]

where the $c_j$ are determined from

(40) \[ (1 - \alpha) = \frac{\Gamma(\tfrac12(n - k + p))}{\Gamma(\tfrac12(n - k))\, \pi^{\frac12 p}} \int \cdots \int_{D} \Big(1 + \sum_{j=1}^{p} t_j^2\Big)^{-\frac12(n - k + p)}\, dt_1 \cdots dt_p, \]

$D$ being equal to $\{-c_j \le t_j \le c_j,\ j = 1, 2, \dots, p\}$. When $c_1 = c_2 = \dots = c_p$ and $c_j (n-k)^{1/2} = c$ (say), $j = 1, 2, \dots, p$, then tables for $c$ for different values of $\alpha$ are given by Pillai and Ramachandran [12] and Dunn [7]. When the tables are not available, we may use the inequality given in theorem II for the special case $p = 1$. Hence (39) gives us simultaneous confidence bounds on $\nu_j$ ($j = 1, 2, \dots, p$) with confidence coefficient greater than or equal to $(1 - \alpha)$ as

(41) \[ Y_j - c_j s \Big\{\sum_{i=1}^{k} (a_{ji}^2/n_i)\Big\}^{1/2} \ \le\ \nu_j \ \le\ Y_j + c_j s \Big\{\sum_{i=1}^{k} (a_{ji}^2/n_i)\Big\}^{1/2} \]

for $j = 1, 2, \dots, p$, where $c_1, c_2, \dots, c_p$ are to be determined from (40) along with the remarks just below (40).
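As a check on (40) (added note): for $p = 1$ the region $D$ is the interval $[-c_1, c_1]$, and (40) reduces to the familiar Student statement

\[ 1 - \alpha = \frac{\Gamma(\tfrac12(n - k + 1))}{\Gamma(\tfrac12(n - k))\, \pi^{1/2}} \int_{-c_1}^{c_1} (1 + t^2)^{-\frac12(n - k + 1)}\, dt = \Pr[\,|t_{n-k}| \le c_1 (n - k)^{1/2}\,], \]

so that $c_1 (n-k)^{1/2}$ is read directly from central $t$ tables with $n - k$ d.f.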
5. The MANOVA model for growth curve problems.

The MANOVA model as defined and studied by Potthoff and Roy [13] and Khatri [10] is as follows: Let $Y\colon p \times n$ be a random matrix such that

\[ E(Y) = B \xi A, \]

and the columns of $Y$ are independent normals with common covariance matrix $\Sigma\colon p \times p$; $\xi\colon q \times m$ is an unknown matrix of location parameters. The matrices $B\colon p \times q$ and $A\colon m \times n$ are assumed to be known. Since we are only concerned with the estimable linear functions of $\xi$, we shall assume, without loss of generality, the ranks of $A$ and $B$ to be $m$ and $q$ respectively (see Khatri [10]). Hence $(B'B)^{-1}$ and $(AA')^{-1}$ exist. We derive here simultaneous confidence bounds for the following two types of problems:

Problem 1: Write $B\xi = (\theta_1, \dots, \theta_m)\colon p \times m$ and let $\eta_j = \sum_{i=1}^{m} e_{ji} \theta_i$, $j = 1, 2, \dots, k$, where the matrix $E = (e_{ji})\colon k \times m$ is known. Here we are interested in deriving simultaneous confidence bounds on $a' \eta_j$ ($j = 1, 2, \dots, k$) for all non-null vectors $a\colon p \times 1$. Here, we assume that $\Sigma$ is unknown.

Problem 2: Let $\xi' = (\xi_1, \dots, \xi_q)$ and $\varphi_j = \sum_{i=1}^{q} f_{ji} \xi_i$ ($j = 1, 2, \dots, \ell$), where the matrix $F = (f_{ji})\colon \ell \times q$ is known. Here, we are interested in deriving simultaneous confidence bounds on $b' \varphi_j$ ($j = 1, 2, \dots, \ell$) for all non-null vectors $b\colon m \times 1$, when (1) $\nu$ is unknown and (2) the $\nu_{jj}$ ($j = 1, 2, \dots, \ell$) are known, where $\nu = (\nu_{jj'}) = F (B'B)^{-1} B' \Sigma B (B'B)^{-1} F'$. This problem is solved under the condition $\nu_{jj'} = (\nu_{jj} \nu_{j'j'})^{1/2} a_j a_{j'}$ for $-1 \le a_j \le 1$, $j \ne j' = 1, 2, \dots, \ell$.

Solution of Problem 1: Let

\[ S = Y\{I_n - A'(AA')^{-1}A\}Y' \qquad\text{and}\qquad X = (\hat{Y}_1, \dots, \hat{Y}_k) = Y A'(AA')^{-1} E'. \]

Then it is easy to show that $S$ and $X$ are independently distributed, and their respective distributions are Wishart $(p, n-m, \Sigma)$ and normal with means $(\eta_1, \dots, \eta_k) = B\xi E'$ and $\mathrm{cov}(\hat{Y}_j, \hat{Y}_{j'}) = \beta_{jj'} \Sigma$, where $(\beta_{jj'}) = E (AA')^{-1} E'$. Then, by theorem II, we have

(42) \[ \Pr[(\hat{Y}_j - \eta_j)' S^{-1} (\hat{Y}_j - \eta_j)/\beta_{jj} \le c_j;\ j = 1, 2, \dots, k] \ \ge\ 1 - \alpha, \]

where the $c_j$ ($j = 1, 2, \dots, k$) are to be determined from

(43) \[ (1 - \alpha) = \prod_{j=1}^{k} \Big[\{B(\tfrac12 p, \tfrac12(n - m - p + 1))\}^{-1} \int_0^{c_j} y^{\frac12 p - 1} (1+y)^{-\frac12(n - m + 1)}\, dy\Big] \]

(cf. equation (16)), or from the sharper form corresponding to (17),

(44) \[ (1 - \alpha) = w \prod_{j=1}^{k} \Big[\{B(\tfrac12 p, \tfrac12(n - m + k - p))\}^{-1} \int_0^{c_j} y^{\frac12 p - 1} (1+y)^{-\frac12(n - m + k)}\, dy\Big], \]

with $w$ as in (17), $n$ being replaced by $n - m$ and $r$ by $k$. Hence (42) gives us the simultaneous confidence bounds on $a' \eta_j$ ($j = 1, 2, \dots, k$) with confidence greater than or equal to $(1 - \alpha)$ as

(45) \[ a' \hat{Y}_j - \{\beta_{jj}\, c_j (a' S a)\}^{1/2} \ \le\ a' \eta_j \ \le\ a' \hat{Y}_j + \{\beta_{jj}\, c_j (a' S a)\}^{1/2} \]

for all non-null vectors $a\colon p \times 1$, the $c_j$ ($j = 1, 2, \dots, k$) being determined from (43) or (44). One way to find $c_1, c_2, \dots, c_k$ is to take $c_1 = c_2 = \dots = c_k = c$ (say).

Solution of Problem 2: Suppose first that $\nu$ is unknown. Let

\[ Z = (\hat{\varphi}_1, \dots, \hat{\varphi}_\ell)' = F (B'B)^{-1} B' Y A'(AA')^{-1} \qquad\text{and}\qquad T = F (B'B)^{-1} B' Y C, \]

where $C\colon n \times (n-m)$ satisfies $C'C = I_{n-m}$ and $AC = 0$, so that $T\colon \ell \times (n-m)$. Then it is easy to show that $Z$ and $T$ are independently distributed, and their respective distributions are normals with means $(\varphi_1, \dots, \varphi_\ell)$ and $0$, with $\mathrm{cov}(\hat{\varphi}_j, \hat{\varphi}_{j'}) = \nu_{jj'} (AA')^{-1}$ and $\mathrm{cov}(t_{ji}, t_{j'i'}) = \nu_{jj'}$ if $i = i'$, and zero otherwise ($j, j' = 1, 2, \dots, \ell$; $i, i' = 1, 2, \dots, n-m$). Write $s_{1,jj} = \sum_{i=1}^{n-m} t_{ji}^2$. Then, using corollary 7, we have

(46) \[ \Pr[(\hat{\varphi}_j - \varphi_j)'(AA')(\hat{\varphi}_j - \varphi_j) \le c_j s_{1,jj};\ j = 1, 2, \dots, \ell] \ \ge\ \prod_{j=1}^{\ell} \Pr[(\hat{\varphi}_j - \varphi_j)'(AA')(\hat{\varphi}_j - \varphi_j) \le c_j s_{1,jj}], \]

provided $\nu_{jj'} = (\nu_{jj}\nu_{j'j'})^{1/2} a_j a_{j'}$ with $-1 \le a_j \le 1$. Hence (46) gives us simultaneous confidence bounds on $b' \varphi_j$ ($j = 1, 2, \dots, \ell$) with confidence coefficient greater than or equal to $(1 - \alpha)$ as

(47) \[ b' \hat{\varphi}_j - \{c_j s_{1,jj}\, b'(AA')^{-1} b\}^{1/2} \ \le\ b' \varphi_j \ \le\ b' \hat{\varphi}_j + \{c_j s_{1,jj}\, b'(AA')^{-1} b\}^{1/2} \]

for all non-null vectors $b\colon m \times 1$, where $c_1, \dots, c_\ell$ are to be determined from

(48) \[ 1 - \alpha = \prod_{j=1}^{\ell} \Big[\{B(\tfrac12 m, \tfrac12(n - m))\}^{-1} \int_0^{c_j} y^{\frac12 m - 1} (1+y)^{-\frac12 n}\, dy\Big]. \]

One way to find $c_1, \dots, c_\ell$ is to take $c_1 = c_2 = \dots = c_\ell = c$ (say).

Now, suppose that the $\nu_{jj}$ ($j = 1, 2, \dots, \ell$) are known. Then, using corollary 5, we get

(49) \[ \Pr[(\hat{\varphi}_j - \varphi_j)'(AA')(\hat{\varphi}_j - \varphi_j) \le c_j \nu_{jj};\ j = 1, 2, \dots, \ell] \ \ge\ \prod_{j=1}^{\ell} \Pr[(\hat{\varphi}_j - \varphi_j)'(AA')(\hat{\varphi}_j - \varphi_j) \le c_j \nu_{jj}]. \]

Hence (49) gives us simultaneous confidence bounds on $b' \varphi_j$ ($j = 1, 2, \dots, \ell$) with confidence coefficient greater than or equal to $(1 - \alpha)$ as

(50) \[ b' \hat{\varphi}_j - \{c_j \nu_{jj}\, b'(AA')^{-1} b\}^{1/2} \ \le\ b' \varphi_j \ \le\ b' \hat{\varphi}_j + \{c_j \nu_{jj}\, b'(AA')^{-1} b\}^{1/2} \]

for all non-null vectors $b\colon m \times 1$, where $c_1, \dots, c_\ell$ are to be determined from

(51) \[ (1 - \alpha) = \prod_{j=1}^{\ell} \Big[\{2^{\frac12 m}\, \Gamma(\tfrac12 m)\}^{-1} \int_0^{c_j} y^{\frac12 m - 1} \exp(-\tfrac12 y)\, dy\Big]. \]

One way to find $c_1, \dots, c_\ell$ is to take $c_1 = c_2 = \dots = c_\ell = c$ (say).
6. Some special problems.

Let $x_1, \dots, x_n$ be independent observations on $x$, which is distributed as multivariate normal with mean $\mu' = (\mu_1, \dots, \mu_p)$ and covariance matrix $\Sigma\colon p \times p$. Here let us first suppose that $\sigma_{ii} = \sigma^2$ ($i = 1, 2, \dots, p$) is unknown. Let $\bar{x}_i = \sum_{j=1}^{n} x_{ij}/n$ and $s^2 = \sum_{i=1}^{p} s_{ii}/p$, with $s_{ii} = \sum_{j=1}^{n} (x_{ij} - \bar{x}_i)^2$. Then $(\bar{x}_1, \dots, \bar{x}_p)$ is distributed as normal with mean vector $(\mu_1, \dots, \mu_p)$ and is distributed independently of $s^2$. Using arguments similar to those of Dunn [6, pp. 1102-3] for the dependence among $s_{11}, \dots, s_{pp}$, and using corollary 4 for fixed $s_{11}, \dots, s_{pp}$ and then integrating over them, we get

(54) \[ \Pr[\,|\bar{x}_i - \mu_i| \le c_i s/\sqrt{n}\,;\ i = 1, 2, \dots, p\,] \ \ge\ (1 - \alpha), \]

where

(55) \[ (1 - \alpha) = \frac{\Gamma(\tfrac12(p(n-1) + p))}{\Gamma(\tfrac12 p(n-1))\, \pi^{\frac12 p}} \int \cdots \int_{D} \Big(1 + \sum_{j=1}^{p} t_j^2\Big)^{-\frac12(p(n-1)+p)}\, dt_1 \cdots dt_p, \]

$D$ being equal to $\{-c_j \le t_j \le c_j;\ j = 1, 2, \dots, p\}$. When $c_i/\sqrt{n-1} = c$ for $i = 1, 2, \dots, p$, then tables of $c$ for different values of $\alpha$ are available (see references [7] and [12]). When the tables are insufficient, we may use the inequality derived in theorem II. From (54), we get simultaneous confidence bounds on $\mu_i$ ($i = 1, 2, \dots, p$) with confidence coefficient at least $(1 - \alpha)$ as

(56) \[ \bar{x}_i - (c_i s/\sqrt{n}) \ \le\ \mu_i \ \le\ \bar{x}_i + (c_i s/\sqrt{n}), \qquad i = 1, 2, \dots, p, \]

where $c_1, \dots, c_p$ are to be determined from (55).

We can improve the result (56) if we know $(\mathrm{tr}_q R)^{1/q}$, where $R$ is the correlation matrix of $x_1, \dots, x_p$, $q$ is the rank of $R$, and $\mathrm{tr}_q R$ means the sum of the principal minors of order $q$ in $R$. This is done with the help of the following result (5.15, p. 566, with $k = 0$) of Ruben [20]: Let $y_1, y_2, \dots, y_q$ be independent normal variates with zero means and unit variances, and let $\lambda_i > 0$ ($i = 1, 2, \dots, q$). Then

(57) \[ \Pr\Big[\sum_{i=1}^{q} \lambda_i y_i^2 \ \le\ c \Big(\prod_{i=1}^{q} \lambda_i\Big)^{1/q}\Big] \ \le\ \Pr\Big[\sum_{i=1}^{q} y_i^2 \le c\Big]. \]

If $\lambda_1, \lambda_2, \dots, \lambda_q$ are the nonzero characteristic roots of $R$, then $\mathrm{tr}_q R = \prod_{i=1}^{q} \lambda_i$, and (57) is equivalent to

(58) \[ \Pr\Big[\sum_{i=1}^{p} z_i^2 \ \le\ c\, (\mathrm{tr}_q R)^{1/q}\Big] \ \le\ \Pr\Big[\sum_{i=1}^{q} y_i^2 \le c\Big], \qquad z \sim N(0, R). \]

Hence, it is easy to see that

(59) \[ \Pr\Big[s^2 = \sum_{i=1}^{p} s_{ii}/p \ \le\ c\, (\mathrm{tr}_q R)^{1/q} \sigma^2\Big] \ \le\ \Pr\Big[\sum_{j=1}^{n-1} \sum_{i=1}^{q} y_{ij}^2 \le cp\Big], \]

where the $y_{ij}$ are independent normals with zero means and unit variances. Now, when $\bar{x}_1, \dots, \bar{x}_p$ are fixed, then from (59),

(60) \[ \Pr\Big[s^2 \ge \max_i \{(\bar{x}_i - \mu_i)^2 n\, (\mathrm{tr}_q R)^{1/q}/c_i^2\} \,\Big|\, \bar{x}_1, \dots, \bar{x}_p\Big] \ \ge\ \Pr\Big[\sum_{j=1}^{n-1} \sum_{i=1}^{q} y_{ij}^2 \ \ge\ p \max_i \{(\bar{x}_i - \mu_i)^2 n/c_i^2\}\big/\sigma^2 \,\Big|\, \bar{x}_1, \dots, \bar{x}_p\Big]. \]

Now (60) is equivalent to

(61) \[ \Pr\Big[\,|\bar{x}_i - \mu_i| \le \frac{c_i s}{\sqrt{n}\,(\mathrm{tr}_q R)^{1/2q}}\,;\ i = 1, 2, \dots, p\Big] \ \ge\ \Pr\Big[\,|\bar{x}_i - \mu_i| \le \frac{c_i X \sigma}{\sqrt{np}}\,;\ i = 1, 2, \dots, p\Big], \]

where $X^2 = \sum_{j=1}^{n-1} \sum_{i=1}^{q} y_{ij}^2$ is a $\chi^2$ variate with $(n-1)q$ d.f., distributed independently of $\bar{x}_1, \dots, \bar{x}_p$. Now, using corollary 4 in (61) for fixed $X$ and then integrating over $X$, we get

(62) \[ \Pr\Big[\,|\bar{x}_i - \mu_i| \le \frac{c_i s}{\sqrt{n}\,(\mathrm{tr}_q R)^{1/2q}}\,;\ i = 1, 2, \dots, p\Big] \ \ge\ (1 - \alpha), \]

where

(63) \[ (1 - \alpha) = \frac{\Gamma(\tfrac12(q(n-1) + p))}{\Gamma(\tfrac12 q(n-1))\, \pi^{\frac12 p}} \int \cdots \int_{D} \Big(1 + \sum_{i=1}^{p} t_i^2\Big)^{-\frac12(q(n-1)+p)}\, dt_1 \cdots dt_p, \]

$D$ being equal to $\{-c_i/\sqrt{p} \le t_i \le c_i/\sqrt{p};\ i = 1, 2, \dots, p\}$. When $c_i/\sqrt{p} = c/\sqrt{q(n-1)}$ ($i = 1, 2, \dots, p$), then tables are available; compare (63) with (55). Hence (62) gives us simultaneous confidence bounds on $\mu_i$ ($i = 1, 2, \dots, p$) with confidence coefficient greater than or equal to $(1 - \alpha)$ as

(64) \[ \bar{x}_i - \frac{c_i s}{\sqrt{n}\,(\mathrm{tr}_q R)^{1/2q}} \ \le\ \mu_i \ \le\ \bar{x}_i + \frac{c_i s}{\sqrt{n}\,(\mathrm{tr}_q R)^{1/2q}}, \qquad i = 1, 2, \dots, p, \]

where $c_1, \dots, c_p$ are given by (63).
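A quick check of (57) (added note): when the nonzero roots are all equal, $\lambda_1 = \dots = \lambda_q = \lambda$, the geometric mean is $\lambda$ itself and

\[ \Pr\Big[\sum_{i=1}^{q} \lambda y_i^2 \le c\, (\lambda^q)^{1/q}\Big] = \Pr[\lambda \chi_q^2 \le c\lambda] = \Pr[\chi_q^2 \le c], \]

so (57) holds with equality; the inequality, and hence the resulting bounds, become more conservative as the characteristic roots of $R$ become more unequal.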
In concluding, we remark that the simultaneous confidence bounds obtained in section 3 will be shorter than the traditional ones when the number of linear functions is not too large, and in some cases they may nearly be the shortest. All the results of section 3 can easily be extended to regression-like parameters (in testing independence of two sets), to testing the multicollinearity of means (or to covariance analysis), and to step-down procedures, but they are not given here.

I am thankful to Dr. D. Basu for some suggestions during the preparation of this paper and to Professor N. L. Johnson for going through the manuscript.

Note: The results of sections 2 and 3 can be shown to be true for complex multivariate Gaussian distributions with the necessary modifications. These results will be presented in a later report.
REFERENCES

[1] ANDERSON, T. W. (1955). The integral of a symmetric unimodal function over a symmetric convex set and some probability inequalities. Proc. Amer. Math. Soc., 6, 170-176.

[2] BANERJEE, Saibal Kumar (1960). Approximate confidence interval for linear functions of means of k populations when the variances are unequal. Sankhya, 22, 357-358.

[3] BANERJEE, Saibal Kumar (1961). On the confidence interval for two-means problem based on separate estimates of variances and tabulated values of t-table. Sankhya, 23 (Series A), 359-378.

[4] CRAMER, H. (1946). Mathematical Methods of Statistics. Princeton University Press, Princeton. p. 176.

[5] DAVID, H. A. (1956). On the application to statistics of an elementary theorem in probability. Biometrika, 43, 85-91.

[6] DUNN, Olive Jean (1958). Estimation of the means of dependent variables. Ann. Math. Statist., 29, 1095-1111.

[7] DUNN, Olive Jean (1959). Confidence intervals for the means of dependent, normally distributed variables. J. Amer. Statist. Assoc., 54, 613-621.

[8] DUNN, Olive Jean (1961). Multiple comparisons among means. J. Amer. Statist. Assoc., 56, 52-64.

[9] DUNNETT, C. W. and SOBEL, M. (1955). Approximations to the probability integral and certain percentage points of a multivariate analogue of Student's t-distribution. Biometrika, 42, 258-260.

[10] KHATRI, C. G. (1964). On a MANOVA model applied to problems in growth curve. Institute of Statistics, Mimeograph Series No. 399, Chapel Hill.

[11] NAIR, K. R. (1948). The distribution of the extreme deviate from the sample mean and its studentized form. Biometrika, 35, 118-144.

[12] PILLAI, K. C. S. and RAMACHANDRAN, K. V. (1954). Distribution of a studentized order statistic. Ann. Math. Statist., 25, 565-572.

[13] POTTHOFF, R. F. and ROY, S. N. (1964). A generalized MANOVA model useful especially for growth curve problems. Biometrika, 51 (to be published).

[14] RAO, C. R. (1959). Some problems involving linear hypotheses in multivariate analysis. Biometrika, 46, 49-58.

[15] ROY, S. N. and BOSE, R. C. (1953). Simultaneous confidence interval estimation. Ann. Math. Statist., 24, 513-536.

[16] ROY, S. N. and GNANADESIKAN, R. (1957). Further contributions to multivariate confidence bounds. Biometrika, 44, 399-410.

[17] ROY, S. N. and GNANADESIKAN, R. (1958). A note on "Further contributions to multivariate confidence bounds, Biometrika, 44 (1957), pp. 399-410". Biometrika, 45, p. 531.

[18] ROY, S. N. (1961). Some remarks on normal multivariate analysis of variance (I). Le Plan d'Experiences, pp. 99-113.

[19] ROY, S. N. (1957). Some Aspects of Multivariate Analysis. John Wiley and Sons, New York.

[20] RUBEN, Harold (1962). Probability contents of regions under spherical normal distributions IV: The distribution of homogeneous and nonhomogeneous quadratic functions of normal variables. Ann. Math. Statist., 33, 542-570.

[21] SCHEFFE, Henry (1953). A method of judging all contrasts in the analysis of variance. Biometrika, 40, 87-104.

[22] SIOTANI, M. (1959). The extreme value of the generalized distances of the individual points in the multivariate normal sample. Ann. Inst. Stat. Math., 10, 183-203.

[23] SIOTANI, M. (1959). On the range in the multivariate case. Proc. Inst. Stat. Math., 155-165 (in Japanese).

[24] SIOTANI, M. (1960). Notes on multivariate confidence bounds. Ann. Inst. Stat. Math., 11, 167-182.

[25] TUKEY, John W. (1953). The problem of multiple comparisons. Mimeographed notes, Princeton University.