
A REPORT ON SOME RESULTS IN SIMULTANEOUS
OR JOINT LINEAR (POINT) ESTIMATION*

by

S. N. Roy

University of North Carolina, Chapel Hill, N. C.

Institute of Statistics Mimeo Series No. 394

June 1964

*This research was supported by the National Science Foundation Grant No. GP-1660.
Summary.
The unbiased minimum variance linear estimate of an estimable linear function of parameters under a linear model has long been well known and has long figured prominently in statistical literature. The main results here have been expressed in different forms, among which is the one used by the author and his collaborators [2, 3]. The problem may be called one of single linear estimation for the univariate case: univariate in the physical sense because all experimental units are measured on one variate, and also in the mathematical sense because we have only one unknown variance $\sigma^2$ to deal with.
As a follow-up on this problem three other developments suggest themselves to us, namely (1) the problem of estimating with linear functions of the observations several linear functions of the parameters under a linear model, in the univariate case, (2) the problem of a single linear estimation in the multivariate case, where each experimental unit is measured on $p$ variates and we have an unknown $p \times p$ dispersion matrix to deal with, and (3) the problem of simultaneous linear estimation under the model of problem (2).
In this paper, by way of a necessary introduction to the following sections, section 1 reproduces, in the form given in [2, 3], the main results of a single linear estimation for the univariate case. Section 2 discusses the problem of simultaneous linear estimation for the univariate case, section 3 that of single linear estimation for the multivariate case, and section 4 that of simultaneous linear estimation for the multivariate case. The results offered in sections 2, 3 and 4 have been obtained in the course of frequent discussions between the author and some of his collaborators in this particular area and, with the exception of one particular result, have not been published so far by our group nor, as far as we are aware, by anybody else.
Of this particular result a proof is given here different from the one given in an earlier mimeographed paper [4]. Under the different models of linear estimation introduced in the following sections, linear functions of parameters are estimated with linear functions of the observations, and this is what is traditionally called linear estimation. However, there are other principles or methods of estimation of the same linear functions, including the important and interesting method of least squares, leading in some situations to the same estimates as in linear estimation and in other situations to different estimates. Except in section 1, where an interesting tie-up with the least-squares solution is just mentioned, the method of least squares does not figure elsewhere in this paper. We shall discuss in a later communication such connection as we have noticed between the two approaches, for the estimates of linear parametric functions under the models of sections 2, 3 and 4.
1. Single linear estimation for the univariate case.

The model. $x' = (x_1, x_2, \ldots, x_n)$ is a set of uncorrelated random variates with common variance $\sigma^2$ and expectation given by

(1.1)  $E(x) = A\,\xi$,

where $A$ is a given $n \times m$ matrix, to be called the model matrix, which depends partly upon the design of the experiment and partly upon what experimental statisticians call the "model", and $\xi$ is a set of $m$ unknown parameters. The $x_j$'s, of course, are measurements of a single character on $n$ experimental units. Without any loss of generality we assume that $m \le n$; let rank $A = r \le m \le n$, and let $A_I$ be a basis and $A_D$ the so-called set of dependent column vectors, so that we have the partitionings

$A = [\,A_I : A_D\,]$ ($A_I$ being $n \times r$ and $A_D$ being $n \times (m-r)$), $\quad \xi' = [\,\xi_I' : \xi_D'\,]$, $\quad c' = [\,c_I' : c_D'\,]$.

The problem is to estimate $c'\xi$ with $b'x$, where $\xi$ is unknown, $x$ is observed, the $1 \times m$ vector $c'$ is given, and the $1 \times n$ vector $b'$ is to be determined. The criteria used to determine $b$ are that

(1.2)  (i) $E(b'x) = c'\xi$ (for all $\xi$), and (ii) $V(b'x)$ is to be a minimum.

(i) is called the unbiasedness condition and only partially determines $b'$; (ii) is called the minimum variance condition and, together with (i), completely determines $b$. (i) also imposes a restriction on $c$, usually called the estimability condition. Linear estimation thus involves (i) a linear model, (ii) a linear function of $\xi$ to be estimated, and (iii) a linear estimator.
The main results of a single linear estimation for the univariate case are summarized below in (1.3)-(1.8). The estimability condition is that

(1.3)  rank $\begin{bmatrix} A \\ c' \end{bmatrix} =$ rank $A = r$,

or equivalently

(1.3.1)  $c_D' = c_I'(A_I'A_I)^{-1}A_I'A_D$.

The unbiased minimum variance estimate of an estimable $c'\xi$ is

(1.4)  $c_I'(A_I'A_I)^{-1}A_I'\,x$,

which is often called the best linear unbiased estimate or BLUE. The variance of this estimate is

(1.5)  $\sigma^2\, c_I'(A_I'A_I)^{-1}c_I$.

An unbiased estimate of $\sigma^2$ is

(1.6)  $x'[\,I(n) - A_I(A_I'A_I)^{-1}A_I'\,]\,x/(n-r)$.

(1.7) All these results are invariant under the choice of a basis $A_I$ for the matrix $A$. Observe that the choice of a basis $A_I$ induces a partitioning of $A$, a partitioning of $\xi$, and a partitioning of $c'$.

(1.8) The least squares estimate of an estimable linear function $c'\xi$ is the same as the BLUE estimate given by (1.4).
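For illustration, the results (1.4)-(1.6) can be checked numerically in a few lines. In the sketch below the one-way layout, the contrast, and all names are hypothetical, and $A$ is taken of full column rank so that $A_I = A$ and $c_I = c$; it is a minimal sketch, not anything appearing in the original report.

```python
import numpy as np

# Hypothetical one-way layout with 6 units and m = 3 cell means; A has full
# column rank, so A_I = A and c_I = c. All names and numbers are illustrative.
A = np.array([[1, 0, 0],
              [1, 0, 0],
              [1, 0, 0],
              [0, 1, 0],
              [0, 1, 0],
              [0, 0, 1]], dtype=float)
n, r = A.shape
c = np.array([1.0, -1.0, 0.0])               # contrast xi_1 - xi_2 (estimable)

rng = np.random.default_rng(0)
xi, sigma = np.array([2.0, 5.0, 1.0]), 0.5
x = A @ xi + sigma * rng.standard_normal(n)  # model (1.1), dispersion sigma^2 I

G = np.linalg.inv(A.T @ A)                   # (A_I' A_I)^{-1}
blue = c @ G @ A.T @ x                       # (1.4): BLUE of c' xi
var_blue = sigma**2 * (c @ G @ c)            # (1.5): variance of the BLUE
resid = x - A @ (G @ (A.T @ x))
s2 = resid @ resid / (n - r)                 # (1.6): unbiased estimate of sigma^2
print(blue, var_blue, s2)
```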
1.1 Some remarks on the derivation of the results (1.3)-(1.5). Although the above results are derived elsewhere [2], it is worthwhile to sketch briefly the derivation of the results (1.3)-(1.5), inasmuch as the tools involved are the same as will be used in sections 2, 3 and 4. For the derivation the two following mathematical lemmas, in particular, are needed.

Lemma I. There exist the factorizations

(1.9)  $A_I' = T_1 L_1$, $\quad A_D' = T_2 L_1$,

where $T_1$ is an $r \times r$ non-singular triangular matrix (say lower triangular) and $L_1$ is $r \times n$ and orthonormal, i.e., $L_1 L_1' = I(r)$. All that is needed is just this existence and not the expressions for the right-side matrices. (1.9) will imply that

(1.9.1)  $A_I'A_I = T_1 T_1'$, or $(A_I'A_I)^{-1} = T_1'^{-1}T_1^{-1}$.

Lemma II. $L_1$ could be completed by an $(n-r) \times n$ matrix $L_2$ into an orthogonal matrix, so that

(1.10)  $\begin{bmatrix} L_1 \\ L_2 \end{bmatrix}[\,L_1' : L_2'\,] = \begin{bmatrix} I(r) & 0 \\ 0 & I(n-r) \end{bmatrix}$ and $[\,L_1' : L_2'\,]\begin{bmatrix} L_1 \\ L_2 \end{bmatrix} = L_1'L_1 + L_2'L_2 = I(n)$.

Applying (1.9) to the expectation model (1.1) and the unbiasedness requirement (1.2)(i), we have

(1.11)  $b'L_1' = c_I'\,T_1'^{-1}$

and

(1.12)  $c_D' = c_I'\,T_1'^{-1}T_2' = c_I'(A_I'A_I)^{-1}A_I'A_D$.

The condition (1.12) is the same as (1.3.1), which is equivalent to (1.3). The condition (1.11), as observed earlier, only partially determines $b$. For the minimum variance estimate we proceed as follows:

(1.13)  $V(b'x)/\sigma^2 = b'b = b'[\,L_1'L_1 + L_2'L_2\,]\,b = c_I'(A_I'A_I)^{-1}c_I + (b'L_2')(L_2 b)$,

and the extreme side of the equations is minimized by putting

(1.14)  $b'L_2' = 0$,

and this, together with (1.11) and (1.9.1), gives

(1.15)  $b' = c_I'\,T_1'^{-1}L_1 = c_I'(A_I'A_I)^{-1}A_I'$,

whence follows (1.4). It is easy to check that (1.5) has been already obtained en route.
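The two lemmas are constructive in modern terms: a QR factorization of $A_I$ supplies $T_1$ and $L_1$, and a full QR supplies the completion $L_2$ of Lemma II. The following minimal sketch (the random $A_I$ is hypothetical) verifies (1.9), (1.9.1) and (1.10) numerically.

```python
import numpy as np

rng = np.random.default_rng(1)
n, r = 7, 3
A_I = rng.standard_normal((n, r))            # hypothetical basis of rank r

# Lemma I via QR: A_I = Q R gives A_I' = R' Q' = T1 L1, with T1 = R' lower
# triangular and non-singular, and L1 = Q' having orthonormal rows.
Q, R = np.linalg.qr(A_I)                     # Q: n x r, R: r x r upper triangular
T1, L1 = R.T, Q.T
assert np.allclose(A_I.T, T1 @ L1)           # (1.9)
assert np.allclose(L1 @ L1.T, np.eye(r))     # L1 L1' = I(r)
assert np.allclose(A_I.T @ A_I, T1 @ T1.T)   # (1.9.1)

# Lemma II: complete L1 to an orthogonal matrix with a full QR factorization.
Qf, _ = np.linalg.qr(A_I, mode='complete')   # Qf: n x n orthogonal
L2 = Qf[:, r:].T                             # the (n-r) x n completion
assert np.allclose(L1.T @ L1 + L2.T @ L2, np.eye(n))   # (1.10)
```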
1.2 Comparison between two different designs, in terms of a single linear estimation. For simplicity of discussion, let us consider a connected block-treatment type of design, i.e., a two-way classification with blocks and treatments. The $n$ will be the same in both, but the $A$ matrix and the $A_I$ will usually be different, and so also the $c_I'$. The part of $\xi$ that relates to the treatment contributions will stay the same, and the linear function to be estimated will involve only this part of $\xi$. That design will be the better one for which

$\sigma_i^2\, c_I'(A_{Ii}'A_{Ii})^{-1}c_I$  $\quad (i = 1, 2)$

is less. It may be observed that, for connected designs, rank $A = b + v - 1$, and hence we have the same $r$ for both; also that, since we are considering only the treatment part of $\xi$, the block part of $\xi$ will be absent, and hence also the corresponding part of $c_I'$. We note, in addition, that if both designs are equal block size designs, then $\sigma_i^2$ will be less for the design whose block size $k_i$ is less.
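To make the comparison concrete, the following sketch compares two hypothetical connected block-treatment designs by this criterion. The computation is a modern rephrasing rather than anything in the report: for an estimable $c'\xi$ the variance factor $c_I'(A_I'A_I)^{-1}c_I$ is basis-invariant and may equivalently be computed as $c'(A'A)^{+}c$ with a generalized inverse, which avoids choosing a basis explicitly.

```python
import numpy as np

def model_matrix(blocks, treats, v, b):
    """Two-way additive model: columns = [mean | b block effects | v treatment effects]."""
    n = len(blocks)
    A = np.zeros((n, 1 + b + v))
    A[:, 0] = 1.0
    for u, (j, t) in enumerate(zip(blocks, treats)):
        A[u, 1 + j] = 1.0
        A[u, 1 + b + t] = 1.0
    return A

# Two hypothetical connected designs with the same n = 6, b = 2, v = 3.
A1 = model_matrix(blocks=[0,0,0,1,1,1], treats=[0,1,2,0,1,2], v=3, b=2)  # complete blocks
A2 = model_matrix(blocks=[0,0,0,1,1,1], treats=[0,0,1,1,2,2], v=3, b=2)  # unbalanced

c = np.zeros(6); c[3], c[4] = 1.0, -1.0      # treatment contrast tau_0 - tau_1

# For an estimable c' xi, the variance factor c_I'(A_I'A_I)^{-1} c_I is
# invariant to the basis and can be computed with a generalized inverse:
for A in (A1, A2):
    vf = c @ np.linalg.pinv(A.T @ A) @ c
    print(vf)    # smaller factor (times sigma_i^2) means the better design
```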
1.3 The quasi-multivariate model [2]. When $x$ has the symmetric positive definite dispersion matrix $\sigma^2\Sigma$, where the $n \times n$ matrix $\Sigma$ is supposed to be known, i.e., the dispersion matrix is known except for a scalar multiplier, the expectation set-up is the same as (1.1) and, under this general model, the linear estimation problem is the same as before. To handle this we use the matrix factorization $\Sigma = T\,T'$, where $T$ is a lower triangular non-singular matrix, put $T^{-1}x = y$, note that $y$ has the dispersion matrix $\sigma^2 I$, which practically throws the problem back on the previous case, work in terms of $y$, and then transform back to $x$. The estimability condition (1.3.1) now becomes

(1.16)  $c_D' = c_I'(A_I'\Sigma^{-1}A_I)^{-1}A_I'\Sigma^{-1}A_D = c_I'\,T_1'^{-1}(L_1\Sigma^{-1}L_1')^{-1}T_1^{-1}\,T_1(L_1\Sigma^{-1}L_1')\,T_2' = c_I'(A_I'A_I)^{-1}A_I'A_D$.

Thus the estimability condition stays the same as before, which can also be checked more directly. The unbiased minimum variance estimate now becomes

(1.17)  $c_I'(A_I'\Sigma^{-1}A_I)^{-1}A_I'\Sigma^{-1}x$,

and the variance of this estimate becomes

(1.18)  $\sigma^2\, c_I'(A_I'\Sigma^{-1}A_I)^{-1}c_I$.
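A minimal numerical sketch of this whitening argument (the dispersion pattern $\Sigma$ and all names below are hypothetical): the estimate (1.17) computed directly agrees with the section-1 estimate applied to $y = T^{-1}x$.

```python
import numpy as np

rng = np.random.default_rng(2)
n, r = 6, 2
A_I = rng.standard_normal((n, r))            # hypothetical basis
c_I = np.array([1.0, 2.0])

B = rng.standard_normal((n, n))
Sigma = B @ B.T + n * np.eye(n)              # known n x n dispersion pattern

T = np.linalg.cholesky(Sigma)                # Sigma = T T', T lower triangular
x = rng.standard_normal(n)                   # any observed vector
y = np.linalg.solve(T, x)                    # y = T^{-1} x has dispersion sigma^2 I
A_y = np.linalg.solve(T, A_I)                # transformed model matrix

# (1.17) computed directly ...
Si = np.linalg.inv(Sigma)
est_direct = c_I @ np.linalg.inv(A_I.T @ Si @ A_I) @ A_I.T @ Si @ x
# ... agrees with the section-1 estimate in the whitened coordinates:
est_whitened = c_I @ np.linalg.inv(A_y.T @ A_y) @ A_y.T @ y
assert np.allclose(est_direct, est_whitened)
var_factor = c_I @ np.linalg.inv(A_I.T @ Si @ A_I) @ c_I   # (1.18) / sigma^2
```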
2. Simultaneous linear estimation for the univariate case.

With the univariate model of the previous section, the problem now is one of estimating the vector $C\xi$ with the vector $Bx$, where $C$ is a given $s \times m$ matrix of rank $s \le r$ and the $s \times n$ matrix $B$ is to be determined. As to the total criterion to be used, we hold off for a moment, while we observe that a part of the total criterion could well be unbiasedness, as before, leading to the estimability condition

(2.1)  rank $\begin{bmatrix} A \\ C \end{bmatrix}$ (of order $(n+s) \times m$) $=$ rank $A = r$,

or equivalently,

(2.2)  $C_D = C_I(A_I'A_I)^{-1}A_I'A_D$.

At the back of this we have, as before, from unbiasedness, a condition similar to (1.11), now given by

(2.3)  $B L_1' = C_I\,T_1'^{-1}$.

Let us now denote by $c_i'$ and $b_i'$ $(i = 1, 2, \ldots, s)$ the row vectors of $C$ and $B$, pick up from (1.4) the BLUE for each $c_i'\xi$, and set up and study the properties of the following estimator for $C\xi$:

(2.4)  $C_I(A_I'A_I)^{-1}A_I'\,x$,
which we shall just call, for shortness, the BLUE estimator.
Its dispersion
matrix is easily seen to be given by
(2.5)  $\sigma^2\, C_I(A_I'A_I)^{-1}C_I' = \sigma^2 M_0$ (say).

For any competitor $B^*x$ satisfying the unbiasedness condition (2.3), the dispersion matrix will be given by

(2.6)  $\sigma^2\, B^*B^{*\prime} = \sigma^2\, B^*[\,L_1'L_1 + L_2'L_2\,]B^{*\prime} = \sigma^2[\,M_0 + M^*\,]$ (say), where $M^* = (B^*L_2')(L_2 B^{*\prime})$.
We observe that $M_0$ is an $s \times s$ symmetric positive definite matrix and $M^*$ is an $s \times s$ symmetric and at least positive semi-definite matrix of the same rank as $B^*L_2'$. This implies, in terms of characteristic roots, that

(2.7)  any ch$[\,M_0 + M^*\,] \ge$ the corresponding ch $M_0$,

the inequality holding for at least one root, unless $M^*$ is zero, i.e., $B^*L_2' = 0$, in which case we are back in the BLUE (2.4).
As a consequence, among other results, we have in particular the following:

(2.8)  (i) ch$_{\max} M_0 \le$ ch$_{\max}[\,M_0 + M^*\,]$, (ii) tr $M_0 \le$ tr$[\,M_0 + M^*\,]$, and (iii) $|M_0| \le |M_0 + M^*|$.

In fact, a strict inequality like (ii) or (iii) would hold for all the symmetric functions of the roots (in the sense of the theory of equations) and, in general, for any function of the roots which is such that it decreases if any root is decreased without decreasing the other roots. Thus the BLUE defined by (2.4) is a unique solution in terms of minimization of any such function of the roots, including the trace and the determinant, but is not a unique solution (though no doubt a solution) in terms of minimizing the root of any order, including the largest root.
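A minimal numerical sketch of (2.5)-(2.8) (all matrices below are hypothetical; $A$ is taken of full column rank, so the unbiasedness condition reads $B^*A = C$ and every competitor is of the form $B_0 + D$ with $DA = 0$):

```python
import numpy as np

rng = np.random.default_rng(3)
n, m, s = 8, 3, 2
A = rng.standard_normal((n, m))              # hypothetical full-rank model matrix
C = rng.standard_normal((s, m))

B0 = C @ np.linalg.inv(A.T @ A) @ A.T        # BLUE coefficient matrix, as in (2.4)

# Any competitor B* = B0 + D with D A = 0 is also unbiased; build D from the
# orthogonal complement of the column space of A:
Qf, _ = np.linalg.qr(A, mode='complete')
D = rng.standard_normal((s, n - m)) @ Qf[:, m:].T
Bs = B0 + D

M0 = B0 @ B0.T                               # dispersion/sigma^2 of the BLUE, (2.5)
Ms = Bs @ Bs.T                               # = M0 + M*, as in (2.6)

# (2.8): the BLUE wins on largest root, trace and determinant alike.
print(np.linalg.eigvalsh(M0).max(), np.linalg.eigvalsh(Ms).max())   # (i)
print(np.trace(M0), np.trace(Ms))                                   # (ii)
print(np.linalg.det(M0), np.linalg.det(Ms))                         # (iii)
```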
2.1 A physical interpretation of the largest root, the trace and the determinant minimization criteria. If we attach, on economic or utilitarian considerations, different economic weights $(a_1, a_2, \ldots, a_s) = a'$, normalized by $a'a = 1$, to the components of $C\xi$, and thus to those of any estimator $Bx$, we come out with a single linear function $a'C\xi$ estimated by $a'Bx$, the variance of the estimator being given by $\sigma^2(a'BB'a)$.

(i) The normalized weight system $a$ for which $(a'BB'a)$ is a maximum $(=$ ch$_{\max}[BB'])$ is thus the most unfavorable weight system, producing the largest variance for the estimator. Choosing $B$, subject to (2.3), so as to minimize ch$_{\max}[BB']$, which is the largest root minimization criterion, is therefore a minimax criterion, in this sense.
(ii) For the trace criterion, we observe first that $BB' = \Gamma D_\gamma \Gamma'$, where $\Gamma$ is an orthogonal matrix and the $\gamma$'s are the roots (all positive) of $BB'$. Consider now

$J = \int_{a'a=1} a'[BB']\,a \; d(a) = \int_{a'a=1} a'[\Gamma D_\gamma \Gamma']\,a \; d(a) = \int_{a^{*\prime}a^*=1} a^{*\prime} D_\gamma\, a^* \; d(a^*) = \sum_{i=1}^{s} \gamma_i \int_{a^{*\prime}a^*=1} a_i^{*2} \; d(a^*) = \text{const.} \times \text{tr}\,[BB']$.

Thus the trace is proportional to the average variance, the averaging being over this particular measure on $a$, and hence the trace minimization (over $B$, subject to (2.3)) is minimizing this average variance.
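The averaging statement can be checked by Monte Carlo (a sketch only; the matrix $B$ is hypothetical): for $a$ uniform on the unit sphere in $R^s$, the average of $a'[BB']a$ is tr$[BB']/s$.

```python
import numpy as np

rng = np.random.default_rng(4)
s = 4
B = rng.standard_normal((s, 10))             # any hypothetical coefficient matrix
M = B @ B.T

# Average a' M a over weight vectors a uniform on the unit sphere in R^s:
a = rng.standard_normal((200000, s))
a /= np.linalg.norm(a, axis=1, keepdims=True)
avg = np.einsum('ki,ij,kj->k', a, M, a).mean()

print(avg, np.trace(M) / s)                  # agree: average variance = tr(M)/s
```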
(iii) If the $x$ of section 1 is, in addition, $N[E(x), \sigma^2 I(n)]$, then any unbiased vector estimator $Bx$ of $C\xi$ is $N[C\xi, \sigma^2 BB']$, and in this case it is well known that $|BB'|$, the determinant of the dispersion matrix, is proportional to just the square of the volume of the ellipsoid given by

$[Bx - C\xi]'\,[\sigma^2 BB']^{-1}\,[Bx - C\xi] = \text{constant}$,

which has a probability content independent of $B$ and $C$. Thus minimizing the determinant merely means minimizing the volume for this given probability content. However, if $x$ is not multinormal, this kind of interpretation gets a lot fuzzier.
(iv) We also have, incidentally, the important result that the BLUE estimator vector (2.4) produces, for any single linear function $a'C\xi$, an estimator which could not be improved upon by any other estimator vector.
2.2 Comparison between two different designs, in terms of simultaneous linear estimation [1]. As in section 1, for each design we take the corresponding BLUE estimator given by (2.4). Then, as in section 1, aside from the difference in the $\sigma_i^2$'s, the comparison is now between the matrices

$\sigma_i^2\, C_I(A_{Ii}'A_{Ii})^{-1}C_I'$  $\quad (i = 1, 2)$,

and the two designs might, and in general would, compare differently according as we use the largest root or trace or determinant criterion on the matrix given above.
3. Single linear estimation for the multivariate case.

The model. We have here a $p \times n$ matrix $X$ consisting of $n$ uncorrelated $p$-dimensional column vectors with a common but unknown $p \times p$ dispersion matrix $\Sigma$. There are $n$ experimental units, and on each, $p$ different variates are measured. If we roll out $X$ into a $(1 \times pn)$ row vector $[x_1', x_2', \ldots, x_p'] = x^{*\prime}$ (say), then $x^*$ has a dispersion matrix $[\,I(n) \otimes \Sigma\,]$, in the notation of Kronecker product. For the expectation, let us first consider the following model, which is less than the most general one and which is based on a common model matrix $A$ for the different variates and an $m \times p$ matrix of parameters $\Xi$:

(3.1)  $E(X') = A\,\Xi$.
If we now roll out $\Xi$ into a vector $[\xi_1', \xi_2', \ldots, \xi_p'] = \xi^{*\prime}$ (say), then (3.1) can be rewritten as

(3.2)  $E(x^*) = [\,A \otimes I(p)\,]\,\xi^*$.

Suppose now that under (3.2) we want the BLUE estimate of a linear function

$c^{*\prime}\xi^* = \sum_{i=1}^{p} c_i'\,\xi_i$ (say).

From section 1.3 the estimability condition is now
(3.3)  rank $\begin{bmatrix} A \otimes I(p) \\ c^{*\prime} \end{bmatrix} =$ rank $[\,A \otimes I(p)\,]$,

which means that

rank $\begin{bmatrix} A \\ c_i' \end{bmatrix} =$ rank $A$ $\quad (i = 1, 2, \ldots, p)$,

i.e., that each $c_i'\xi_i$ $(i = 1, 2, \ldots, p)$ should be separately estimable. We denote $[\,c_{I1}', c_{I2}', \ldots, c_{Ip}'\,]$ by $c_I^{*\prime}$. The BLUE estimate, using section 1.3, will be given by
(3.4)  $c_I^{*\prime}\,[\,(A_I \otimes I(p))'(I(n) \otimes \Sigma)^{-1}(A_I \otimes I(p))\,]^{-1}(A_I \otimes I(p))'\,[\,I(n) \otimes \Sigma\,]^{-1}x^*$
$\quad = c_I^{*\prime}\,[\,A_I'A_I \otimes \Sigma^{-1}\,]^{-1}[\,A_I' \otimes \Sigma^{-1}\,]\,x^*$
$\quad = \sum_{i=1}^{p} c_{Ii}'(A_I'A_I)^{-1}A_I'\,x_i$,

which is the sum of the BLUE estimates for the individual $c_{Ii}'\xi_i$'s $(i = 1, 2, \ldots, p)$.
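As a numerical check (a sketch only; all matrices below are hypothetical) that the unknown $\Sigma$ drops out: the generalized least-squares estimate computed from the rolled-out model agrees with the sum of per-variate BLUEs whatever positive definite $\Sigma$ is inserted. The sketch writes the Kronecker factors in the now-common ordering ($\Sigma \otimes I(n)$ for the dispersion, $I(p) \otimes A_I$ for the model matrix), which matches the by-variate roll-out; the report's direct-product notation places the factors in the reverse order.

```python
import numpy as np

rng = np.random.default_rng(5)
n, r, p = 7, 2, 3
A_I = rng.standard_normal((n, r))            # common basis for all p variates
cI = rng.standard_normal((p, r))             # rows c_I1', ..., c_Ip'

B = rng.standard_normal((p, p))
Sigma = B @ B.T + p * np.eye(p)              # "unknown" p x p dispersion
Xi = rng.standard_normal((r, p))
X = (A_I @ Xi).T + np.linalg.cholesky(Sigma) @ rng.standard_normal((p, n))

# Rolled-out GLS estimate (the first line of (3.4)); modern Kronecker order:
K_A = np.kron(np.eye(p), A_I)                # model matrix of x*
K_S = np.kron(np.linalg.inv(Sigma), np.eye(n))   # inverse dispersion of x*
xstar = X.reshape(-1)                        # [x_1', ..., x_p']
cstar = cI.reshape(-1)
gls = cstar @ np.linalg.solve(K_A.T @ K_S @ K_A, K_A.T @ K_S @ xstar)

# Last line of (3.4): Sigma has dropped out -- a sum of per-variate BLUEs.
G = np.linalg.inv(A_I.T @ A_I)
ols_sum = sum(cI[i] @ G @ A_I.T @ X[i] for i in range(p))
assert np.allclose(gls, ols_sum)
```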
The variance of this estimate, using again section 1.3, is given by

(3.5)  $c_I^{*\prime}\,[\,(A_I \otimes I(p))'(I(n) \otimes \Sigma)^{-1}(A_I \otimes I(p))\,]^{-1}c_I^* = c_I^{*\prime}\,[\,(A_I'A_I)^{-1} \otimes \Sigma\,]\,c_I^* = \sum_{i,j=1}^{p} \sigma_{ij}\, c_{Ii}'(A_I'A_I)^{-1}c_{Ij}$,

where $\Sigma = [\sigma_{ij}]$. For an unbiased estimate of the dispersion matrix we have, as an analogue of (1.6), the expression

(3.6)  $X\,[\,I(n) - A_I(A_I'A_I)^{-1}A_I'\,]\,X'/(n-r)$.
In the development of (3.4)-(3.6) we have tacitly assumed that $\Sigma$ is a symmetric positive definite matrix. If it were symmetric positive semi-definite (as in the study of certain nonparametric problems), then, while (3.4) is a BLUE estimate, it is not unique, but there could be others, depending now on the "orthogonal completion" of $\Sigma$. This will be discussed in a later paper.

Let us now go back to the case we were considering, namely when $\Sigma$ is symmetric positive definite. The most significant fact is that the unknown $\Sigma$ drops out of the BLUE estimate. This is possible only because the model matrix is of the form $A \otimes I(p)$ for the different variates, while the dispersion matrix is of the form $I(n) \otimes \Sigma$. This would not happen if we had different $A_i$'s for the different variates, or, in other words, a pseudo-diagonal model matrix with diagonal submatrices of the form $A_i$ $(i = 1, 2, \ldots, p)$. However, the estimability condition would still be of the simple form

rank $\begin{bmatrix} A_i \\ c_i' \end{bmatrix} =$ rank $A_i$ $\quad (i = 1, 2, \ldots, p)$.
For some time now we have, in fact, been working in terms of what might be called multivariate designs, consisting of $(n_i \times m_i)$ model matrices $A_i$ $(i = 1, 2, \ldots, p)$, where even the $n_i$ and $m_i$ might be different for the different variates. The possibility of linear estimation under such models will be discussed in subsequent papers. For special patterns of the different $A$'s and/or for special patterns of $\Sigma$, something useful (from which the unknown $\Sigma$ drops out) is still available.

Going back to the univariate model matrix $A$ we have been considering so far, we now observe that, because of (3.5), the comparison between two different designs would now be more complicated and uncertain than in sections 1 and 2. Whatever we have been able to say meaningfully about comparison of designs in this set-up will be discussed later.
3.1 The same problem under a somewhat more general model. While staying with the same $A$ for the different variates and the same problem of a single linear estimation of $c^{*\prime}\xi^*$, let us change the expectation model into

(3.7)  $E(X') = A\,\Xi\,G$,

where $X'$ is $n \times p$, $A$ is $n \times m$, $\Xi$ is $m \times q$, $G$ is $q \times p$, and rank $G = q < p$. One way (though by no means a unique way) to go at the problem of a single linear estimation in this set-up is to transform (3.7) to

(3.8)  $X'\,G'(GG')^{-1} = X'G^{*\prime}$ (say) $= Y'$, where $G^* = (GG')^{-1}G$,

so that $E(Y') = A\,\Xi$. Note that $V(Y')$ is determined by $G^*\Sigma\,G^{*\prime}$, the common $q \times q$ dispersion matrix of the columns of $Y$. Next we seek, as before, to estimate $c^{*\prime}\xi^*$ with $b^{*\prime}y^*$, where $y^*$ is rolled out as a vector from $Y$ in the same way as $x^*$ is rolled out as a vector from $X$. The estimability condition will stay the same as before, while the BLUE estimate in terms of $y^*$ or the $y_i$'s will come out in the same form as (3.4). This, transformed to the $x_i$'s, will come out as
(3.9)  $\sum_{i=1}^{q} c_{Ii}'(A_I'A_I)^{-1}A_I'\,[G^*X]_i'$,

where $[G^*X]_i$ denotes the $i$-th row of the matrix $G^*X$. Likewise the variance of this estimate comes out as

(3.10)  $\sum_{i,j=1}^{q} \omega_{ij}\, c_{Ii}'(A_I'A_I)^{-1}c_{Ij}$,

where $\Omega = [\omega_{ij}] = G^*\Sigma\,G^{*\prime}$.

Even within the set-up of linear estimation there are other ways of going at the problem (in the sense of both formulation and execution) which will be discussed later.
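A minimal sketch of the reduction (3.8)-(3.9) (the matrices $G$, $\Xi$ and the noise level below are hypothetical): transforming $X$ by $G^* = (GG')^{-1}G$ brings the model back to the section-3 form, after which (3.9) is a sum of $q$ univariate BLUEs.

```python
import numpy as np

rng = np.random.default_rng(6)
n, r, p, q = 8, 2, 4, 2
A_I = rng.standard_normal((n, r))            # common basis, as in section 3
G = rng.standard_normal((q, p))              # q x p, rank q < p
Xi = rng.standard_normal((r, q))             # parameter matrix (basis part)
X = (A_I @ Xi @ G).T + 0.1 * rng.standard_normal((p, n))   # model (3.7); X is p x n

Gs = np.linalg.solve(G @ G.T, G)             # G* = (G G')^{-1} G
Y = Gs @ X                                   # rows of G* X; E(Y') = A_I Xi

cI = rng.standard_normal((q, r))             # hypothetical c_Ii' for each reduced variate
H = np.linalg.inv(A_I.T @ A_I)
est = sum(cI[i] @ H @ A_I.T @ Y[i] for i in range(q))      # (3.9)
true = sum(cI[i] @ Xi[:, i] for i in range(q))             # target c*' xi*
print(est, true)                             # close, since the noise is small
```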
4. Simultaneous linear estimation for the multivariate case. Under the same model as in section 3, the problem now is to estimate $C^*\xi^*$ with $B^*x^*$, where $C^*$ is $s \times mp$ and $B^*$ is $s \times np$. Let us write $C^*$ as $[\,C_1, C_2, \ldots, C_p\,]$; combining (2.1) with (3.3), we have now the estimability condition

(4.1)  rank $\begin{bmatrix} A \\ C_i \end{bmatrix} =$ rank $A$ $\quad (i = 1, 2, \ldots, p)$.
The BLUE estimate vector (from which, as in the BLUE estimate scalar of (3.4), the unknown dispersion matrix $\Sigma$ will drop out) is in a form analogous to (3.4) and is given by

(4.2)  $\sum_{i=1}^{p} C_{Ii}\,(A_I'A_I)^{-1}A_I'\,x_i$,

and the dispersion matrix of this estimate, in a form analogous to (3.5), is given by the $s \times s$ matrix

(4.3)  $\sum_{i,j=1}^{p} \sigma_{ij}\, C_{Ii}\,(A_I'A_I)^{-1}C_{Ij}'$.
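The estimate (4.2) and its dispersion matrix (4.3) are immediate to compute; in the minimal sketch below (all matrices are hypothetical) the root, trace and determinant criteria of section 2 can then be applied to (4.3).

```python
import numpy as np

rng = np.random.default_rng(7)
n, r, p, s = 7, 2, 3, 2
A_I = rng.standard_normal((n, r))
C = [rng.standard_normal((s, r)) for _ in range(p)]   # C_I1, ..., C_Ip (each s x r)

B = rng.standard_normal((p, p))
Sigma = B @ B.T + p * np.eye(p)              # [sigma_ij], unknown in practice
X = rng.standard_normal((p, n))              # stands in for the observed matrix

G = np.linalg.inv(A_I.T @ A_I)
blue_vec = sum(C[i] @ G @ A_I.T @ X[i] for i in range(p))        # (4.2), s-vector

disp = sum(Sigma[i, j] * (C[i] @ G @ C[j].T)                     # (4.3), s x s
           for i in range(p) for j in range(p))
print(blue_vec, np.linalg.eigvalsh(disp))    # section-2 criteria apply to disp
```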
As to comparison of two different designs in terms of this dispersion
matrix of the BLUE estimate vector, we can go back to the considerations of
section 2 and use the largest root or the trace or the determinant criterion
on this matrix for the different designs, and then notice the same kind of difficulty that we commented upon while discussing the problem of similar comparison
at the end of section 3.
REFERENCES

[1] Krishnaiah, P. R., "Simultaneous tests and the efficiency of generalized balanced incomplete block designs", Doctoral thesis, University of Minnesota, Minneapolis, Minn., 1963.

[2] Roy, S. N., Some Aspects of Multivariate Analysis, Chapter 12, John Wiley and Sons, 1957.

[3] Roy, S. N. and Gnanadesikan, R., "Some contributions to ANOVA in one or more dimensions", Ann. Math. Stat., vol. 30, pp. 304-317, 1959.

[4] Srivastava, J. N., "A note on the best linear unbiassed estimates for multivariate populations", Institute of Statistics, University of North Carolina, Mimeograph Series No. 339, 1962.