MINIMUM BIAS APPROXIMATION
OF A GENERAL REGRESSION MODEL
WITH AN APPLICATION TO RATIONAL MODELS

Robert Cote, A. R. Manson and R. J. Hader

Institute of Statistics
Mimeograph Series No. 756
Raleigh - July 1971
TABLE OF CONTENTS

LIST OF TABLES
LIST OF FIGURES
1. INTRODUCTION
2. DEVELOPMENT OF THE PROBLEM
   2.1 Choice of Minimum Bias Model in C
   2.2 Estimation of the min B_g Model
   2.3 Performance of the min B_g Approach
3. APPROXIMATING RATIONAL FUNCTIONS BY POLYNOMIAL FUNCTIONS
4. SUMMARY AND CONCLUSIONS
5. REFERENCES
6. APPENDIX
   A.1 Theorem 2.1
   A.2 Theorem 2.2
   A.3 Theorem 2.3
   A.4 Lemma 2.1
   A.5 Lemma 2.2
   A.6 Theorem 2.4
   A.7 Theorem 2.5
LIST OF TABLES

3.1 min B_g(γ) when θ' = (1,2,4) and η₀ = α₀ + α₁x
3.2 min V|min B_g designs when η₀ = α₀ + α₁x and γ = 1.01
3.3 min V|min B_g designs when η₀ = α₀ + α₁x and γ = 1.50
3.4 min V|min B_g designs when η₀ = α₀ + α₁x and γ = 5.00
3.5 min B_g(γ) when θ' = (1,2,4) and η₀ = α₀ + α₁x + α₂x²
3.6 min V|min B_g designs when η₀ = α₀ + α₁x + α₂x² and γ = 1.01
3.7 min V|min B_g designs when η₀ = α₀ + α₁x + α₂x² and γ = 1.50
3.8 min V|min B_g designs when η₀ = α₀ + α₁x + α₂x² and γ = 5.00
LIST OF FIGURES

3.1 Contours of constant V for L(N,N₀,N₁,N₂; γ) = L(4,0,1,1; 1.01)
3.2 Contours of constant V for L(N,N₀,N₁,N₂; γ) = L(4,0,1,1; 1.50)
3.3 Contours of constant V for L(N,N₀,N₁,N₂; γ) = L(8,4,1,1; 5.00)
3.4 Contours of constant V for Q(N,N₀,N₁,N₂; γ) = Q(10,0,4,1; 1.01)
3.5 Contours of constant V for Q(N,N₀,N₁,N₂; γ) = Q(5,1,1,1; 1.50)
3.6 Contours of constant V for Q(N,N₀,N₁,N₂; γ) = Q(6,2,1,1; 5.00)
1. INTRODUCTION

The problem of optimum design in regression has drawn the attention of many writers. In particular, Elfving (1952), Kiefer (1959, 1961), Kiefer and Wolfowitz (1959), and Hoel and Levine (1964) investigated different optimality criteria applying to regression problems. Also, from a different viewpoint, Folks (1958) and Box and Draper (1959, 1963) introduced bias considerations in their optimality criterion.
The latter studied the situation where the model, η(x,θ), is a polynomial of degree d + k − 1 which is approximated by a polynomial of degree d − 1. The vector x₁ is made up of the terms required for the polynomial of degree d − 1; the vector x₂ is made up of the additional higher order terms required for the polynomial of degree d + k − 1, while θ₁ and θ₂ are the corresponding vectors of regression coefficients. The estimator of η(x) is ŷ(x) = x₁'b₁, where b₁ = (X₁'X₁)⁻¹X₁'Y and X₁ is the matrix of values taken by the variates in x₁ for the N experimental runs; Y is the vector of N uncorrelated random variables with E(Yᵢ) = η(xᵢ) and E[Yᵢ − η(xᵢ)]² = σ² (i = 1, 2, ..., N). Box and Draper were primarily interested in finding designs which minimize the mean square error (MSE) of ŷ(x) averaged over some region of interest R, namely,

    J = (NΩ/σ²) ∫_R E[ŷ(x) − η(x)]² dx ,

where Ω⁻¹ = ∫_R dx. J can be split into two parts: J = V + B, where

    V = (NΩ/σ²) ∫_R Var[ŷ(x)] dx ,

the averaged variance of ŷ(x) over R, and

    B = (NΩ/σ²) ∫_R {E[ŷ(x)] − η(x)}² dx ,

the averaged squared bias of ŷ(x) over R. In their work they noted that unless V was many times larger than B, the minimum J designs were remarkably close to those obtained by ignoring V completely. Thus, they showed that to minimize B alone the design matrix X = [X₁:X₂] should satisfy

    (X₁'X₁)⁻¹X₁'X₂ = W₁₁⁻¹W₁₂                                    (1.1)

where X₂ is the matrix of values of x₂ and W₁ⱼ = Ω ∫_R x₁xⱼ' dx, j = 1, 2. Equation (1.1) will be called the Box-Draper conditions for minimum B.
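As a numerical aside (not part of the original report), condition (1.1) can be checked directly in the simplest setting: fitting a straight line when the true model is quadratic over R = [−1, 1] with uniform weight, so that x₁' = (1, x), x₂ = x², W₁₁ = diag(1, 1/3) and W₁₂' = (1/3, 0). The design points below are illustrative choices, not taken from the text.

```python
import numpy as np

def box_draper_gap(points):
    """Return max | (X1'X1)^-1 X1'X2 - W11^-1 W12 | for fitting a line to a
    quadratic over R = [-1, 1] with uniform weight, i.e. condition (1.1)."""
    x = np.asarray(points, dtype=float)
    X1 = np.column_stack([np.ones_like(x), x])   # terms of the fitted line
    X2 = (x**2).reshape(-1, 1)                   # omitted quadratic term
    # region moments: W11 = Omega * int x1 x1' dx, W12 = Omega * int x1 x2' dx
    W11 = np.array([[1.0, 0.0], [0.0, 1.0 / 3.0]])
    W12 = np.array([[1.0 / 3.0], [0.0]])
    lhs = np.linalg.solve(X1.T @ X1, X1.T @ X2)
    rhs = np.linalg.solve(W11, W12)
    return float(np.abs(lhs - rhs).max())

a = 1 / np.sqrt(3)                  # two symmetric points with mean x^2 = 1/3
print(box_draper_gap([-a, a]))      # satisfies (1.1): gap is zero up to rounding
print(box_draper_gap([-1.0, 1.0]))  # all mass at the boundary violates (1.1)
```

The two-point design at ±1/√3 matches the design moments to the region moments, which is exactly what (1.1) demands here.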
Karson, Manson and Hader (1969) presented a different approach to this problem. Accepting the Box and Draper result that B is the dominating factor in mean square error, they minimized B by choice of estimator rather than by choice of design. The minimum bias estimator of η(x) which they obtained was η̂(x) = x₁'A(X'X)⁻X'Y, where A is a matrix of known constants, A = [I : W₁₁⁻¹W₁₂]. The symbol S⁻ will denote a generalized inverse of a square matrix S, which satisfies the equation S = SS⁻S. The estimator, η̂(x), achieves minimum B for any design for which Aθ is estimable, where θ' = (θ₁' : θ₂'). Using the design flexibility remaining, they found designs with smaller J than is given by designs which satisfy the Box-Draper conditions.

The following section will generalize the Karson, Manson and Hader method to a larger class of models. Some results which hold in this more general situation will be given. This approach will be compared with the Box and Draper method. As a direct application, a brief discussion of the problem of approximating the ratio of polynomials by simple polynomials will be presented. The technique developed will be illustrated with examples. For these examples, designs satisfying the bias and variance criteria will be developed.
2. DEVELOPMENT OF THE PROBLEM

Let (X, 𝒜, μ) be a measure space where X is a compact set and μ a finite measure defined on the σ-algebra 𝒜 of subsets of X. Let L₂(X, 𝒜, μ) be the class of all square integrable, real valued functions defined a.e. on X. Consider n linearly independent continuous functions f₁, f₂, ..., fₙ in L₂(X, 𝒜, μ). Let θ₁, θ₂, ..., θₙ be n unknown constants in some vector space Θ over E¹, the field of real numbers. (In the sequel Eᵐ will denote the m-fold cartesian product of the set of real numbers.) These quantities will be written as vectors: f' = (f₁, f₂, ..., fₙ) and θ' = (θ₁, θ₂, ..., θₙ).

If X is a k-dimensional space, its elements are actually k-tuples (x₁, x₂, ..., x_k) and any function defined on X is a k-variable function. However, an element of the k-dimensional space X shall be denoted by one letter with or without subscript, x. Over the space X, a hypersurface in a (k+1)-dimensional space is assumed to be described by

    η(x) = Σⱼ₌₁ⁿ θⱼfⱼ(x) = θ'f(x) ,                              (2.1)

where the fⱼ (j = 1, 2, ..., n) are known functions. The set X will also be regarded as the operability region, i.e., for each x ∈ X it is possible to obtain a value of the random variable Y(x) which has mean η(x). The random variables Y(xᵢ) and Y(xⱼ) are assumed to have a known correlation apart from a constant σ². If several xᵢ's are equal, then the corresponding Y(xᵢ)'s will be treated as different random variables.

A "close approximation" of the true response given in equation (2.1) is intuitively appealing. In situations where the estimation of all parameters θⱼ is expensive, difficult or even impossible, a convenient approach is to approximate the assumed true model by a "relatively simpler one". Thus, the experimenter selects a class of such models:

    C = {η₀: η₀(x) = Σⱼ₌₁ˢ αⱼgⱼ(x)} with s ≤ n .

The gⱼ (j = 1, 2, ..., s) are linearly independent, continuous, real valued functions in L₂, and the parameters αⱼ ∈ Θ (j = 1, 2, ..., s) are to be estimated. These quantities will be written as vectors: g' = (g₁, g₂, ..., gₛ) and α' = (α₁, α₂, ..., αₛ). In such an approach, the error of approximation stems from two sources: the sampling error and the bias error due to the failure of the chosen η₀(x) ∈ C to exactly represent the true response η(x). Obviously, one would like to minimize both errors. However, the minimization of the sampling error must take place in the estimation space, by estimation procedures, and in the operability region, X, by choice of design. The minimization of the bias error has to be accomplished in the parameter space, Θ, and in L₂ space by choice of η₀(x) ∈ C. Moreover, the sampling error is defined only after the choice of η₀(x) ∈ C is made. Thus, the only alternative is the conditional minimization of the sampling error, i.e., the minimization of the sampling error given that the minimum bias has been achieved within C.
2.1 Choice of Minimum Bias Model in C

Let B(η₀) denote the averaged squared bias over X due to the choice of η₀(x) ∈ C, namely,

    B(η₀) = Ω ∫ [η₀(x) − η(x)]² dμ(x) ,                          (2.2)

where Ω⁻¹ = ∫_X dμ(x). The minimization

    B = minimum over η₀ ∈ C of B(η₀)

is equivalent to

    B = minimum over (α, g) ∈ Θˢ × L₂ˢ of B(α, g) ,

where (α, g) is an element of the product space Θˢ × L₂ˢ; Θˢ is the s-fold cartesian product of Θ, and L₂ˢ the s-fold cartesian product of L₂. The minimization of B(α, g) over Θˢ × L₂ˢ is a formidable task if at all possible. Even if one reduces the domain of minimization to Θˢ × Mₙˢ, where Mₙ = Mₙ(f₁, f₂, ..., fₙ) is the space spanned by the functions fᵢ (i = 1, 2, ..., n), the task is not materially simplified. To reduce the minimization problem to workable size, it becomes necessary to fix a set of functions g' = (g₁, g₂, ..., gₛ) and minimize over Θˢ only. Then B_g will denote the fact that the gᵢ (i = 1, 2, ..., s) may not be an optimum choice. The functions η, η₀, fⱼ, gⱼ are defined on X and their integration will be over X unless otherwise specified. Henceforth, the variate x and the domain of integration X shall be omitted.

For a given set of functions (g₁, g₂, ..., gₛ), equation (2.2) is written as

    B_g(η₀) = Ω ∫ (α'g − θ'f)² dμ ,                              (2.3)

where

    W_gg = Ω ∫ gg' dμ ,  W_gf = Ω ∫ gf' dμ ,  W_ff = Ω ∫ ff' dμ  (2.4)

are real valued matrices. The following theorem is appropriate at this point.

Theorem 2.1
A necessary and sufficient condition for the matrix W_gg to be positive definite is that the gᵢ (i = 1, 2, ..., s) be linearly independent. (For a proof, see Appendix A.1.)

Corollary 2.1
As defined in equation (2.4), the matrix
(i) W_ff is positive definite,
(ii) W_gf'W_gg⁻¹W_gf is positive semi-definite,
(iii) W_ff − W_gf'W_gg⁻¹W_gf is positive semi-definite.

Using Theorem 2.1, equation (2.3) may be written as

    B_g(η₀) = (α − W_gg⁻¹W_gf θ)'W_gg(α − W_gg⁻¹W_gf θ)
              + θ'(W_ff − W_gf'W_gg⁻¹W_gf)θ .                    (2.5)

The second term of the R.H.S. of equation (2.5) is constant in α. Therefore, from Theorem 2.1, it follows that

    min B_g = minimum over α ∈ Θˢ of B_g(η₀) = θ'(W_ff − W_gf'W_gg⁻¹W_gf)θ

if and only if

    α = W_gg⁻¹W_gf θ = Aθ ,

where A = W_gg⁻¹W_gf. Moreover, min B_g is unique. Choosing the set of functions gᵢ (i = 1, 2, ..., s) is equivalent to selecting a subclass C_g ⊂ C, where C_g = {η₀: η₀ = α'g, g is given} with s ≤ n. The minimum bias model (min B_g model) in C_g which is used to approximate η = θ'f is written as

    η₀ = α'g ,  α = Aθ .                                         (2.6)

The choice of the functions gᵢ (i = 1, 2, ..., s) as a subset of (f₁, f₂, ..., fₙ) is a natural and appealing one, and if the set of functions gᵢ is a subset of (f₁, f₂, ..., fₙ) so that g_k = f_k for k = 1, 2, ..., s, then the matrix A always takes the form A = [Iₛ : W₁₁⁻¹W₁₂], where W₁₁ = W_gg and W₁₂ = Ω ∫ f₁f₂' dμ with f₂' = (f_{s+1}, f_{s+2}, ..., fₙ).
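The construction α = Aθ = W_gg⁻¹W_gf θ is easy to carry out numerically. The sketch below is an illustrative reconstruction (not code from the report), assuming X = [−1, 1] with uniform μ, f' = (1, x, x², x³) and g' = (1, x); it builds the moment matrices of (2.4) by quadrature and returns the min B_g coefficients. For η = x³ the min B_g linear model is (3/5)x, the L₂ projection of x³ onto span{1, x}.

```python
import numpy as np

def min_bias_coefficients(theta, n_grid=20001):
    """alpha = W_gg^{-1} W_gf theta for f' = (1, x, x^2, x^3), g' = (1, x)
    on X = [-1, 1] with uniform measure (Omega = 1/2), as in eq. (2.6)."""
    x = np.linspace(-1.0, 1.0, n_grid)
    F = np.vstack([x**j for j in range(4)])      # rows: f_1, ..., f_4
    G = np.vstack([x**j for j in range(2)])      # rows: g_1, g_2

    def avg(rows_a, rows_b):
        # Omega * int a(x) b(x)' dmu via the trapezoidal rule
        w = np.full(n_grid, x[1] - x[0])
        w[0] *= 0.5
        w[-1] *= 0.5
        return 0.5 * (rows_a * w) @ rows_b.T

    Wgg, Wgf = avg(G, G), avg(G, F)
    return np.linalg.solve(Wgg, Wgf @ np.asarray(theta, dtype=float))

alpha = min_bias_coefficients([0.0, 0.0, 0.0, 1.0])   # eta(x) = x^3
# alpha is close to (0, 3/5), the L2 projection of x^3 onto span{1, x}
```

Any quadrature rule of sufficient accuracy would do here; the trapezoidal rule is only a convenient choice.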
2.2 Estimation of the min B_g Model

A question of great interest is the estimability of the min B_g model given in equation (2.6). Assume that a vector of N random variables (observations) Y'(x) = [Y(x₁), Y(x₂), ..., Y(x_N)] satisfies

    (1) E[Y(x)] = Fθ
    (2) E{[Y(x) − η(x)][Y(x) − η(x)]'} = Σσ² .

F is an N×n matrix whose (i,j)th element is fⱼ(xᵢ), Σ is an N×N positive definite matrix of known constant elements, and σ² is a real, positive constant. The Gauss-Markoff Theorem (Scheffé, 1967) applies, i.e., α = Aθ is linearly estimable if and only if the design matrix F ∈ ℑ_MB, where

    ℑ_MB = {F: Aθ is estimable} .

This class ℑ_MB is also the class of designs which achieve

    min B_g = θ'(W_ff − W_gf'W_gg⁻¹W_gf)θ .

An equivalent necessary and sufficient condition for estimability of Aθ is given in the following theorem.

Theorem 2.2
The vector of parameters α = Aθ is linearly estimable if and only if A' = V₁T. V₁ is an n×r matrix whose columns are the orthonormal characteristic vectors corresponding to the r non-zero characteristic roots of (F'Σ⁻¹F), with s ≤ r ≤ n, and T is an r×s matrix of full rank. (For a proof, see Appendix A.2.)

This theorem essentially brings out the fact that the process of linear estimation takes place in an r-dimensional subspace of the n-dimensional space spanned by the columns of (F'Σ⁻¹F). The following theorem is then a natural one.

Theorem 2.3
If α = Aθ is estimable, where A = W_gg⁻¹W_gf is a matrix of full rank, then the matrix A(F'Σ⁻¹F)⁻A' is non-singular. (For a proof, see Appendix A.3.)

Moreover, the columns of A' are in the space generated by the columns of V₁ (Theorem 2.2), which is the space generated by the columns of (F'Σ⁻¹F)⁻ (Rao, 1967, p. 184). Thus, it can be shown (see Graybill, 1969) that A(F'Σ⁻¹F)⁻A' is invariant for any generalized inverse of (F'Σ⁻¹F).

For a fixed F ∈ ℑ_MB the BLUE of α is

    α̂ = A(F'Σ⁻¹F)⁻F'Σ⁻¹Y(x)

and the minimum variance linear estimator of η when using the min B_g model is

    η̂₀(F) = g'A(F'Σ⁻¹F)⁻F'Σ⁻¹Y(x) .

Now Var[η̂₀(F)] = g'A(F'Σ⁻¹F)⁻A'g σ², which, when averaged over the region X, is denoted by V_F, namely,

    V_F = (NΩ/σ²) ∫ Var[η̂₀(F)] dμ = N Tr[A(F'Σ⁻¹F)⁻A'W_gg] .

The notation Tr[H] will be used to denote the trace of the square matrix H. A great deal of design flexibility remains at this point. One way to use this flexibility is to achieve

    min V = minimum over F ∈ ℑ_MB of V_F ,

i.e., to obtain min V having achieved min B_g. This design flexibility may also be used to satisfy other criteria, as will be shown in Section 3.
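To see the estimator α̂ = A(F'Σ⁻¹F)⁻F'Σ⁻¹Y(x) at work, the hypothetical sketch below (not code from the report) takes Σ = I, the same f and g as in the polynomial illustration above, and a 3-run design chosen so that Aθ is estimable even though θ itself is not. With noiseless responses the estimate reproduces Aθ, which is the practical content of the estimability condition.

```python
import numpy as np

# full model f' = (1, x, x^2, x^3), simpler model g' = (1, x) on X = [-1, 1]
theta = np.array([1.0, 2.0, 0.5, -1.0])
# A = W_gg^{-1} W_gf from the uniform-measure moments of eq. (2.4)
Wgg = np.array([[1.0, 0.0], [0.0, 1.0 / 3.0]])
Wgf = np.array([[1.0, 0.0, 1.0 / 3.0, 0.0],
                [0.0, 1.0 / 3.0, 0.0, 1.0 / 5.0]])
A = np.linalg.solve(Wgg, Wgf)

# 3-run design {-sqrt(0.6), 0, +sqrt(0.6)}: the polynomial x^3 - 0.6 x
# vanishes at every design point, and one can check that this puts the
# rows of A in the row space of F, so A theta is estimable even though
# theta itself is not (N = 3 < n = 4).
xs = np.array([-np.sqrt(0.6), 0.0, np.sqrt(0.6)])
F = np.column_stack([xs**j for j in range(4)])
Y = F @ theta                                       # noiseless observations
alpha_hat = A @ np.linalg.pinv(F.T @ F) @ F.T @ Y   # Sigma = I case of the BLUE
# alpha_hat reproduces A @ theta up to rounding
```

The Moore-Penrose pseudoinverse is one valid choice of generalized inverse; by the invariance result cited above, any other choice gives the same estimate.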
2.3 Performance of the min B_g Approach

To approximate a given function, say η(x), on the basis of N experimental observations, various methods aimed at satisfying both bias and variance criteria can be presented. These methods could thus be compared on the basis of the compromise they offer between variance and bias. Consider the situation where the set of functions gᵢ (i = 1, 2, ..., s) is a subset of the functions fᵢ (i = 1, 2, ..., n), so that g_k = f_k for k = 1, 2, ..., s. The full model is then written as

    η = θ₁'f₁ + θ₂'f₂ ,                                          (2.7)

where θ₁'f₁ contains the first s terms and θ₂'f₂ the remaining (n−s) terms. The design matrix F = [F₁ : F₂] is written so as to match the partitioning of the model. The following classes of designs are defined:

    ℑ_FM = {F: θ is estimable}
    ℑ_MB = {F: Aθ is estimable}                                  (2.8)
    ℑ_BD = {F: θ₁ is estimable and F satisfies the Box-Draper
            conditions of equation (1.1)} .

Also, the following notation will be used:

    V*(F) = (NΩ/σ²) ∫ Var[η̂₀(F)] dμ for F ∈ ℑ*                  (2.9)

and

    V* = minimum over F ∈ ℑ* of V*(F) .                          (2.10)

Lemma 2.1
If A = [A₁ : A₂] and A₁ is an s×s matrix of full rank, then a necessary condition for Aθ to be estimable is that F₁ be of full rank. (For a proof, see Appendix A.4.) In particular, this lemma applies when A = [Iₛ : W₁₁⁻¹W₁₂].

The following lemma establishes a relationship between the classes of designs defined in (2.8).

Lemma 2.2
Using the notation of equation (2.8),
(i) ℑ_FM ⊆ ℑ_MB ,
(ii) ℑ_BD ⊆ ℑ_MB ,
(iii) when N = s, ℑ_MB = ℑ_BD .
(For a proof, see Appendix A.5.)

Theorem 2.4
Using the notation defined in equations (2.9) and (2.10),
(i) V_FM ≥ V_MB ,
(ii) V_BD = s ,
(iii) V_MB ≤ V_BD .
(For a proof, see Appendix A.6.)

From (iii) of Lemma 2.2, strict equality is achieved in (iii) of Theorem 2.4 when N = s and ℑ_MB ≠ ∅. Moreover, many examples exist where the strict inequality holds in (iii) of Theorem 2.4 (see Karson, Manson and Hader, 1969).

Consider the situation where N = s; then the min B_g model cannot contain more than s parameters. A logical method of obtaining protection against bias would be to add more terms to the fitted model. This is obviously not possible when N = s. An alternative method of obtaining additional protection against bias would be to add more terms to the assumed true model and obtain minimum bias protection via the fitted model. Unfortunately, this extra bias protection is accompanied by an increase in variance error, as shown in the following theorem.

Theorem 2.5
To the model given in equation (2.7), η₁ = θ₁'f₁ + θ₂'f₂, add t more terms, say θ₃'f₃. The new model is then

    η₂ = θ₁'f₁ + θ₂'f₂ + θ₃'f₃ .

Let C_g = {η₀: η₀ = α'g, g_k = f_k for k = 1, 2, ..., s} be the class of simpler models. If Vᵢ denotes the averaged variance of the min V|min B_g linear estimator of ηᵢ for i = 1, 2, then V₁ ≤ V₂. (For a proof, see Appendix A.7.)
3. APPROXIMATING RATIONAL FUNCTIONS BY POLYNOMIAL FUNCTIONS

A polynomial form in x over E¹ is

    Pₙ(x:a) = Σᵢ₌₀ⁿ aᵢxⁱ

with aₙ ≠ 0 if n > 0. The degree of the polynomial form is n and aᵢ ∈ E¹ (i = 0, 1, ..., n). A rational form is defined to be the ratio Pₙ(x:a)/Pₘ(x:b), where Pₙ(x:a) and Pₘ(x:b) ≠ 0 are polynomial forms in x of degree n and m respectively. If x is regarded as a variable with a given domain, these forms are called functions, namely, polynomial functions and rational functions. These definitions can be generalized to functions of several variables. For example, a polynomial function in (x₁, x₂, ..., xₙ) is defined recursively as a polynomial in xₙ over the ring D[x₁, x₂, ..., x_{n−1}] of polynomials. (For more details, see Birkhoff and MacLane, 1965.) The same notation will be used for a polynomial in one or in several variables, recalling that a power of x may actually represent a product such as Πᵢ₌₁ᵏ xᵢ^{tᵢ} when dealing with a k-variable polynomial function.

Assume that the following functional relationship between the response η and the factor x holds:

    η(x) = R_{n,m}(x:θ,γ) = Pₙ(x:θ)/Pₘ(x:γ) for all x ∈ X ,

where the operability region X is a closed bounded set of Eᵏ, and

    Pₙ(x:θ) = Σⱼ₌₀^q θⱼx^{tⱼ} ,  Pₘ(x:γ) = Σⱼ₌₀^u γⱼx^{vⱼ}

are real valued polynomial functions of degree n and m respectively. The class of relatively simpler models to be considered is

    C_g = {η₀: η₀ = Pₛ(x:α) = Σⱼ₌₀ˢ α_{sⱼ}x^{sⱼ}} with s ≤ q ,

where the sⱼ (j = 0, 1, ..., s) are known integers and the α_{sⱼ} are unknown constants in E¹. No claim is made concerning any optimality criterion in the choice of C_g; this class of models is simple and illustrates the point that the set of functions gᵢ (i = 0, 1, ..., s) need not be a subset of the functions fᵢ (i = 0, 1, ..., q).

Using notation similar to that used in Section 2,

    B_g(η₀) = Ω ∫ {Pₛ(x:α) − R_{n,m}(x:θ,γ)}² dx

with Ω⁻¹ = ∫ dx, and where the measure μ is simply the usual Lebesgue measure defined on the Borel σ-algebra of subsets of X. If, for the appropriate range of j, one defines

    gⱼ(x) = x^{sⱼ} ,  fⱼ(x) = x^{tⱼ}/Pₘ(x:γ) ,

then the results of Section 2 apply when restricting x^{tⱼ}/Pₘ(x:γ) ∈ L₂ (j = 0, 1, ..., q) and γ to be known. The matrix A = W_gg⁻¹W_gf, where W_gg is an (s+1)×(s+1) matrix whose (i,j)th element is Ω ∫ x^{sᵢ+sⱼ} dx and W_gf is an (s+1)×(q+1) matrix whose (i,j)th element is Ω ∫ x^{sᵢ+tⱼ}/Pₘ(x:γ) dx. The minimum averaged squared bias is

    min B_g = θ'(W_ff − W_gf'W_gg⁻¹W_gf)θ ,

where W_ff is a (q+1)×(q+1) matrix whose (i,j)th element is Ω ∫ x^{tᵢ+tⱼ}/[Pₘ(x:γ)]² dx.

Assume that the vector of the N responses Y'(x) = [Y(x₁), Y(x₂), ..., Y(x_N)] satisfies the conditions

    (1) E[Y(x)] = Rθ
    (2) E{[Y(x) − η(x)][Y(x) − η(x)]'} = Iσ² ,

where R is an N×(q+1) matrix whose (i,j)th element is xᵢ^{tⱼ}/Pₘ(xᵢ:γ) and σ² is a real, positive constant. For any design matrix R ∈ ℑ_MB (the class of designs for which min B_g is achieved) the BLUE of α is

    α̂ = A(R'R)⁻R'Y(x)

and the variance of η̂₀ is

    Var(η̂₀) = g'A(R'R)⁻A'g σ² , with g' = (x^{s₀}, x^{s₁}, ..., x^{sₛ}) .

For a fixed design matrix R ∈ ℑ_MB the averaged Var(η̂₀) over X is

    V_R = N Tr[A(R'R)⁻A'W_gg] .

There still remains the flexibility of choosing R. One way to take advantage of this flexibility is to choose R so as to achieve minimum V_R, or simply to achieve V_R < s (when possible), R ∈ ℑ_MB, to give improvement over those designs which satisfy the Box-Draper conditions of equation (1.1). Moreover, this flexibility may be used to minimize V_R within the class of designs which have equal spacing, as is illustrated in Figure 3.2.
As illustration, let

    η(x) = R_{2,1}(x:θ,γ) = (θ₀ + θ₁x + θ₂x²)/(γ + x)            (3.1)

be the rational function to be approximated, where θ₀, θ₁, θ₂ are unknown parameters in E¹, x is the controllable variable, and x ∈ X = [−1, 1], the operability region. In order to use a linear estimation procedure and to avoid undefined terms, it is necessary to restrict γ to be known and |γ| > 1. The class of simpler models is taken to be C_g = {η₀: η₀ = P₁(x:α) = α₀ + α₁x}. The matrix A = W_gg⁻¹W_gf, where

    W_gg = [ 1    0  ]
           [ 0   1/3 ]

and

    W_gf = ½ [ Z        2 − γZ      γ²Z − 2γ         ]
             [ 2 − γZ   γ²Z − 2γ   2/3 + 2γ² − γ³Z  ]

with Z = log_e(γ+1) − log_e(γ−1). Thus, B_g depends on the specified value of γ, and its minimum may be written as

    min B_g(γ) = θ'(W_ff − W_gf'W_gg⁻¹W_gf)θ ,

where

    W_ff = ½ [ 2/Z₁               Z − 2γ/Z₁           2 − 2γZ + 2γ²/Z₁          ]
             [ Z − 2γ/Z₁          2 − 2γZ + 2γ²/Z₁    3γ²Z − 4γ − 2γ³/Z₁        ]
             [ 2 − 2γZ + 2γ²/Z₁   3γ²Z − 4γ − 2γ³/Z₁  2/3 + 6γ² − 4γ³Z + 2γ⁴/Z₁ ]

with Z₁ = γ² − 1. Numerical values of min B_g(γ) can be computed for any value of θ. For example, if θ' = (1, 2, 4), min B_g(γ) has been computed for different values of γ as shown in Table 3.1.
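The closed forms above make min B_g(γ) a few lines of code. The following sketch is an illustrative reconstruction (not code from the report) of the computation behind Table 3.1.

```python
import numpy as np

def min_bias_linear(gamma, theta):
    """min B_g(gamma) when eta = (t0 + t1*x + t2*x^2)/(gamma + x) on [-1, 1]
    is approximated by the minimum bias linear model a0 + a1*x, using the
    closed-form moment matrices of Section 3."""
    Z = np.log(gamma + 1) - np.log(gamma - 1)
    Z1 = gamma**2 - 1
    # W_gg = (1/2) * int_{-1}^{1} g g' dx for g' = (1, x)
    Wgg = np.array([[1.0, 0.0], [0.0, 1.0 / 3.0]])
    # W_gf(i, j) = (1/2) * int x^{i+j} / (gamma + x) dx
    m = [Z, 2 - gamma * Z, gamma**2 * Z - 2 * gamma,
         2.0 / 3.0 + 2 * gamma**2 - gamma**3 * Z]
    Wgf = 0.5 * np.array([[m[0], m[1], m[2]],
                          [m[1], m[2], m[3]]])
    # W_ff(i, j) = (1/2) * int x^{i+j} / (gamma + x)^2 dx
    w = [2 / Z1,
         Z - 2 * gamma / Z1,
         2 - 2 * gamma * Z + 2 * gamma**2 / Z1,
         3 * gamma**2 * Z - 4 * gamma - 2 * gamma**3 / Z1,
         2.0 / 3.0 + 6 * gamma**2 - 4 * gamma**3 * Z + 2 * gamma**4 / Z1]
    Wff = 0.5 * np.array([[w[0], w[1], w[2]],
                          [w[1], w[2], w[3]],
                          [w[2], w[3], w[4]]])
    th = np.asarray(theta, dtype=float)
    M = Wff - Wgf.T @ np.linalg.solve(Wgg, Wgf)
    return float(th @ M @ th)

theta = (1.0, 2.0, 4.0)
for g in (1.01, 1.50, 5.00):
    print(g, min_bias_linear(g, theta))   # agrees with Table 3.1 to about 1e-3
```

The rapid drop of min B_g(γ) as γ moves away from the boundary of the operability region matches the discussion of the design tables below.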
Table 3.1  min B_g(γ) when θ' = (1,2,4) and η₀ = α₀ + α₁x

    γ        min B_g(γ)
    1.01     320.9836
    1.50       1.1653
    5.00       0.0510
Assume that the N responses satisfy the conditions

    (1) E[Y(x)] = Rθ
    (2) E{[Y(x) − η(x)][Y(x) − η(x)]'} = Iσ² ,

where R is an N×3 matrix whose (i,j)th element is xᵢ^{j−1}/(γ + xᵢ) and σ² is a real, positive constant. For a given matrix R such that α = Aθ is estimable, the averaged Var[η̂₀] over [−1, 1] is V_R.

Let 𝒟 be the class of 3, 4 and 5 point designs which are symmetric with respect to the origin. The N observations are spaced as follows: N = N₀ + 2N₁ + 2N₂, where N₀ is the number of observations taken at x = 0, N₁ is the number of observations at x = ±ℓ₁, and N₂ is the number of observations at x = ±ℓ₂. The above designs can be better described by the following schematic presentation:

      N₂        N₁        N₀        N₁        N₂
    ───┼─────────┼─────────┼─────────┼─────────┼───→ x          (3.2)
      −ℓ₂       −ℓ₁        0        +ℓ₁       +ℓ₂

with −1 ≤ −ℓ₂ < −ℓ₁ < 0 < ℓ₁ < ℓ₂ ≤ +1.
For this class of designs, minimum V_R, R ∈ 𝒟, has been computed and the min V|min B_g designs tabulated for N = 3, 4, ..., 15. These designs depend on the particular value of γ. Tables 3.2, 3.3 and 3.4 give such designs for γ = 1.01, 1.50 and 5.00 respectively.

These tables indicate that more and more design points are forced to the boundary of the operability region X as the value of γ increases, i.e., as γ increases the bias decreases and therefore has less and less influence on the choice of design.
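The minimization over 𝒟 can be sketched numerically. The code below is an illustrative reconstruction (the grid resolution and the use of a pseudoinverse are implementation choices, not from the report): it evaluates V_R = N Tr[A(R'R)⁻A'W_gg] for the 3-point design {−ℓ₁, 0, +ℓ₁} at γ = 1.01 and scans ℓ₁. The minimum falls below s, in line with the first row of Table 3.2.

```python
import numpy as np

def avg_variance(gamma, points):
    """V_R = N * Tr[A (R'R)^- A' W_gg] for the min V | min B_g linear
    estimator of Section 3, with Sigma = I and design points in [-1, 1]."""
    x = np.asarray(points, dtype=float)
    N = len(x)
    # full-model regressors f_j(x) = x^j / (gamma + x), j = 0, 1, 2
    R = np.column_stack([x**j / (gamma + x) for j in range(3)])
    Z = np.log(gamma + 1) - np.log(gamma - 1)
    Wgg = np.array([[1.0, 0.0], [0.0, 1.0 / 3.0]])
    m = [Z, 2 - gamma * Z, gamma**2 * Z - 2 * gamma,
         2.0 / 3.0 + 2 * gamma**2 - gamma**3 * Z]
    Wgf = 0.5 * np.array([[m[0], m[1], m[2]],
                          [m[1], m[2], m[3]]])
    A = np.linalg.solve(Wgg, Wgf)
    G = np.linalg.pinv(R.T @ R)          # generalized inverse of R'R
    return float(N * np.trace(A @ G @ A.T @ Wgg))

# 3-point symmetric design {-l, 0, +l} at gamma = 1.01; cf. Table 3.2, N = 3
gamma = 1.01
grid = np.linspace(0.05, 1.0, 96)
V = [avg_variance(gamma, [-l, 0.0, l]) for l in grid]
best = float(grid[int(np.argmin(V))])    # minimizing l_1, below the boundary
```

A finer grid, or a one-dimensional optimizer, would locate the tabulated optimum more precisely; the coarse scan is enough to exhibit a design with V_R below s.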
For the following class of simpler models,

    C_g = {η₀: η₀ = P₂(x:α) = α₀ + α₁x + α₂x²} ,

the moment matrices become

    W_gg = [ 1     0    1/3 ]
           [ 0    1/3    0  ]
           [ 1/3   0    1/5 ]

and W_gf is the symmetric 3×3 matrix whose (i,j)th element is ½ ∫₋₁¹ x^{i+j}/(γ+x) dx (i, j = 0, 1, 2). For the same set of parameters used in the previous example, i.e., θ' = (1, 2, 4), the values of min B_g(γ) appear in Table 3.5.

From Tables 3.1 and 3.5 one notices that, for a fixed value of γ, min B_g(γ) when approximating η by a linear model is larger than when using a quadratic model. In general it is easy to show that B_g(L,γ) ≥ B_g(Q,γ), where B_g(L,γ) and B_g(Q,γ) denote min B_g(γ) when approximating η by a linear and a quadratic model respectively.

Using the class of designs described in (3.2), minimum V_R, R ∈ 𝒟, has been computed and the min V|min B_g designs appear in Tables 3.6, 3.7 and 3.8.
Table 3.2  min V|min B_g designs when η₀ = α₀ + α₁x and γ = 1.01

    N    N₀   N₁    ℓ₁      N₂    ℓ₂      min V    p
    3    1    1    0.8901   0    ------   1.8362   3
    4    0    1    0.5788   1    0.9160   1.7157   4
    5    1    1    0.7136   1    0.9245   1.7099   5
    6    0    2    0.6972   1    0.9398   1.6503   4
    7    1    2    0.7518   1    0.9425   1.6669   5
    8    0    1    0.7395   3    0.9534   1.6206   4
    9    1    3    0.7713   1    0.9543   1.6397   5
    10   0    4    0.7634   1    0.9629   1.6015   4
    11   1    4    0.7847   1    0.9629   1.6202   5
    12   0    5    0.7802   1    0.9702   1.5873   4
    13   1    5    0.7954   1    0.9698   1.6049   5
    14   0    6    0.8673   1    1.0000   1.5730   4
    15   1    6    0.8739   1    1.0000   1.5923   5

    p = number of distinct design points
Table 3.3  min V|min B_g designs when η₀ = α₀ + α₁x and γ = 1.50

    N    N₀   N₁    ℓ₁      N₂    ℓ₂      min V    p
    3    1    1    0.7908   0    ------   1.8748   3
    4    0    1    0.3874   1    0.8428   1.8583   4
    5    1    1    0.5602   1    0.8711   1.8574   5
    6    0    2    0.5013   1    0.9130   1.8415   4
    7    1    2    0.5846   1    0.9276   1.8453   5
    8    0    3    0.5478   1    0.9681   1.8298   4
    9    1    3    0.6022   1    0.9780   1.8349   5
    10   0    4    0.5760   1    1.0000   1.8203   4
    11   1    4    0.6155   1    1.0000   1.8263   5
    12   0    5    0.5911   1    1.0000   1.8176   4
    13   1    5    0.6226   1    1.0000   1.8236   5
    14   0    6    0.6008   1    1.0000   1.8194   4
    15   1    6    0.6272   1    1.0000   1.8246   5

    p = number of distinct design points
Table 3.4  min V|min B_g designs when η₀ = α₀ + α₁x and γ = 5.00

    N    N₀   N₁    ℓ₁      N₂    ℓ₂      min V    p
    3    1    1    0.8106   0    ------   1.8772   3
    4    0    1    0.0889   1    1.0000   1.7956   4
    5    1    1    0.3993   1    1.0000   1.8147   5
    6    2    1    0.6156   1    1.0000   1.8255   5
    7    3    1    0.8346   1    1.0000   1.8193   5
    8    4    2    1.0000   0    ------   1.7939   3
    9    3    1    0.3656   2    1.0000   1.8040   5
    10   4    1    0.6038   2    1.0000   1.8112   5
    11   5    2    0.9814   1    1.0000   1.8106   5
    12   6    3    1.0000   0    ------   1.7939   3
    13   5    1    0.3236   3    1.0000   1.8001   5
    14   6    1    0.5871   3    1.0000   1.8281   5
    15   7    4    1.0000   0    ------   1.8034   3

    p = number of distinct design points
Table 3.5  min B_g(γ) when θ' = (1,2,4) and η₀ = α₀ + α₁x + α₂x²

    γ        min B_g(γ)
    1.01     251.6719
    1.50       0.0754
    5.00       0.0005

Each of Tables 3.6, 3.7 and 3.8 corresponds to a particular value of γ, i.e., γ = 1.01, 1.50 and 5.00 respectively. In the Box and Draper approach, when min B_g is achieved, min V = s (the number of parameters in the reduced model). The design flexibility given by the min B_g method allows one to always obtain min V ≤ s [Theorem 2.4 (iii)] and, in many situations, min V < s for infinitely many designs. As illustration, variance contours have been drawn for six designs, which are shown in Figures 3.1 through 3.6. L(N,N₀,N₁,N₂; γ) denotes the min V|min B_g design with a total of N observations which has been used to approximate R_{2,1}(x:θ,γ) when η₀ is a linear model. Similar notation is used when η₀ is a quadratic model, i.e., Q(N,N₀,N₁,N₂; γ). Another way of taking advantage of the design flexibility offered by the min B_g method is to choose the min V|min B_g design within the class of designs having equal spacing, which is given by the relations ℓ₂ = 2ℓ₁ or ℓ₁ = 2ℓ₂, as shown in Figure 3.2.
Table 3.6  min V|min B_g designs when η₀ = α₀ + α₁x + α₂x² and γ = 1.01

    N    N₀   N₁    ℓ₁      N₂    ℓ₂      min V    p
    3    1    1    0.9412   0    ------   3.3145   3
    4    0    1    0.7201   1    0.9730   2.1807   4
    5    1    1    0.7788   1    0.9768   2.0986   5
    6    0    2    0.8585   1    1.0000   1.8194   4
    7    1    2    0.8743   1    1.0000   1.8337   5
    8    0    3    0.8623   1    1.0000   1.7000   4
    9    1    3    0.8741   1    1.0000   1.7238   5
    10   0    4    0.8645   1    1.0000   1.6423   4
    11   1    4    0.8739   1    1.0000   1.6661   5
    12   0    5    0.8659   1    1.0000   1.6088   4
    13   1    5    0.8736   1    1.0000   1.6310   5
    14   0    6    0.8667   1    1.0000   1.5874   4
    15   1    6    0.8732   1    1.0000   1.6077   5

    p = number of distinct design points
Table 3.7  min V|min B_g designs when η₀ = α₀ + α₁x + α₂x² and γ = 1.50

    N    N₀   N₁    ℓ₁      N₂    ℓ₂      min V    p
    3    1    1    0.9409   0    ------   2.6938   3
    4    0    1    0.2723   1    1.0000   2.3120   4
    5    1    1    0.4662   1    1.0000   2.2474   5
    6    0    2    0.4576   1    1.0000   2.2270   4
    7    1    2    0.5385   1    1.0000   2.2559   5
    8    0    3    0.5093   1    1.0000   2.2733   4
    9    1    2    0.3820   2    1.0000   2.2633   5
    10   0    3    0.4014   2    1.0000   2.2351   4
    11   1    3    0.4613   2    1.0000   2.2322   5
    12   0    4    0.4576   2    1.0000   2.2270   4
    13   1    4    0.4996   2    1.0000   2.2392   5
    14   0    5    0.4892   2    1.0000   2.2442   4
    15   1    4    0.4199   3    1.0000   2.2384   5

    p = number of distinct design points
Table 3.8  min V|min B_g designs when η₀ = α₀ + α₁x + α₂x² and γ = 5.00

    N    N₀   N₁    ℓ₁      N₂    ℓ₂      min V    p
    3    1    1    1.0000   0    ------   2.4215   3
    4    0    1    0.8888   1    1.0000   2.1520   4
    5    1    1    0.1970   1    1.0000   2.2268   5
    6    2    1    0.5122   1    1.0000   2.3253   5
    7    3    2    1.0000   0    ------   2.1931   3
    8    4    2    1.0000   0    ------   2.1452   3
    9    5    2    1.0000   0    ------   2.1696   3
    10   4    1    0.3348   2    1.0000   2.2460   5
    11   5    3    1.0000   0    ------   2.1651   3
    12   6    3    1.0000   0    ------   2.1452   3
    13   7    3    1.0000   0    ------   2.1563   3
    14   8    3    1.0000   0    ------   2.1867   3
    15   7    4    1.0000   0    ------   2.1562   3

    p = number of distinct design points
If the a priori knowledge of γ is diffuse rather than sharp, so that only a range for γ can be specified, then one could use the design flexibility available to achieve min B_g for a grid of values of γ spread over the range specified.
[Figure 3.1  Contours of constant V for L(N,N₀,N₁,N₂; γ) = L(4,0,1,1; 1.01), plotted over the (ℓ₁, ℓ₂) plane.]
[Figure 3.2  Contours of constant V for L(N,N₀,N₁,N₂; γ) = L(4,0,1,1; 1.50), plotted over the (ℓ₁, ℓ₂) plane.]
[Figure 3.3  Contours of constant V for L(N,N₀,N₁,N₂; γ) = L(8,4,1,1; 5.00), plotted over the (ℓ₁, ℓ₂) plane. The 5 point design collapses to a 3 point design because ℓ₁ = ℓ₂.]
[Figure 3.4  Contours of constant V for Q(N,N₀,N₁,N₂; γ) = Q(10,0,4,1; 1.01), plotted over the (ℓ₁, ℓ₂) plane.]
[Figure 3.5  Contours of constant V for Q(N,N₀,N₁,N₂; γ) = Q(5,1,1,1; 1.50), plotted over the (ℓ₁, ℓ₂) plane.]
[Figure 3.6  Contours of constant V for Q(N,N₀,N₁,N₂; γ) = Q(6,2,1,1; 5.00), plotted over the (ℓ₁, ℓ₂) plane.]
4. SUMMARY AND CONCLUSIONS

A general regression model η(x) is approximated by the BLUE of a relatively simpler model η₀(x). The choice of η₀(x) within a fixed class is made so as to minimize the bias. The approximation η̂₀(x), subject to satisfying the bias criterion, obtains minimum variance. It is not necessary that the relatively simpler model η₀(x) be made up of terms of the full model. The minimum bias technique has been applied to a situation where η(x) is a rational function and η₀(x) is a simple polynomial function. Examples have been given where η̂₀(x) satisfies both bias and variance criteria. Illustration of the great deal of design flexibility allowed by this method has been provided, in particular:

(1) An infinite number of designs achieving minimum bias also give a V < s (the variance obtained by designs satisfying the Box-Draper conditions when the reduced model contains s terms).

(2) An infinite number of designs achieving min B_g give V < s and allow equal spacing of design levels.

(3) It is possible to obtain designs which will give min B_g for several values of γ, i.e., protection against inexact knowledge of γ may be obtained by gridwise protection over a range of γ values.
5. REFERENCES

Birkhoff, G. and S. MacLane. (1965). A Survey of Modern Algebra, Third Edition, The Macmillan Company, New York.

Box, G. E. P. and N. R. Draper. (1959). A Basis for the Selection of a Response Surface Design. Jour. Amer. Statist. Assoc. 54:622-654.

Box, G. E. P. and N. R. Draper. (1963). The Choice of a Second Order Rotatable Design. Biometrika, 50:335-352.

Elfving, G. (1952). Optimum Allocation in Linear Regression Theory. Ann. Math. Statist. 23:255-262.

Folks, D. L. (1958). Comparison of Designs for Exploration of Response Relationships. Unpublished Ph.D. Thesis, Iowa State College, Microfilm, Inc., Ann Arbor, Michigan.

Gantmacher, F. R. (1960). The Theory of Matrices. Volume 1, Chelsea Publishing Company, New York.

Graybill, F. A. (1969). Introduction to Matrices with Applications in Statistics, Wadsworth Publishing Company, Belmont, California.

Hoel, P. G. and A. Levine. (1964). Optimal Spacing and Weighing in Polynomial Prediction. Ann. Math. Statist. 35:1553-1560.

Karson, M. J., A. R. Manson and R. J. Hader. (1969). Minimum Bias Estimation and Experimental Design for Response Surfaces. Technometrics. 11:461-475.

Kiefer, J. (1959). Optimum Experimental Design. J. Roy. Statist. Soc., (Series B) 21:272-319.

Kiefer, J. (1961). Optimum Designs in Regression Problems II. Ann. Math. Statist. 32:298-325.

Kiefer, J. and J. Wolfowitz. (1959). Optimum Design in Regression Problems. Ann. Math. Statist. 30:271-294.

Rao, C. R. (1967). Linear Statistical Inference and its Applications. John Wiley and Sons, New York.

Scheffé, H. (1967). The Analysis of Variance. John Wiley and Sons, London, England.
37
6.
APPENDIX
38
A.1
Theorem 2.1
A necessary and sufficient condition for the matrix W_gg to be positive definite is that the g_i (i = 1, 2, ..., s) be linearly independent.

Proof
Since W_gg is positive semi-definite, it suffices to prove that W_gg is not singular (Gantmacher, 1960, p. 305). Proving the non-singularity of W_gg is equivalent to proving the following: a necessary and sufficient condition for the matrix W_gg to be singular is that the g_i (i = 1, 2, ..., s) be linearly dependent.

Assume that the g_i (i = 1, 2, ..., s) are linearly dependent. Then there exists a vector of constants c ≠ 0 such that c'g(x) = 0 a.e. on X. Therefore

    c'W_gg c = 0.

So W_gg is not a positive definite matrix. However, since W_gg is positive semi-definite, it must therefore be singular.

Assume now that the g_i are linearly independent. For any vector of constants c ≠ 0, c'g(x) ≠ 0 a.e. on X. Hence,

    c'W_gg c > 0.

Thus W_gg is positive definite and is therefore non-singular.
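Theorem 2.1 can be spot-checked numerically. The sketch below is not from the report: it assumes the region-moment definition W_gg = ∫_X g(x)g(x)' dx on X = [-1, 1] (the report's W matrices are defined in Chapter 2) and verifies that W_gg is positive definite for a linearly independent set g_i and singular for a dependent one.

```python
import numpy as np

# Hypothetical region moments (assumed, not from the report):
# W_gg = integral over X = [-1, 1] of g(x) g(x)' dx, via a Riemann sum.
x = np.linspace(-1.0, 1.0, 200001)
dx = x[1] - x[0]

def moment_matrix(basis):
    G = np.array([g(x) for g in basis])   # one row per basis function g_i
    return (G @ G.T) * dx

# Linearly independent g_i: 1, x, x^2  ->  W_gg is positive definite
W_indep = moment_matrix([lambda t: np.ones_like(t), lambda t: t, lambda t: t**2])
assert np.linalg.eigvalsh(W_indep).min() > 1e-6

# Linearly dependent g_i: third function is 2*g_1 + 3*g_2  ->  W_gg is singular
W_dep = moment_matrix([lambda t: np.ones_like(t), lambda t: t, lambda t: 2 + 3*t])
assert abs(np.linalg.eigvalsh(W_dep).min()) < 1e-6
```

The dependent case makes c'g(x) vanish identically for c = (2, 3, -1)', so the smallest characteristic root of W_gg collapses to zero, exactly as in the proof.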
A.2
Theorem 2.2
The vector of parameters λ = Aβ is estimable if and only if A' = V_1 T, where V_1 is an n×r matrix whose columns are the orthonormal characteristic vectors corresponding to the non-zero characteristic roots of (F'Σ⁻¹F), with s ≤ r ≤ n, and T is an r×s matrix of full rank.

Proof
We know that Aβ is estimable if and only if the rows of A are in the column space of (F'Σ⁻¹F) (Scheffé, 1959). Since the characteristic vectors corresponding to the non-zero characteristic roots of (F'Σ⁻¹F) span the same space as do the columns of (F'Σ⁻¹F), we have the required result.
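The characterization in Theorem 2.2 can be illustrated with a small hypothetical example; the matrices below are invented for illustration (with Σ taken as the identity): build a rank-deficient design, take V_1 from the non-zero characteristic roots of F'Σ⁻¹F, and confirm that any A' = V_1 T lies in the column space of F'Σ⁻¹F while a null-space direction does not.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical example: N = 6 runs, n = 4 parameters, rank(F) = 3,
# so F' Sigma^-1 F has exactly r = 3 non-zero characteristic roots (Sigma = I).
F = rng.standard_normal((6, 3)) @ rng.standard_normal((3, 4))
M = F.T @ F                                   # F' Sigma^-1 F with Sigma = I

vals, vecs = np.linalg.eigh(M)
V1 = vecs[:, np.abs(vals) > 1e-8]             # orthonormal vectors, non-zero roots
r = V1.shape[1]
assert r == 3

# Any A' = V1 T with T of full rank yields an estimable A beta:
T = rng.standard_normal((r, 2))               # s = 2 <= r
A_t = V1 @ T                                  # A' as an n x s matrix
assert np.allclose(V1 @ (V1.T @ A_t), A_t)    # rows of A lie in the column space of M

# A direction belonging to a zero root is not estimable:
null_vec = vecs[:, np.abs(vals) <= 1e-8][:, 0]
assert not np.allclose(V1 @ (V1.T @ null_vec), null_vec)
```

Projecting onto the columns of V_1 is the column-space membership test used in the proof: A' = V_1 T is left fixed by the projection, while any null-root direction is annihilated.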
A.3
Theorem 2.3
If λ = Aβ is estimable and A = W_gg⁻¹W_gf is of full rank, then the matrix A(F'Σ⁻¹F)⁻A' is non-singular.

Proof
Assume that A(F'Σ⁻¹F)⁻A' is singular. Then there exists a vector d ≠ 0 such that

    A(F'Σ⁻¹F)⁻A'd = 0.   (A.3.1)

Using the fact that λ is estimable and premultiplying by d', equation (A.3.1) is written as

    d'A(F'Σ⁻¹F)⁻(F'Σ⁻¹F)(F'Σ⁻¹F)⁻A'd = 0.   (A.3.2)

Since Σ⁻¹ is positive definite, there exists a non-singular matrix P such that Σ⁻¹ = PP'. Thus equation (A.3.2) may be written as

    [P'F(F'Σ⁻¹F)⁻A'd]'[P'F(F'Σ⁻¹F)⁻A'd] = 0.   (A.3.3)

From equation (A.3.3) it follows that

    P'F(F'Σ⁻¹F)⁻A'd = 0.   (A.3.5)

Premultiplying equation (A.3.5) by F'P gives

    (F'Σ⁻¹F)(F'Σ⁻¹F)⁻A'd = 0.   (A.3.6)

Since λ is estimable, equation (A.3.6) becomes

    A'd = 0,

which is a contradiction since A is of full rank. Therefore A(F'Σ⁻¹F)⁻A' is non-singular.
A.4
Lemma 2.1
A necessary condition for Aθ to be estimable is that F_1 be of full rank.

Proof
Assume that Aθ is estimable. Then A = CF_1, where C is an s×N matrix. Since the rank of A is s, the rank of F_1 must be at least s. Thus F_1 is of full rank.

A.5
Lemma 2.2
Using the notation of equation (2.8),

(i) J_FM ⊆ J_MB;
(ii) J_BD ⊆ J_MB;
(iii) when N = s, J_MB = J_BD.

Proof
(i) If θ is estimable then Aθ is also. A necessary condition for Aθ to be estimable is that F_1 be of full rank (Lemma 2.1). Therefore J_FM ⊆ J_MB.

(ii) The estimability condition for Aθ is A(F'F)⁻F'F = A. One can write

    A(F'F)⁻F'F = [I_s : (F_1'F_1)⁻F_1'F_2(I - Δ⁻Δ) + W_gg⁻¹W_gf_2 Δ⁻Δ],   (A.5.1)

where Δ = [F_2'F_2 - F_2'F_1(F_1'F_1)⁻F_1'F_2]. Thus, if F ∈ J_BD, equation (A.5.1) becomes

    A(F'F)⁻F'F = [I_s : W_gg⁻¹W_gf_2] = A.

Hence F ∈ J_MB and J_BD ⊆ J_MB.

(iii) When N = s and J_MB = φ, the result obviously holds. However, when N = s and J_MB ≠ φ, one can write A(F'F)⁻F'F as in equation (A.5.1); then the only way for Aθ to be estimable is for F ∈ J_BD. Therefore J_MB = J_BD when N = s.

A.6
Theorem 2.4
Using the notation defined in equations (2.9) and (2.10),

(i) V_FM ≥ V_MB;
(ii) V_MB ≥ V_RM;
(iii) V_MB ≤ V_BD.

Proof
(i) For any F ∈ J_FM,

    V_FM(F) = N·Tr[(F'F)⁻¹W_ff].

Therefore,

    V_FM(F) - V_MB(F) = N·Tr[(F'F)⁻¹(W_ff - W'_gf W_gg⁻¹W_gf)].

By Corollary 2.1, it follows that V_FM(F) ≥ V_MB(F). Since J_FM ⊆ J_MB (Lemma 2.2, (i)), V_FM ≥ V_MB.

(ii) For any F = [F_1 : F_2] ∈ J_MB, F_1 is of full rank (Lemma 2.1) and

    V_MB(F) = N·Tr[A(F'F)⁻A'W_gg].

One can write A(F'F)⁻A' as

    A(F'F)⁻A' = (F_1'F_1)⁻¹ + Q'Δ⁻Q,   (A.6.1)

where Δ = [F_2'F_2 - F_2'F_1(F_1'F_1)⁻¹F_1'F_2] is a positive semi-definite matrix and Q = [F_2'F_1(F_1'F_1)⁻¹ - W'_gf_2 W_gg⁻¹]. Therefore, for any F ∈ J_MB,

    V_MB(F) - V_RM(F) = N·Tr[Q'Δ⁻Q W_gg] ≥ 0.

Hence, by Lemma 2.2 (i), V_MB ≥ V_RM.

(iii) When F ∈ J_BD, Q = 0 and strict equality holds in equation (A.6.1). From Lemma 2.2 (ii), it follows that V_MB ≤ V_BD.
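Inequality (i) of Theorem 2.4 can be spot-checked numerically. The sketch below reconstructs the notation under stated assumptions — simple model g = (1, x), true model f = (1, x, x²), region moments over X = [-1, 1], V_FM(F) = N·Tr[(F'F)⁻¹W_ff] and V_MB(F) = N·Tr[A(F'F)⁻¹A'W_gg] with A = W_gg⁻¹W_gf — which may differ in normalization from the report's equations (2.9) and (2.10).

```python
import numpy as np

rng = np.random.default_rng(1)

# Assumed notation (reconstructed, may differ from the report's Chapter 2):
# true model f(x) = (1, x, x^2), simple model g(x) = (1, x), region X = [-1, 1].
t = np.linspace(-1.0, 1.0, 200001)
dt = t[1] - t[0]
f_basis = np.array([np.ones_like(t), t, t**2])
W_ff = (f_basis @ f_basis.T) * dt / 2.0       # moments, normalized by region length
W_gf = W_ff[:2, :]                            # g consists of the first two f_i
W_gg = W_ff[:2, :2]

# A random N-point design and the full-model design matrix F
N = 6
xd = rng.uniform(-1.0, 1.0, N)
F = np.column_stack([np.ones(N), xd, xd**2])

A = np.linalg.solve(W_gg, W_gf)               # A = W_gg^-1 W_gf = [I : W_gg^-1 W_gf2]
FtF_inv = np.linalg.inv(F.T @ F)

V_FM = N * np.trace(FtF_inv @ W_ff)
V_MB = N * np.trace(A @ FtF_inv @ A.T @ W_gg)
assert V_FM > V_MB                            # Theorem 2.4 (i)
```

The gap V_FM - V_MB equals N·Tr[(F'F)⁻¹(W_ff - W'_gf W_gg⁻¹W_gf)], and the Schur complement in parentheses is positive semi-definite, so the inequality holds for every design, not just this random one.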
A.7
Theorem 2.5
To the model given in equation (2.7), η_1 = θ_1'f_1 + θ_2'f_2 = θ'f, add t more terms, say θ_3'f_3. The new model is

    η_2 = θ'f + θ_3'f_3.

Let C_g = {η_0 : η_0 = α'g, g_k = f_k (k = 1, 2, ..., s)} be the class of simpler models. Let V_i denote the averaged variance of the min V|min B_g linear estimator of η_i (i = 1, 2). Then V_1 ≤ V_2.

Proof
Let the design matrix F* = [F : F_3] be partitioned so as to match the partitioning of the model η_2. Then V_1 = N·Tr[A_1(F'F)⁻A_1'W_gg], where A_1 = [I_s : W_gg⁻¹W_gf_2], and V_2 = N·Tr[A(F*'F*)⁻A'W_gg], where A = [A_1 : W_gg⁻¹W_gf_3]. The matrix A(F*'F*)⁻A' may be expanded as follows:

    A(F*'F*)⁻A' = A_1(F'F)⁻A_1' + U'Δ⁻U,

where U = [A_1(F'F)⁻F'F_3 - W_gg⁻¹W_gf_3]' and Δ = [F_3'F_3 - F_3'F(F'F)⁻F'F_3] is a positive semi-definite matrix. Therefore

    V_2 - V_1 = N·Tr[U'Δ⁻U W_gg] ≥ 0.

Hence V_2 ≥ V_1.
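Theorem 2.5 can also be spot-checked numerically under the same reconstructed notation as before (assumed, not taken verbatim from the report): enlarging the polynomial model from (1, x, x²) to (1, x, x², x³) cannot decrease the averaged variance of the min V|min B_g estimator of the simple model g = (1, x).

```python
import numpy as np

rng = np.random.default_rng(2)

# Assumed notation (reconstructed): g = (1, x), full model f = (1, x, x^2),
# added term f3 = x^3; all moments over X = [-1, 1].
t = np.linspace(-1.0, 1.0, 200001)
dt = t[1] - t[0]
basis = np.array([np.ones_like(t), t, t**2, t**3])
W = (basis @ basis.T) * dt / 2.0
W_gg = W[:2, :2]

N = 6
xd = rng.uniform(-1.0, 1.0, N)
F_star = np.column_stack([xd**k for k in range(4)])   # design for the enlarged model
F = F_star[:, :3]                                     # design for the original model

A1 = np.linalg.solve(W_gg, W[:2, :3])                 # A1 = W_gg^-1 W_gf
A = np.linalg.solve(W_gg, W[:2, :4])                  # A = [A1 : W_gg^-1 W_gf3]

V1 = N * np.trace(A1 @ np.linalg.inv(F.T @ F) @ A1.T @ W_gg)
V2 = N * np.trace(A @ np.linalg.inv(F_star.T @ F_star) @ A.T @ W_gg)
assert V2 >= V1                                       # Theorem 2.5
```

The difference V_2 - V_1 is the trace of a product of positive semi-definite matrices, mirroring the U'Δ⁻U term in the proof.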