PROTECTING MAIN EFFECT M:>DELS
AGAINST TWO FACTOR INTERACTION BIAS
IN FRACTIONS OF 2 r FACTORIAL DESIGNS
by
CHARLES KENNETH BAYNE
Institute of Statistics
Mimeograph Series No. 919
Raleigh - March 1974
iv
TABLE OF CONTENTS
Page
1.
INTRODUCTION. , . . "
. .
2.
DERIVATION OF THE ES1'IMATOR
2.1 Experimental Design Model
2.2 Existence of The Minimum Bias Estimator
2.3 Relationships Between The Minimum Bias Condition
and The Box-Draper Condition
3.
2
r
FACTORIAL EXPERIMENTS
3·1 Second Order Model
3·2 Fractional Factorial Designs
3·3 Approximating A Second Order Model With A First
1
6
6
7
13
17
17
20
Order Function
24
4.
SUMMARY AND CONCLUSIONS
48
5.
REFERENCES
50
6.
APPENDIX. •
52
6.1
6.2
6,3
6,4
Regional Moments
Variance Term V
Theorem 2.2 •
Theorem 2.3 . .
53
53
53
54
1.
INTRODUCTION
Many experimenters encounter the problem of studying a process
involving variables defined on a region "R" called the region of
experimentation.
Often a mathematical model which is a function of
the process variables and parameters is used to approximate the response of the process over such a region,
R
Discrepancies between
the approximate response and the true response of a process stem from
two sources
1.
the experimental or sampling error
(~.~.,
variance
error), and
2.
the inadequacy of the approximating
function to represent exactly the true response
model (i.~., bias error).
Even when an experimenter
kno"rs
the form of the true response model,
he may still want to use a different form for the approximating
function for simplicity or because of constraints on his time and
cost.
To minimize the effect of variance error and bias error, the
experimenter needs to know how to conduct an experiment to estimate
the parameters for his simpler approximating function.
Prior to 1959, many experimental designs were proposed to
minimize the variance error of the approximating function without regarding the bias error.
In 1959, Box: and Draper considered both the
variance error and the bias error by studying designs that minimize
the mean square error (M.S.
E.)
averaged over the region of
2
experimentation
rnl(;Y investigated til,,::
R.
cp(~) = ~:lJ~'l + ~~2 .'
response can be represented as a polyclOmial;
x e R , of degree
d
The vector
K - 1
t-
terms required for a polynomial of degrefc
of the additional terms from order
and
~2
approxim8t,~d
response polynomial is
d -
v!i t'c
1
f)_I assmning that
E (e. e.)
1 J
MoS.E
=:
0
d -
1
and
x
d + k.
1
of coefficients
by a polynomial.;
a vector
J.S
-2
'I'he true
~{E.l
Y =
of'
13 = 0
-2
l-
r(::~}JonE'·e
~(x~) + e.
for
where
1
--.1
at the i
th
experimental point (i
2
E ( e.) =
E(e.) = 0 ,
1
O'
2
1
,
= J,2,
and
ci , then Box and Draper defined the average
i:j:
over R to be:
(1- 1)
J
where
~l
with
b'c'l:lg Uk ordinary lea ,it. squaees estimator of
b
-1
If the observed
n) is y.1 =
is made up of' the
~l
to order
vectol';~
being the curTt'sponding
degree
d
wbencu" ,,'ue
~;i.t\)ation
[2
-1 =
fR
d.X
Tbe average M.S.E. is the sum of a bias term (11»)
c
and a variance term (V)
(:!:.o ~o,
,J
B + V ), where
(10 2)
B
the average square bias over
R, and
v
(103 )
the average variance
(lVel'
experimental design
R
:*
R.,
'i'ri tnrecl":::c.. t.. to tilE;
, the minimization of
tribution of' the rela.tivt;' :llagnitudes of
T
.1:>
,J
cnOl,~e
oj' an
"depends on the con-
V,
Box and Draper
found that unless
V is at least four times as large as
design moments for the minimum
sign moments for the minimum
J
B, the
design are nearly equal to the de-
B design, ignoring the variance term
V
completely.
Because the contribution of
B dominates the contribution of
V
to the average M.S.E., Box and Draper proposed that:
~l
1.
estimate
by least squares, and
2.
find an experimental region
the bias term
R*
which minimizes
B.
is a matrix of the design points in
R* and if
ordinary least squares estimation is used, then a necessary and
sufficient condition for a design to minimize the bias term is:
(1.4)
where
M..
lJ
i
=
1
(X~X. )
n
1 J
:C j
i
and
1, 2,
H..
lJ
and
j
1, 2 .
The condition on the design momemts given in (1. 4) is called the BoxDraper condition.
Karson, Manson and Hader
in a different manner.
1,
estimate
and
(1969)
approached the minimization of
They proposed that:
~l
so as to minimize the bias term
B,
J
4
2.
find an experimental design
R*
which minimizes
the variance term.
With this method, they were able to find designs with smaller values
of
J
than those designs which satisfy the Box-Draper condition.
In each of the following applications of the Karson, Manson, and
Hader method, the approximating function is a low order polynomial
with variables defined over a continuous space.
TRUE
MODEL
1-
2.
APPROXIMATING
FUNCTION
HIGHER ORDER
POLYNOMIALS
cx
cp = a + be
AUTHORS
LOWER ORDER
POLYNOMIALS
KARSON, MANSON,
AND HADER (1969)
LOW ORDER
POLYNOMIALS
PARISH (1969)
3.
MDCTURE PROBLEM:
POLYNOMIALS
OVER REGULAR
N-GONS
LOW ORDER
POLYNOMIALS
OVER REGULAR
N-GONS
PAKU, MANSON,
AND NELSON (1971)
4.
RATIONAL
FUNCTIONS
LOW ORDER
POLYNOMIALS
COTE, MANSON
AND HADER (1973)
This dissertation is concerned wi th applying the Karson, Manson,
and Hader method to the classical experimental design situation where
there are mapy factors each having a finite number of levels.
this situation the region of experimentation
For
R consists of a finite
number of lattice points.
The general results derived for the classical design situation
are applied to the problem experimenters have in using a simple
approximating function in
21'
factorial experiments.
An experimenter
5
may d~~si.::'c; to use a main effects model (~.. ~., a first order functi on)
consisting of the estimated mean and main effects to approximate a
process while realizing that two-factor interactions may not be
negligible.
The experimenter has the uSual recourse of performing a
lea~,t
resolution IV desigrl so that all the ordinary
squares estimators
of the main effects are not biased by two-factor interactions.
standard method does minimize the bias term
B
This
for regular fractions.
However, irregular fractions may ha'Je resolutio.n IV, but fail to
minimize the bias
minimize
B
due to the two-factor interactions.
In order to
B, the experimenter can use an .irregular fraction that
allows the existence of minimum bias estimators of the mean and main
effects in the approximating function.
Tables and criteria are
presented for irregular fractions so that an experimenter can choose
from among those designs which allow the existence of minirrrwn bias
estimators.
6
2.
2.1
DERIVATION OF TffE ESTIMATOR
Experimental
An
Desig~
Model
experiment is to be performed w.ith a set of variables that
have a finite number of levels
The experimental points are selected
0
R, which is a set of points re-
from a region of experimentation
presenting all combinations of the levels of the expeYimental
. bl es·
varla
point in
Th e reglon
.
. tS
a se t o.t' m- 1. a,tt·,lce pOln
D'
L\
lS
I'll.
th the ...; th
(~~. L1:. ~l' '.)
R being represented by a vector
~
i
=
Two examples of such a region are:
1, ..". , m
E
for
R ,
the set of points
from an N-way classification experiment where each classification is
from a fioi te population; and the set of points from a factorial
experiment where each factor has a finite number of levels,
.th
. t in
At t 'ne l
- pOln
let
R ,
of a process in the region
cp( -."1
z. )
represent the true response
R at that point.
The vector of true
responses is assumed to have the form:
SQ =
~l
a
k X 1
(2.1)
C
Z1-1
+ Z~2
C
is a
PI X 1
vector of parameters, and
~2
is
vector of parameters.
The matrices
Z1
and
Z2
necessarily full column rank.
in the true model (2.1) are not
To facili tate the derivation of the
minimum bias estimator, the matrix
column rank.
Z1
is reparameterized to full
Such a reparameterization can always be done so that no
loss of generality occurs.
Whether or not
Z
"
c:
is of full column rank
7
is unimportant in tbe derivation of the minimum bias estimatof a;ld
therefore, no assumptions about the rank of
model with matrix
Xl
Zl.
Z2 are made.
The true
reparameterized to the full. column rank matrix
is expressed as:
Xl~l + X~2
SQ
where
5Q
= [cp(~~"l)'
l'
Xl = [x.
--1.._
g
••
X
[X l :X 2 l ,
E?
[e{ ~2J' .
,
••
0
~J1J'
~l
cp(x )J'
.,
-m
,
,
~
x.
.-
-~
(2.2)
a matrix of full column.
is a
p X 1.
,
i
raD..k
p
[x' ·x' -J'
-:il.-i2
vector of' parameters,
... ,
1, 2,
X = Z2
2
m,
,
~2 =.£2 ' and
In practice, the true model is almost always unknown and is
therefore usually approximated by a function that has fewer terms than
the true model.
terms in
Let the estimates of the p-parameters in
~l
be the
£1 of the approximating function:
(2.3)
Each estimator in
b
-1
is choosen to be a linear combination of the
observed responses of an experimental design.
*
R
is a subregion of
R
made up of the n-experimental points
and the vectors associated with
A linear transformation of the
chosen to estimat(~ E?l
2.2
The experimental design
C!:.· £',
R*
(n ~ m),
are labeled with an asterisks (*),
n X 1.
vector of observations
l*
is
*
£l = T'Y... ) •
Existence of The Minimum Bias Estimator
Two properti0s that are desired
ill
an alirraximating function are:
small bias in estimating the true model; and small variance
(!.. £.,
8
Var[~:l
).,
The criterion that is used to measure these two properties
jointly is the summed mean square error, which is normalized with
respect to the number of' points in the experimental design and the
variance of the observed response,
point in
R is represented
by
.
The observed response at the i
y(x.) =
-~
~(x.) +
-l
i = 1, 2, ... , m and the random variable
2
E(e .-)
~
=
2
a-
, and
o
E(e.e.)
l
J
for
i
'*
ei
e.l
th
where
E(e.) = 0 ,
has
l
The normalized, summed
j
'"y , is then defined as:
mean square error (SMSE) of
X.
-l
The SMSE is the sum of a bias term
E:
(2.4)
R •
B and a. varia.nce term V.
J =
Bias Term + Variance Term = B + V , where
and
m
V
=
n
ma-
t
2 . 1
(2.6)
Var[Y(x.)J .
~=
-l
The Karson, Manson, and Hader approach is used to achieve
acceptably small levels of
J.
First an estimator
£1
is selected
so as to minimize the bias term B , and then an experimental design
with an acceptably small variance term
V is chosen to satisfy other
design criteria desired by the experimenter.
Let
.§:.l "" E(£l) , then SUbstituting
the bias term
B becomes:
(2.2) and (2.3) into (2.5)
9
where
H..
The
matrices can be shown (see Appendix 6.1) to be:
H..
lJ
.1., 2
i
lJ
1
H.. =.- X!X.
lJ
m 1 J
for
i = 1, 2,
The first derivative of
and
arId
1, 2 .
j
(2.8)
j
(2·7) with respect to ..§:1 is:
Since the second derivative with respect to
'~l
is a positive
defini te matrix, minimum B is achieved by setting (2.9) equal to zero
to get:
(2.10)
Because the matrix
Zl
column rank matrix
Xl' the matrix
ordinary inverse.
in (2.1) is reparameterized to the full
is of full rank and has an
H
l1
Therefore,
(2.11)
Let
£1
be a linear transformation
from a conducted experiment, then
T
~1 =
of the observed values
TIl.*
where
T is a
X.*
n X p
matrix, and
*
* = T'(X1.
* :X*2)~E(b-1 ) = T'E(y
- ) = T'm
:J::
If
£1 is to minimize the bias term B,
=
T'X*_~..
E(b_)
-.L
(2.12)
must satisfy both
(2.11) and (2.12); that is:
T'X*~
= a~
(2.13 )
10
'This means that the matrix
is the
solutio~
to th8 following matrix
equ.ation;
*
'I"X
(2.14),
To solve
unique.
If
mX n
-
=
S
is a non-singular
SS S
The matrix
S
ordinary inverse; and if
S
lS
verse can be represented by
The solution to
(1969).
a.
(2.14)
the generalized inverse of' a matrix is used.
generalized inverse of a
such that
=
(2.14)
S- =
matrix
S
S
is a
n X m matrix
A
S
always exists, but may not be
mx m matrix,
S
-
_. S -1
,
the
singular, the ger.eralized in-
(3 / S)-S'
is given as Theorem 6.3.3 in Graybill
This theorem says that a necessary and sufficient condition
for the matrix
l'
to exist and satisfy
(2.14)
is:
( *)-x* =a,
aX
or
a(x' *X* )-X' *X* = a .
Condition (2.15) implies that
~l
experimental design
existence of a minimum bias estimator .if and only if
estimable.
l'
When
~
*
X
allows the
a~
is
is estimab le, the general solutio!'. for the matrix
is:
T'
where
* -)( -x' -)(-
a(x' X )
G is an arbitrary
-)(
-)( * -x' * ) ,
+ G(l - X (X' X )
p X n
matrix.
(2.16 )
11
Since
estimators
G is an arbitrary matrix, there are aQ infinite number of
*
Q.l
B.
that can minimize the bias term
= T/~
possible, the arbitrary matrix
If
G shoald be chosen to minimize the
variance term
V
and therefore
G, it is sufficient to write the expressions for
and
To show the variance term is a fanction of
'" .
Var(y)
* *)
E (~:::,,'
=
Icy
2
.
Therefore, the variance of
52* + e *
*
The vector of observed values of an experiment is
where
T,
Y:..
El
,
is:
Using the results of (2.17), the variance of the approximating
function at a point
X.
-l
E
R
is:
Var[~(x.)J
= -x!lVar(bl)x'
-l
l
-l l
SUbstituting
(2.17)
and
(2.18)
=
X!I~/T)x'lcy2,
-l
(2.18 )
(2.6),
V
-l
into
the variance term
can be expressed as:
m
V
The expression
(2.19)
n L: x ! r(T I T)x. , .
-l
-l.L
m
i=l
can be shown (see Appendix
equivalent to a trace (~.:::...,
Tr
6".2)
to be
= trace) of a matrix:
V = n Tr(H TIT)
11
With the variance term
V in
(2,20)
(2.20)
now as a function of
following theorem shows how to select the arbitrary matrix
order to minimize the variance term
V.
T, the
G in
12
Theorem 2.1
For the linear transformation
X* (X' *X* ) - X' * ) , where
variance term
T'
a(x' *X* ) -x' *
==
G is an arbitrary
V is smallest when
G
==
0
p X
+ G(I -
matrix, the
r1
(~'~')
V(e::::=: 0) s:
V(G:j: 0»).
Proof
Substituting the
term
T matrix in
(2.16) into (2.20), the variance
(~.~.
V as a function of the matrix
,
V(G) ) is the sum of
two traces:
=
t'1
V(G ) :=: m
Tr[X a ( X' **X ) X' **
X (X' **Xl
) a'X']
I
+
)
iiin Tr[Xl G(I - X* (x' **
X ) - x' * ) (I - X* (x' * x* ) -x' * )G'X.1
.
The matrices in both traces are non-negative definite (Graybill,
(1969», and therefore, both traces are greater or equal to zero,
cause only one of the traces contain the matrix
Be-
G, V(G :=: 0) s:
V(G :j: 0) •
We shall call the estimator having the matrix
£1
G
o
(~.~.,
*-l(._~**.
==
a(x
x ) x y ) the "minimum bias estimator",
To find the minimum value of the bias term
estimator is substituted into
(2.~()
B
the minimum bias
to give:
(2.21)
In the class of designs that allow the existence of a minimum bias
estimator~l
' the 'falue of
B
MS
is the same for eve:::y design.
13
From this class of designs, the experimenter is free to choose the
design for which the value of the variance term
select a design with an acceptably small
V is smallest or to
V using other design
cri teria.
If the ordinary least squares estimator
b
-1
==
(X'1*X*
) -1
1(' *Y*
1
1 -
is used instead of the minimum bias estimator, the value of the bias
term
B is:
(2.22)
M.. -
1: X.,*X.*
for i
1, 2 and j == 1, 2 are
n ~ J
B
does riot have a constant value
the matrices of design moments.
LS
where the matrices
~J
for every design and is at least as large as
for all designs.
B
MB
B ~ B
)
MB
LS
Designs that allow a minimum bias estimator and
also satisfy the Box-Draper condition will have
2.3
(~.~.,
B
MB
==
B
.
LS
Relationships Between The Minimum Bias Condition and The Box-
Draper Condition
The following theorem is useful in studying the relationship
between the minimum bias condition and the Box-Draper condition.
theorem by Rhode
(1965)
A
on the generalized inverse of a partitioned
matrix is used in the proof of Theorem 2.2.
Theorem 2.2
The minimum bias condition
(!.~.,
a~
only if the following two conditions hold:
estimable) holds if and
14
Proof
(See Appendix 6.3).
The following corollary is a consequence of Theorem 2.2.
Corollary 2. 1
If
X,*
is of full column rank and the Box-Draper condition is
.L
satisfied then the minimum bias condition is satisfied.
Proof
If
(I the
*
Xl
is of full column rank then
M~~ll)
Box~Draper
0
M
ll
is non-singular and
which satisfies condition one in Theorem 2,,3.
condition holds, then
(H~iH12
-
M~~12)
=
0
If
which
satisfies the second condition in Theorem 2.3.
Corollary 2.1 shows that those designs that satisfy the BoxDraper condition are a subclass of all designs that satisfy the
minimum bias condition.
For this subclass of designs, the minimum
bias estimator becomes the ordinary least squares estimator.
Theorem 2.3
If the minimum bias estimator
£1
Til*
exists then:
15
'Nhere
F
P.:cooi'
(See Appendix
6.4).
Theorem 2.3 means the minimum bias estimator is the sum of the
ordinary least squares estimator and another component that is the
product of the matrices contained in the Box-Draper condition and a
k X 1
vector.
In determining when the minimum bias estimator becomes the
ordinary least squares estimator, it is sufficient to compare the
variance term
V of the two estimators,
need not actually be computed).
the variance term
(~.~.,
the two estimators
Using the form of
T in Theorem 2.3,
V for the minimum bias estimator can be expressed
as the sum of two traces:
(2,23 )
where
R'
=
(H~tH12
..,
M~~12)D-X2*F
0
The first trace is the
variance term using the ordinary least squares estimator:
and the second trace is of a non-negative definite matrix.
trace terms are greater or equal to zero,
if and only if the matrix
R
=0
When
V ;;:: V
.
MB
LS
Since both
Equality holds
R = 0 , the minimum bias
estimator becomes the ordinary least squares estimator, therefore,
16
V = V
MB
LS
is a necessary and sufficient condition for the minimum
bias estimator to equal the ordinary least squares estimator,
Theorems 2.2 and 2.3 illustrate the additional flexibility
available in using the Karson, Manson, and Hader method over the BoxDraper method.
When an experimenter is using designs that satisfy
the Box-Draper condition, both methods give the ordinary least squares
estimator and therefore, the same variance term
V and bias term
B.
However, using the Karson, Manson, and Hader method, the experimenter
has the advantage of choosing from a larger class of designs while
still minimizing the bias term
B, and thus obtains greater design
flexibility to meet other criteria which an experimenter may wish to
impose.
17
30
3.1
2
r
FACTORIAL EXPERIME::I'.!.:'3
Second Order Model
When an experimenter is wor.ki.ng witt r-fa, tors each at two
levels, the region of experimentatina
E, c:oonsL;ts of the
m = 2
r
points representing all combinatior.s of the-=: higb and low levels of the
r-factors.
o-ver the region
R, a tru? response model consisting of
the mean, the main effects and t11o'~ tw(~·-fact,)r interactions (~:~., a
second order model) is approximated bJ a fi'.'st order function consisting of an estimated mean and estimaterl main effects.
To illustrate the notation to be used :n a general
experiment, consider an example with
si tuation, let
denote the overall m,:;an;
IJ,
( i == 1, ... , r)
r == 2
th
at the g - level,
the
19
and
g == 1, 2
r
factorial
In the general
factors"
T.
2
.th
l-
(Tij)gh
factor,
the two-
th
.th
~ ~'Jor-" a t ~h
factor interaction of the ith and J
- .Lac
'-,_le gth
- an d h - level.
J
Then for a
2
2
se
factorial design the t.:cue model;
-- Zl'£l +
Z~2
is
written as:
CPl
CP2
1
1
0
1
0
1
1
0
0
1
,
1
0
.L
.1
0
.1
0
1
0
1
r:ll
rr
.L
12
~-,
~~2.l
rp
J..
c2
1
0
0
0
(T 12 )11
0
1
0
0
(T l2 \2
C
0
1
()
(T 12 )21
bC'
()
0
1
~
(T12 )22
l8
In tbf'; matr::'x
meter in
c,.
IJ,
parameter
.L
Itf; first column vector
-.L
the mean
Z_ ) each column vector corresponds to a para-
~ll
thi'! sec:ond column vector
J
11'11) etc.
(T .. )
.£2
Z2
and the column vector
parameter is denoted by
L
lJ g!l
('7)
-=-ij g[l
columrl vector
is associated wi th the
SimilaI'ly) each colwnn vector in the
matrix co.rresporL18 to a farameter in
associated wi th the
_zO = 1 ) is associated wi th
(z .. ) J , . .
-:l.J gil
The
is derived by multiplying the cOrreSIJondi ng
-
z.
elements of the m9'::"n effect column vectors
-lg
and
z·h .
-J
elementwise mUltip.li cation of vectors is called a Hadamard product
(Margolin)
the
2
2
(1969b))
and will be denoted by
factorial example) the
product of
~ll
and
z. I:)z·h
-lg
-J-
For
(z'·)h"
-lJ g
vector is the Hadamard
~22
z
-11
~22
1
o
o
1
1
1
o
o
o
o
1
o
or
Using the Had91llard p.roducts) the true response model for a
?
2~
factorial example may be written as a linear combination of the vector
and the ·vectors that are associated wi th the main effects:
~o
se
I1Z
-0
+
L
2
L:
L.
"
"
i=l. g=l
T. z.
+
19-1g
2
2
L:
L:
g=l h=l
(T 12 )gh (~lg ~
~2h)
.
19
=-c~
matrix
tl:c,;-c tru.2 ;nodel,. the
X.l
uf' i'ull
Zl
matrix is reparameteri20ed to the
rank.
<'::011;;]1111
For a
orthogonal Y'epararneterization of
x.l
-
21'
= 20'1
l
factorial experiment, the
i
~l'?
_ '
-
= 1, ... ,
r
; is
used, ,so that for eactJ factor, the high level is denoted by a plus one
(+1)
8.r~d thc~ loW" .lE':vel is denoted by a minus or18 (-1)
The set of
repar9..llleterized colwnn vectors for a complete factorial
all
~y
~l
... ; ... ; ... ;
~
~l 0)
linearLy independ,,"nt and orthogonal.
vector of ones (~ . .£. ,~O =
lJ ,
x 1 •
-1')
These corwmn vectors are
The mean column vector is a
and the interaction column vectors are
the Hadamard products of' main effect column vectors.
vectors in the matrix
is a subset of the
X"
.L
The column
r
FD(2 ) column vectors
consisting of trw mean colwnn vector and the main effect column
. .,
vectors;
x ) •
~
Let tte parameter
-1'
F = Til - T
i
i2
Then. the repararneterized true model;
represent the i
.5Q
=
Xl~l + X~2
th
o
for the
factorial (-"xa.mple i.s:
CPl
1 +1 +1
CP2
1- T..:... -1
.,
F'
1
1 0 0 0
(T12 )11
o
1 0 0
(T 12 )12
o
0 1 0
(T 12 )21
+
'0
CP4
1
,
- .,,
tl
··1
F
2
0 0
o
1
(rp
factor.
)
\-12 22
2'-
20
£'a:;soria.l experimeEt; the second order model
:Fc.r' a gericI"'al
l:it~ear
may be wci ttw, as a
(z
.
-lg
t)
and
h
z . h)
.....-.l.o ••• ; r
t' C.2
)
-J ..-
i + 1) ..• )
j
I'
?S.i" and
?S.O)
g = 1, 2
1) 2
..'
!12
('ombination of tl1e \'f.;ctejrs
i1?S.0 +
·,.._1
1:
]1.
x. +
2.··'~'l
i=l
I'
2
2
1:
I;
I;
1:
i=.l j"-i+1 g=1 h=1
.
(T .. ) . (z. ::> z'h)
lJ go -lg
-J
(3.1)
The parameters in t"le approximating functi on for the second order
model in (3.1) ax::: 2scimateri from a fraction of the points in a
complete
21'
facterial design.
The construction of these fractional
factorial designs is descTibed in tr:e next section.
3. 2
Frac~ional
Facto"L'ial Designs
The second orde1:' model in (3.1) is approximated by a first order
function)
y == X* b
1'-1
A
consisting of an estimated mean and estimated
main effects:
A
y.
An
n X 1
A
*
--u
i1X~
I'
A
+
-;'t
J..-1
i=l
co11}_mn 'rector in the experimer:ta.l design matrix
i,~
formed from a. 81A.bset of elements in the
correspondin.g c:ol.cmn vector in the matrix
For each exp'2rimer,tal d,.::c:ign of
column vectors
*
t~':at
is ca.L1..ed
0
• • •_
a
In
,
••
u
x 'I
,
-I"
~l
\;)x
fractional factorial design
*
o -r
x*}
-2'
Si[~(;c
n:::;
x.l '"
n-points; there is a set of
.)1;
FFD(n) = t2.C. C) .~:~.'
points is
(3.2)
F.x.
I;
t.hr-: col.;J1lln vectors in
the number of experimental
.£i'FD(n)
are not linearly
21
independent unless
n = m.
facto.~'ial
Fractional
designs are C0n-
structed tJy specifying Linear relationships ttat the column vectors
of
FFD(n)
must satisfy,
Two column vectors
x*
and
*
) f or
a
aliased if'
x
*
2:
taken to be + 1
a';l
in
21'
*
in
';l
> 0
YFD(n)
are said to be
The alias scalar)
factorial experiments,
2: a )
is
Fractional
factorial designs in this dissertation are constructed by specifying
the column vectors in
YB'D(n)
*
~O=l:)'
vector (~'~')
that are aliased 'iii th the mearl column
These column vectors form
all
relation set (IDR) and this set is denoted by setting
identity
"III
equal to
the letters representing the factors or interaction of factors that
*
are a.liased with
~o
- ABDE means the following column vectors in FFD(n)
aliased:
*
x
-4
x* =
-5
<:;)
~*
1
(:J
*
-~
* ~ ~3* = ~)
* ~3*
~2
.
(:J
*
~4
(:J
Each column vector in
* =
~5
~l
CDE
I = + ABC
For example the IDR ,
*
-~ ,
and
are
x*
-1
(:J
x * (;)
-2
IDR is called a word and
the Hadamard product of any two words forms a word that is also in the
IDR
A set of words in an
IDR
that are not the Hadamard products
of each other, generate the remaining words by taking the Hadamard
product of all possible combinations of these generators.
of s-generators)
of 2
s
- 1
2
s
- s - 1
words in the
be specified to define an
From a set
words can be generated to give a total
IDR
IDR _
Therefore, only the generators need
Two
IDR I s
~vi th
tht; same
generators, but with different alias scalars are said to be in the
same
IDR
family.
Once an
IDR
column vectors in
is defined .• one can compute all of the aliased
FFD(n)
by using the 'prop(~rties of the Hadamard
22
*
*
product,
For example, if
~l
*
then !l
is aliased with
* ~ ~3"*
- ~2
~2
c:J
*
IDR
x * c:J x*
(since
-1
-1
=
by ~l ' we get
1 )
-
x*
and
generators partitions the
(0)
-
-1 -
FFD(n)
where each class is the set of the
aliased.
is a word in an
IDR,
because by mUltiplying both
7(-
sides of the
*
!o
~3
c:J
x
-)(-
-1
x*
x
:::J
= -
-0
*
-1
u
*
~2 e> ~3
x* .
-1
set into
2 '"
*
(~
An
r s
2 -
=
*
~2 (;)
!3*
IDR wi th sequivalence classes
column vectors that are
If the word length of a column vector in
FFD(n)
is
defined as the number of main effect column vectors in the Hadamard
product that are used to define that column vector, then the column
vector in each class with the smallest word length is used to
represent the equivalence class.
equivalence classes by an
IDR
The partitioning of
FFD(n)
into
is analogous to partitioning the set
of integers into equivalence classes using modulus arithmetic.
With
the Hadamard product as the binary operator, the equivalence classes
form an algebraic group with the class containing
*
~o
being the
identity element of the groupFor example, the
1-1
Z"
)
I = AB , partitions the set
IDR,
FFD(n =
* ~l'
* ~2'
* ~3'
* ~l* (;) ~2'
* ~10
* ~3'
* ~2c:J
* ~3'
* ~10
* ~20
* !3}
*
= {!o'
into four equivalence classes where each class is represented by the
column vector with the smallest word length:
*
*
*
-J'-
(!y !l (;) ~2 0!3}
23
The multiplicati on tal: le of the group that is formed by these
equivale:lce ':lasses 8.'1d tf.1F:3
table of letter
(~orresponding
represe:ltati ODS a.re as follows:
~
+.-
7(
~~Cl
2.i:.l.L
-x
*
7(-
~l
*
~3
*
~3
*
~l
<;)
*
~3
x,*
-.L
~
*
~l
*
-lE-
~
~3
*
~3
*
~l <:J ~3
~n
v
*
~O
~
*
~
I
A
*
~
~3
*
I
I
A C AC
~3
*
A
A
I
*
C
C AC I
*
AC
~l
"*
~3
~
~l
*
*
"*
x
-1
*
~3
-..L
*
2.i:. 1
3
x_*
~O
2.i:. O
2.i:."*
~l
~O
I
('
v
AC
AC C
A
AC C A I
For O1'.e to corlstr...lct an experimental design using a specified
IDR _' he needs only to determine the main effect column vectors.
Once the main effect column vectors are determined, the interaction
column vectors are the Hadamard products of the main effect column
vectors while the mean cO.lllffirl vector is
*
~O =
1:
An
IDR
wi th s-
generators places s-restrictions on the r-main effect column vectors
and the remaining
Tltprefore,
t.rh:;
r
- s
main effect column vectors are unrestricted.
main effect columrl vectors of a complete
factorial design Carl be constructed and the remaining
r - s
s-main effect
colwru; vectors are defined using the
IDR.
In this manner, a
frac;tional factorial design wi th
2
r-s
points (2:.:~.) FE'D(n)
n
FFD (21'-s) ) can bF' cons trncted from an
IDR with s-generators.
24
If the number of points in a
design is
n = 2
r-s
regular fraction.
(~.. ~., K
K2
fraction of a
21'
factorial
then the design is called a
= 1)
Regular fractions can be combined to form
fractional factorial designs with
, .. )
-s
n
= K(2 r - s ) points (K = 3, 5,
which are called irregular fractions.
The column vectors for
the main effects of an irregular fraction with
r s
K(2 - )
points are
constructed by combining the main effect column vectors of
regular fractions from the same
IDR
K
family.
Regular and irregular fractions are classified by the order of
the effects that can be estimated from the design.
If main effects
are called first order effects, two-factor interactions are called
second order effects, etc., and those effects that are assumed to be
zero or near zero are called negligible, then a fractional factorial
design is of even or odd resolution (John, (1971)) if:
Order of
Estimable Effects
Resolution
Order of
Negligible Effects
2t + 1
t or less
t + 1 or greater
2t
t - 1 or less
t + 1 or greater
For regular fractions, the resolution of the fraction is equal to the
length of the smallest word in the
3.3
IDR
0
Approximating A Second Order Model With A First Order Function
If the estimator
£1
in the first order approximating function
(3.2) is to be a minimum bias estimator, a design
so that
G~ = (I:H~~12)~ is estimable.
*
Xl
must be chosen
Since the matrix
Xl
is of
25
full ra::k and i s
ortllogo~1a.l,
the identity matrix of size
tlle matrix
P
==
The column vectors o£ the matrix
column vectors of
Xl
Hll
~
==
XiXl
==
r + 1 , and the matrix
Zl
~
mI p == I p
H
12
is
can be expressed in terms of the
by the relation
z.
-:1.g
==
(_l)g+l[x. + (-1)g+11J!2 ..
-:1.
-
Therefore, the interaction column vectors are:
Using expression
(3.3)
elements of the
H
12
1
' (z
-m -x O
.
-:1.g
e
~ld
the properties of Hadamard products, the
matrix are:
z. h )
-J
41
for every
i, j, g,
1
-x'(z. ez.)
m -p -:1.g
-Jh
and
if
p
i
if
p
j
h,
otherwise
Therefore, the estimable parameters,
the second order model (3< 1) are:
Gf.
==
(1: H~iH12)~
(3.4)
r-l
.r /-l+';:: L:
I
2
r
"I
26
2
1
L:
L:
L
(~iJ')gh
4 :i =1 j=i+l g=l h=l
F
+ 1
l
1+
r
2
2
L:
L:
(-1)
L:
g+l
(T
j=2 g=l h=l
.) "
1
-J
gel
1 p-l 2
2
L:
L.
L:
i=.l g=.l h=l
4
Fp +
(3.5)
If an experimental design allows a minimum bias estimator) there
must exist vectors
a
aJld
b
)
-p
p = 1, ... , r ;
S0
that the
-l(-
expected values of the vector of observed responses
y..
)
fyom the
experimental design are:
* ) = /-l
E(~/;Y:
+-
1
r-l
r
2:
L:
2
L:
2
L:
(T .. ) .
4: i=.l j=i+l
l,J gr.:
g=l h=l
and
F
1
+-
'4
rJ
+
r
L:
1 p-l
2
L.
2
, +1
(_1)[1
4 i=.l g=l
h=l
L.
L.
2
2
L:
L:
-'T'
\ -~j
)
'gil.
j=p+l g=l h=l
From
(3.6), we also have
*)
E(~/;Y:
)I;-
~ I SQ
)
'~l'
,
(3.6)
27
,
*,
)
E~~'.i
+
2
I:
I:
i=1 j =oi +1 g=1
r
J.'-.1
I:
2
I:
t~=l
*
a' (7=:ig
* ) (1'it~J )gh
~)
Z'h
--,J
.. 1 \
If (3.6) and (3.8) are equated, we get the f()}_.'...mfingider~tities:
a'x*
- -0
*
0,
*
=
a'x.
-1
-
*
~'(~ig
(3,3),
Using
~jh)
(i)
for every
ard.
i, j, g
(3.9)
the la~t identity in
o
Lemma
1, ... , r
i
'41
1 ,
f' or every
and
1::..
implies:
and
i
(3.10)
j
3.1
x* must be
If a minimwn bias estimator exists, then the vector
x*
linearly independent of the vectors
*
(x.
-1
~
*
x.)
i
-J
=
1, ..• , r - 1
q
-q
and
j
=
i
1,
=
r
'-0
; and
+ 1) ... , .:. ,
Proof
SUPPO,s8
*
(x.
-1
*
is a li!lear combination of tl"J,e vel:tors
~O
'l(-
._oJ0)
\:) X
i
the there exist c:onstants
)
=
1, .
0
.,
r - 1 ,
j
= i
q
C
q
...
1,
r
r
1'-.l
I:
I: C x + I:
C.. (x."*
q-q
lJ '-1
q=l
i=l j=i+l
*
0
J
a.r:td
r
;
and
r ; Lot all zero such
+ 1,
that~
*
~O
..
x*
-q
~
*\
x.
-J
j
.,
28
Sin28 a
rnic-~imum
bia3 estimator exists, there exist:" a vector
satisfies (3.6).
which
Therefore,
l'
*.
a/x
- -0
a
C a/x
L:
q-
q=l
*q
:.~-l
r
L:
L:
+
*
-'J..
C.. a/ (x.
1J-
i=l j:=i+l
.
l\-
I,;) X. )
-J
(3.11)
The 'va1c:e of the left hand side of (3.11), is -cnt.~ be=~au.se the Yector
a
satisfies (3.9) and the value of the right hand ~~ide of C').11) is ~
beca~3e the vector
a
satisfies both (3.9) and (3,10).
the contradiction that
1 = 0
Therefore,
implies the statement of the lemma is
true.
A si.rni.lar result to Lemma 3.1 can be derived for t!"l.e main effect
x.* ,
column vectors
.•. , r
:= 1, ... , r .
i
-1
If'
, for
p = 1,
"'i '7)
t , we get the identities:
; is equated to ( ",.
b'x
-p-o
*
== 0 ,
b/x*
1 ,
q
1, •.. ,
-p-p
*
b'x == 0
-p-q
*
b' (z.
-p-lg
(:)
*
for
\
~jh)
and
l'
(-.l)g+1/ 4
for
j
p + 1..
b+l
(-1)' /4
for
i
1 ..
,-0
(3. 12)
q:l= p ,
. ....
'
... ,
.r
-
1
p
otherwish .
Again using (.3.3), the 1ast identity in (3.1.1) impUe~,:
b/ (x.*
-p
-l
<:)
x.*)
-J
o
for every
i
and
j
29
Lemma
3.~
If a minimwn bias estimator exists, then the vectors
p
1, •.. , r , are linearly independent of the vectors
q
1, ..• ,
j
::'+1, ... ,1'
and
I'
*
(x~
q t p ; and
-.1.
* , i
ex.)
= 1,
-J
x
*
-p
* '
*
x
-q
~o
... ,
I'
-
1
and
Proof
x*
-p
Asswne
(x.*
and
I'
~
with
is a linear combination of the
* , then there exist cons tants
x.)
(:J
~
q t p
and,
C
,
ij
i = 1, .•. ,
I'
CO'
C
1
j
-
*
vecto:~'s
~O '
*
x ,
-q
q = 1, ..• ,
q'
+ 1, ... ,
c= i
I'
not all zero such that
*
-p
*
C~O
x
I'
+
.E
q=l
qtp
C x* +
q-q
1'-1
I'
~
.E
1=1 j=i+l
Since a minimwn bias estimator exists, there exists a vector
every
p
b'x*
-p-p
For each
that satisfies
I'
C b'x* +
O=-p-')
.E
q=l
qtp
(3.7).
b'x* +
"v q-p-q
b ,for
-p
TIlerefore,
I'
1'-1
.E
L:
i=l j=i+l
-)(-
c.lJ-P
.b ' (x.
-l
~
*,
(3. 14 )
x. )
-.J
p, the value of the left hand ,sidt~ of (;.1)+) is one be-
cause the vector
b
-p
satisfies (3,12) and the -value of the right hand
side of (3.14) is zero because tee vector
and (3.13).
b
-p
Therefore, the contra.diction that
satisfies both (3.12)
1 = 0
implJ es that
the statement of the l",rn.'lla is true.
':'he condi. tion that there be no linear
dependfjncie,~:
involving main
effect column "vectors in thF:; subset of column vectors for the mean,
main effects and two factor interactions from the
FF'D(n)
s:~t is a
30
ne~ess8ory
and sufficient condition for a ::"ractional fac:to;:'ial design
to be of resolution IV (Margolin,
(196980)).
Ma::'golin al.30 shows that
the minimum Gumbel' of points that is needed fa!' a r',,;ol,nion IV design
wi th r-two le-vel factors is
"i'old-uver" design.
If
2r'
and the minima..l de;dgn must be a
is a power of two,
an orthogonal, resolution IV design and if
r
t.b~
~c,
minimal design is
net a 1'0\;,er of two,
the design is a non-orthogonal, resolution IV de;;ign.
Thel'e1'ore, for
designs that allow the existence of minimum bias estimatoI's, the
following theorem can be stated.
T=:eo:,:em
3. 1
Ii' a
frac~tional
factorial design allows the existeu:e of a minimum
bias estimator, then the design must be a resol..,ltiu(l TV d(,;sign.
Prooi'
Since a minimlllIl bias estimator exists, Lermna
3.2
states that
there can be no linear dependencies involving main effect column
vectors in the set of CO.L;;.mn vectors for the mean, ma.in
two 'factor interactions.
:~ffe,--:ts
and
Hence, the fractional. facto::ia1. design is a
resolution IV design.
All example of a fractiona..l fac:torial desigrl treat &.110',"; the
existence of a minimum bias estimator and bas t>'Je m:::'n:'m..un llll.rnbp.r of
points is a
2
4-1
design that is defined by
Since the word length is four and the
Cumb2f'
thf~
-:··{"T~
-'
I
o:!.-' fai~t:;o.t',"
.r
..........
~.- ~
+ ABeD
=
4
is a
power of tvro, this regular fraction is an ol'tht..igc,rl".'_) :,:'eso]x,ticn IV
design.
31
The converse of Theorem 3.1 is not true.
following ten point irregular fraction
5(2 5 -
For example, the
4 ) ~0
a resolution IV
design but does not yield a minimum bias estimate.:'.
Example:
A
B
C
D
E
+1
+1
+1
-1
+1
-1
-1
+1
,
-.L
-1
+1
-1
-1
_ 1
-1
-1
+1
-1
-1
-1
-1
-1
+1
-1
-1
-1
-1
+1
-1
+1
+1
+1
+l.L
+1
-1
+1
+1
+l
+1
+1
-1
+1
+.1.
+1
+1
+1
-1
+1.
However, Theorem 3.2 shows that the converse is true for designs
that are regular fractions.
Theorem 3.2
If a regular fraction is a resolution IV desigE, then the design
allows the existence of a minimum bias estimat.or.
Proof
To prove this theorem the assumpti ons are shown to imply that the
Box-Draper condi tion
in the
IDR
(~.' s:,' ,
H~?r12
==
M~~.l2)
told;:;,
has a word length of four or more .lette!'s, the mean and
main effect column vectors can only be aliased
I'll th
representing three factor or higher interac:ti ow:,
vectors in
Since each word
FE'D(n)
CCJ:Ulllil
vectors
Hence, the column
for the mea.'l, the main. effe::ts .• and tt:e two
32
facto:::' inter·'1.ct.ions are all orthogonal by coustrJ.::ticc..:::'h.8refo.re,
the matrix
M
is
MIl
.n
1
=.:...
n1 \~O'
f ..""
*
elements of
n --p
",.
~.l'
.•. ,
G
••
*
z
. ---:r'-1,2
,
(3.3), the
M
are:
12
* (z *
n -0
-1 X I
ar::.::l the
U3ing the Pl'op8:::·tie~o of the Hadamard f.cocl"<;c;cs aLd
c:J z 2)'
-r
1 Xl
.:::
*
XI X*
["11
~g
* (z.
*
-lg
1
(~
1+
i, ,j) g,
for every
*
~ z., )
-In
if
p
i
if
p
.i
h ,
aDd
otherwise .
By comparing each element of
see that
H
12
H
= M
•
12
12
(3.4)
vI~ tl!. C2ach element of
Therefore, the condition
is satisfied and so by Corollary 2- 1 a minimum bias
estimator exists.
Because regt.;.lar fractions of resolution IY sa.tisfy the BoxDraper condition, the minimum bias estimator becomi::E t."le ordinary
least squares estimator.
Therefore, the minimumlJias
te~m
minimized by using regular fractio.r.s that are res ,;J.'.lti or,
Theorem
3.2 pro-ddes an easy method for
fraction allows the
ex~stence
s:r.owin.gwhF~.n
of a minimmn bia.s E:'stimator.
length of the shorb.,:st wm'd in an
IDR
1/2, 1/4,
and
is
designs.
a regular
Since the
is 2Clualto t!J'j resolution of
a regular fraction, the maximum aV'erage word h;QgclJ. in an
be at least foal'.
1"';
B
IDR
must
For examplp, the maxj.mum a\!t;rageHord lengths for
1/8 l'ractio:~s
of
r
2
,
r:=
3,
h,
5, 6,1'.,
are listed in
33
Table
3.1.
From Table
3,1, we see that for seven
there are always 1/2, l/~t and
of a rninimmn bias estimator
1/8 :::'ractions
be~au8e
more factors
tccat a1..lCiw the: existence
maxim~un
the
O.t:'
Ci."reral!;e word length is
four or greater.
Table
3.1
Maximum a'TGrage word .length forregn1aI' £'I'actions
2r - s
No. of
Design
Points
8
-
1
I M~~c;~~~
I
---r--"7--r
r
:F'actor
2
Fraction
No. of
IDE
Words
cd' Wo;:'j
LCI.LgL11S
--t--1
V-8
1 -- · - - t - - - - .
3
4
6
"7
WOl'd
Length
1;'
1. 71
1/4
2
3
C
2.00
1/2
4
1
:.
3·00
1/8
2
1/4
4
8
'7
3
1/2
5
r2 s-l/ 2 s -1
Max Ave.
2.29
2.67
8
1
4.00
1/8
1/4
~.
'I'
2.86
8
3
3·33
1/2
16
1
1/8
8
'7
1/4
16
3
.!~,-
1/2
32
1
'_.J
1/8
1/4
16
32
7
3
3·43
)+.00
., f)
?
6.00
---~)81
4.00
!
4.67
I
,-"
~
n '-- •K(2 r-13)
1/2
64
---J~
' t scan
pOln
""
1
-Ji;. _ _._.
"I
~
I~L
7·00.
b e cons tTU: t ed.
irregular fraction can be constructed by combilLjn c./
t~,:',~e
(eg~J.Lu'
__'_
34
designs that are defined by the same
+ BCD
=0
then the
+ ABC.
IDE<
If the
I = + AD
!'a..'llily;
for each regular fraction is:
BCD
+ A.5C
AD
BCD
+ ABC
-AD
+ BCD
- ABC
Fraction 1:
I
+AD
Fraction 2:
I
Fraction 3:
I
3 (2 4-2)
IDR
+
irr8gular fraction is represented by the following:
IDR
Fraction
1 2 3
AD
+
BCD
AB,'-'
+
ri
+
+
+
For this example, four different sets of three IDE I B could ha.ve been
chosen to construct the irregular fraction.
In geEe::c'al, 'A"hich of the
s
2
(K) possible regular fractions from the same IDR farniJ-y are used to
construct an irregular fraction, does not affect the existence or nonexistence of a minim-LUn bias estimator', the value of tLe varia..'1ce term
v,
or the value of the bias term
B.
However, in order to have a consistent method of constructing
j
rregular fractions, Addelman I s method (1961) L;
different regular fractions from aXl
fractions that are studied in this
IDE
family.
dissertation
so Addelmar, I s method a3signs an odd DlJ.rnber of tLe
equal to
-1
for
IDR
USeJi
to choose the
."0::::'
K i:3
th'c irregular
Wi
odd number,
K a.lias scalars
words of odd length and an 8ven fll.L!11ber of the
K alias scalar's equal to
-1
for
IDR
woris of
e'n~E l'~ngtr..
35
th
F'o:':' the t - regalar fraction
construct an irregular fraction)
be the
CO"~i.mm
·
ac t lon
0
t == 1, •.. , K, t'"lat
let
X
-
"*"*
x'
"-J.. t
ot
J
used to
JoS
(,z,.*
ard
--lg
* )t
t) z.~
-J Ll
+h
vectors for the mean, the i ~ fao:.:to:':' and the inter-
tt
f'
.th
th
t
th
d h.:--the l
- and j-- lac o1's at the g - ar.'
le-,;els,
D
1'espectively.
"*
Xl
Then tte column vectors in the
m~d
X*
2
matrices
of an irregular fraction are
Using the vectors in (3.15), the
... ,
"*.
(z
-lg
M
ll
and
M
12
matri ces for an
irregular fraction can be expressed as the sum of the
M
llt
and
M
12t
matrices
M
ll
K
1
L:
n t==l
**
XitXl....
.
\J
1
K
L:
K t==l
M
llt
)
(3.16 )
M
12
1
n
K
**
.1
K
L:
M
X.hX2t == K t==l 12t
t==l
I:
Of importance is the class of' irregular frac:ticDs w'::.icn is
constructed by combining resolution IV, regular f'2."3.ctions.
Theorem
If
3.3
all
irregUlar fraction is constructed by ci)mtinicg regular
fractions of resolution IV, then the design allow;3 tte existence of a
minimum bias estimator.
36
Proof
As in the proof 0:::' Ti:'20;:-em 3·2, the
and the matrix
M
12t
*
*
n --pt -lg
1
1+
olE·
6
~jh)t
for every
if
p
i
r'+l
(-It" /4
'f'
l_
p
j
Consequently, the matrices M
ll
K
1
are M
I
H
= I
ll = K Z
ll
:P
t=l P
H~irtl2
i, j., g
r (_.1.)g+l/4
o
= Hl2 , so
m8.trix i
e•
M
llt
I
P
has tt.e elements
-;t
K)
( ~ x' (z.
6 ~.~)+
n '_·Ot -lg
Ju"
K
* (z.*
(-)x!
M
llt
and
~.,
otbcrwise .
,
and
M
l2
and
M
12
for the l.r-regular fraction
K
K
1
1
.- L: H
L: M
j:
K
rt
12t
K
t=l
t=l
-1
Note that H
it3 ttle
M M
H
matrix as
12t
12
lI 12
defined in (3.4) for the tth regular fraction.
Since the Box-Draper
condition is satisfied, Corollary 2.1 guarantees the existence of a
minimum bias estimator.
Furthermore, the minimum bias esbimator be-
comes the ordinary least squares estimator.
For seven or more factors there always exist reso.liJ.tio.'1 TJ,
regu.lar fractions.
and
Therefol'C 'I'.c.eroem 3.3 guarantees
that we can always combine these regular fractions to form
and
r
K(2 - 3 )
r 2
K(2 - )
irregular fractions which allo'w mLd.mLLm bia,'; estimators.
Regular fractions which have resolution .h;ss than IV may also be
combined to form irregular fractions that a 1.10 if mj'liml.;m bias estimators.
To demOnSijrate this fact, the estimabiE ty of G[).
using a
corrrputt~r
progr'am for
a.nd
wa."
exami.Ylr:::d
by
37
for
r"
3"
L~, ~)::t,ld
6
systema.tic:all,y acc'Jrding to their word
each
g~nl~rator
r-'::>
3 (2- -)
and
h~('gtrl
has ire common wi ttl the otter
t.o\tr many let tel's
geu~:'ato.r,':"
For the
il':>:'egu.lar f'ract::'oJls, there ace no des_Lgrl'; that al.low a
miniml1ll1 bia,3 estimator.if
V"_:<;
3 , and no "5 (2~ -')
r =
exist t.Y18.t allow a minim1.:Lffi bias estimator for
i'~':-:egular
fractions
r:s;
irregLl.lac fractions vrbieh allow the exis':-ence cf a llL"lciml-UTI b.i as
3.2
estimator are listed in Table
. 1 e..).
-J:
3.:tor
'
Tab
...-"\ (21'-2)
in Tables 3,2 and
cz
for
j
(Jj-;"
L
)
designs for
3.3 is classifif;d
the lengt,hs of the words
i[l
Each design
eN
by an N-:~uplr~
the defining
farni,Ly.
IDE
l'o:>:' ea,;h
design tLat allows the existence of a minimum bia:c; ,,=,,;timato:>:', the
values of the variance term
for the) minimum bia..s estimator and the
V
th<:~ tab.L~s.
least squares estimator axe listed in
two variance terms are equal, then t.he
minim~l.m
least f;qua.o:'es estimator are eqUivalent.
Ii'
thf~
values of the
bias e,:.T,imato,,: and the
TIle only
d'j"igr~
i.e.-. the two
tables where the values of the variance term;.; are 'CCClUs.l 1 s for the
(.
n
3(2)-L)
ct~sig~
(4,4)4)
fO"t;;.~,
defini.ng
IDR
satisfil:~S
the J::lox-Dl'aper cowl:' tioll.
family iE;
term
p
..'
soby
Th'':~()T<'m
hi the quadrati c form
3.3.
u
.__
l.J--
ttL, :iesign also
1...
.)
Ivhere
w.:~/·
Q is a LCH-c,_egati -Ie definite matrix
least ;3cpares estime.tor
Q/m
is
t; k18.t
dei::',le:~
is r;a,Ll~_d:-;\.e "b:"9.,; matrix l' •
in
38
Tab1~
il'.::'egu1ar fractl ons
;.2
r
Wactor
,
6
No. Of
Design
Feints
Design
Min.
Generators
D:!:' The rTir:.'
Bias
}i1amil.y
r
Min I~ias ~ Least Sq.
Va:'ia.J.'l.ce I
_~.J"",,,L
T",rm
Variance
Term
24
(1,1,1,2,2,2,) )
No
A,B .. C
24
(1,1,2,2,),),4)
No
A,B,CD
24
(.1,1,2,3,4,4, 5 )
No
A,B,CDE
24
(1,1,2,4,5,5,6 )
No
A, B j CD.EJ?
24
(1,2,2,2,3,3,3)
No
A,BC,CD
24
(1,2,2,3,3,4,5)
No
A,BC,DE
24
(1,2,3,3,3,4,4)
No
A,BC, Cf'E
24
(1,2,3,3,4,5,6)
No
A, Bi.;, DE::?
24
(1,2,3,4,4,5,5)
No
A, ~C) CD,EJT
24
(1,3,3,4,4,4,5)
No
A,B:;D,DE:B'
24
(2,2,2,2,2,2.,4)
No
P.3, EC,":D
24
2)+
(2,2,2,2,4,4,4)
No
AE .. BC.,DE
(2,2,2,3,3,4,4)
No
AB,BC, CIJE
24
(2,2,2,3,5,5,5)
No
AB, Be, D?~I<'
24
(2,2,2,4,4,4,6)
No
All, Be, CDEI;'
24
(2,2,2,4,4,4,6)
No
Jill) CIJ , :r~lil
24
(2,2,3,3,3 .. 3,4 )
No
AB) CJ,.ACE
24
(2,2,3,3,4,5,5)
Yes
!ill, (~D, AE.B'
9- '{50
7·500
24
(2,2,4,4,4,4.,4)
Yes
AB,C:J,ACE2"
8·000
7·500
24
(2,3,3,3,),4,6)
Yes
AJ3.., BCD,~ ..A~E51
10, .::"25
7·250
~ (2,3,3,3,4,4,5) Yes
24
(3,3,3,3,4,4,4) , No
AB,CDE,AC2"
9.3'15
7·250
'-
,
I
I
A"'-B"V} CDh' AD'"
---L,J.
~)
"
_
0,
I
39
Table
factor
4
5
6
3.3
3(2r - 2 )
No. Of
Design
Points
i::-regular fractions fo.!'
Generato.ts
Of' 'I'Le IDR
Design
Min.
Bias
12
12
12
12
12
12
(1,1,2)
(1,2,3)
(1,3,4)
(")
'))
'-J c.;;,
(2,2,4)
(2,3,3)
No
No
Yes
No
Yes
Yes
A, B
A, Be
A.• BCD
AD, Be
AB, CD
24
24
24
24
24
24
24
24
24
(1,1,2)
(1,2,3)
(1,3,4)
(1,4,5)
(2,2,2)
(2,2,4)
(2,3,3)
(2,3,5)
(2,4,4)
(3,3,4)
48
48
48
48
48
48
48
48
48
48
48
48
48
48
48
48
(1,1,2)
(1,2,3 )
(1,3,4)
(1,4,5)
(1,),6)
(2,2,2)
(2,2,4)
(2,3,3 )
(2,3,5)
(2,4,4)
(2,4,6)
(2,5,5)
(3,3,4)
(3,3,6)
(3,4,5)
(4,4,4)
24
\~J
')
.r
Family
4.,
5,
and
Mi.YJ. Bias
Val'iar,ce
Term
6
Least Sq.
Variance
Term
6,'r;0
5·250
Pill, BCD
6.000
'i . .l25
)·500
5·250
No
No
Yes
Yes
No
Yes
Yes
Yes
Yes
Yes
A..
A,
A,
A,
B
Be
BCD
BODE
AB, BC
Pill, CD
Pill, BCD
AB, CDE
AD, BCDE
ABC., CDE
'7 0875
(:·750
6.250
6.250
'70000
8. .1.25
6·750
6.375
7·000
6.500
6.250
6.250
6.250
6.000
No
No
Yes
Yes
Yes
No
Yes
Yes
Yes
Yes
Yes
Yes
Yes
Yes
Yes
Yes
A,
A,
A,
A,
A,
9·000
7·875
7·875
7·250
7·250
7·250
8.000
7·500
'j.250
7·250
7·250
7·250
7·250
7·000
7·000
7·000
7·000
B
BC
BCD
BCDE
BCDEF
AB, BC
AB, CD
Pill, BCD
AB, CDE
Pill, BeDE:
Pill, CDE:B'
!ill, BeDEY
ABC, CDE
ABC, DEE'
ABC, CD:E~Fi
ABCD, CDEF
-
9·1~25
7',750
'r· 750
7·375
,_.,
'':''17'-
r·J i;)
8.000
7·7:-,0
'i-·37~)
',·.000
~---
40
ir: Tables 3.2 and 3.3 which yield a minimum
b:~a.s
matrices for the minimum bias estimatm:' are
g:i.VCL
fostimator, the bias
iL '.CaLIF::
6.1
(in
the Apperdix) and fer tee lea2t squares estimatGT in Ta:C.;le 6.2 (in
the Appendix).
To show the advantage of using a minimum bias es timatoT rather
than the least squares estimator, the SMSE ' .s for
t~_.e
minimum bias
estimator and the least squares estimator are compaxed fO!' two
examples.
:=
SMSE
Let
J
IB
:=
SM.SE
for the leas t
for the minimum bias estimator.
+1
squarec~
J
MB
ab:= the -rector of para-
-
-1
-1
+1
-.1..
+1
+1
-1
-1
+1
+1
-1
+1
-1
-1
+1
1
'2stimator,
A
for the following two examples.
Example
3.1
For the i.rregu.lar
3(2 5 - 2 ) fraction
(::2,4,4)
t!:.e SM.S:E: I
S
are:
')
(3/2o-'-)ab/Aa-t~ - 0.12) .
least squares estimator rather than the minimum b11:1.8 e:.:timatol'.
By
41
Vl~r,3'..:S t~le
grapLing
ab ' A.ab / 'T
2
(Figure 3..1) ,
nocl-r:.egati ve quad:."atic f,::->rm
see that the least
OIle caD
estimator shoald be used only if
the sampling variance
sqL:9.l'PS
hac a va.l:.-:e less than 1/12
ab' Aab
E"ven if the minimclIll
used in this si tuat::,ol1, the maximum increase in
bl&.;~
estimator is
is
JM..B
0.125., which is approximately 2% of the variance term V
only
However, if the least sq'u.ares estimator lS always used, tte increase
in
,J IS
over
J
MB
can be quite large and in .t'ae:t, is :wt bounded.
Example 3.2
'
1 ar
F OT th e lrregu.
_'3 ('j6-2)
c.
.
fraction
(2,5,5) ), the SMSEls are:
J
L8
... ,J
ME
=
(3/(i)ab / A.ab -
-
-
o. ~'-')'7 c;
j
•
Fur this example, the graph of
(Figure 3.2) is negative if the value of
':)
(J-
/8 ,
and the maxirm:un increase in
ever, the slope of
J
J
LS
MB
,Jt.1B
J[S - J
MB
J
ab Aa'.:'
'
MB
in Example 3.2 is
is }.".;s,s than
is
t-wi~:e
in Example 3. 1 which mea-rts that the increase
00375.
How-
th.e slope of
over
,lD
is tTNice as rapid as in Example ).1
All of the designs that allow a minim-em bias
est~mator
in
Chapter 3 are for a full second order model (3 . .1-) 1.o<:'.r,g approximated
by a first order function (3.1).
If addi tior:a.l k:.~()iJ'.1.2dg~ is known
42
5
-1
345
5
J
LS - J MB for the 3(2 -2) design (2,4,4)
2
Figure 3,1
10
C"
)
--~~--....,r----,-----r----r----.,..-
1
Figure 3.2
2
3
4
r.:;
"
_/ 2
5.1~/Aah.
(T
43
som~
about
of the two-factor interaction
term~;
so tb'2Y can be
elimirrated from the true model (3.2), tben desig:13 trlat allow
minimum bias estimator.::: may exist where they did not'?xist using the
full second order model.
For example, SUpPOS(--;
expc;riment is
art
conducted using three chemicals (:!:..~., a, b, anJ, c) each at two
concentrations and it is known that the cbemical fie" can not react
The true model is assumed to be
wi th chemicals flat! B.nd t!b t!.
cpo Ok = 11 + a. + b
lJ
l
0
J
+ c
k
+ (at.)
0
0
lJ
i, ,i
for
and.
k
=
1.< :~ , but the
chemist wants to use the simpler approximating furlction
basing the estimators on a cJix point design.
"7_,..)
For the full second order model there are no
3 (2..-1 '-)
dc:signs that
allow a minimum bias estimator, but by eliminating two of the twofactor interactions minimum bias estimators do exis!'
the following
c'Ol.'
irreg'L11ar fractions:
Design (1,1,2)
F'raction
:z
1 2 ./
IDR
A
+
C
+
AC
+
+
+
Design (1,2,3)
Design (2,2,2)
~'ra'.::tion
IDE<.
1
A
+
BC
+
ABC
+
2
3
IDRi
+
AS
1/]
.. ,
N'
+
The 8M..SE for these three designs are listed in
of
- T
J
'L8
"'MB
8MSE I
A
S
and
given in FigJ.re 3.3 indicate bow
Note that if tee
AB
~J'.le
j
u::~i!1g
estimation is much more readily apparent for the
than for either of the other two designs.
+
+
The graphs
dlf:rer'enc,,"s in
nterac:tion between
interaction has
being non-:legigible, then the advantage of
+
I
Tab.l(~ =~.L.l.
increase vll th the increasing size of the
B.
.
}l'raction
.l 2 3
m~J.::;h ';}1a.Dce of
minimum bietS
'-, 2 ., .0;)
\.:-J
. '.
-iesign
44
7-
Ta1:' 1e
Design
'")
The ~;MGE of 3(2~>-t:-) desigIlc, !'GJ' t,!::,,, tr'--,c~ modR1:
'Pijk := IJ. + 8.i + b j + c k + (arJ)ij
3. 4
I - Estimator
---I_._------
(1: 2,3)
Least Sq'...:al'cs
=r
tT
T
J
Minimtilll Bias
.J
Least Squares
Jr
MiLimu..rn Bias
d
Least Squa:r-es
()..".
r)
(1,1,2)
(2,2,2)
(lO/;'~'- )a.b' A~E.. +
,
"
Because Figure
J
2
(, l.'~/3'J )~:::_'Aab + 5.500
T
Minimum Bias
;"-';/:-r2\,-,1-'!~c>1-, + c; 62'5
\_'
MB
J
3.3 doeE: not give an indication
levels of the SMSE' s: the indi\ridual values of
each of the three desigLs in :Figure
3.4.
5· 500
/,::::.:'
.'::::.::.
/'
tb.e absolute
(c,"
8.r-,:' plot ted for
.j
Adopting tt~f: SM,.':;,l;~ criterion
to choose the "best" of tee three designs, for m:l.r:irrr'Am bias
estimation, only the variance term
r:eed be considered since all
term
B
(~.~., the
J
(1,2,3 )
the
tb!~ :;8Jll'J
designs yie.:"d
liLes,have identical ;31()['t:-~s)o
MR
smallest variance term
threi'~
v
V
for the
JMB's
is
J
ME
's
bia,s
J,;ecause the
,-
'.
it is tte oI,tima1 choL~ewhich is:"ont.n:tey tc th,,, ,;i tuation
where ordinary least squares estimation
TLe
used.
1,:;
sM-,l;r~
J
f"
for
.l.DJ
design (1,2,3) has a sma.Ue.:>::> val'lance term
·
(--.1, 1 .' 2' ,) anu,--J
d eSlgns
estimation.
(1,2,3)
(2, 2) 2).
th,a..c.:.
I
U T ':':1
.ut-!
for the
(nc..,c.,
'1 'J'I
,- I'
However, t~::.e bias term
is l!:U'ger th8.!~ the bias term
(':::'2':')
j3
B
'l~r~,; ;l.o~;r:;~\ :!.'ur' design
1'0:'.'
In the preSe!lCe of a nor:-nt~gligib,l,e
tv"
:.,;:~t':'~'8,ction,
the
10 _
5
f
0
21- - -.----
()" or: \.2...z:,Z;
(',.1- c;.J. - -
____
De~ge:.. ~;;
~='..,,...::::.;,...---~----.,...----,.-----.,..__
3.3
J L8 - J
~""k
lJ
MB
=~
of
ab I Alib/ (f<::
5
2
Figure
....
f")
3 (23 - 2 ) designs for the true model:
+ a. + b. + c k + (ab) ..
l
JlJ
46
30
25
20
15
.
~-~"
-
.............
1
Figc.re 3.4
3
2
3-2 )
The SMSE's of 3(2
./ v2
I "",,",.......j,.j
/\ !"l h
designs fo!:' t:-lf; tr~le model:
-
IT
larger bias term (1.35 times as large) wo~ld make the design (1,2,3)
less appealing than either of the other two designs when ordinary
least squares estimation is used.
48
4.
SUMMARY AND CONCLUSIONS
'Ehe Karson, Manson and Hader method of mininr,.illl bias estimati on
has been applied to the classical experimental design model with
factors having a finite number of levels.
:For thh: situation, the
minimum bias estimator gives the minimum bias term
which, in general, is smaller than the bias term
squares estimator.
The variance term
V
B
B
for the
SMSE
using the least
for the miniffil.illl bias
estimator is greater or equal to the variance term
V
£'01'
the least
squares estimator, and equality occurs if and only if the two
estimators are equal.
The minimum bias estimator does become the
least squares estimator on the subclass of designs satisfying the BoxDraper condition which are contained in the class of all designs
allowing the existence of a minimum bias estimator.
For
2
l'
factorial experiments, fractional factorial designs
were investigated when a second order model was approximated by a
first order function.
A sufficient condition for designs to allow the
existence of a minimum bias estimator is that the fractional factorial
design must be of resolution N.
1"01'
regUlar fractions,
rc~solution
IV is also a necessary condition for the design to satisfy
t~e
Box-
Draper condition and therefore, allow a minim1.J111 bias e,stimator.
IrregUlar fractions that are constructed by combini.ng resolution IV,
regular fractions also satisfy the Box-Draper cor.tii tion.
However,
examples of irregular fre.ctions that are constru.cted from regular
fractions of resolution less than IV and allow tbe existence of a
miniml.un bias estimator are given for
6
3(2 - 3 )
ir.l~(:;gular
f:i.'actions
50
Adde.lma.'1 .• S.
1961. I:r.~'eg"21ar i'ractiO!IS of the ;:/- :f'a.ct,;rial
experiments. 'l'edl1lOmetrics 3 :479-49::.
Adde.lmalC..• S. 1972. R.p.(;2r~t d.feNelopments in the d8sign of factorial
experiments. Jour. Amer. Stat. Assoc. 6 ~ .1.C3-.11J_.
1
{
Box, G.E.P, and .lLR. Dl'a:r:e::-.
1959· A basis for
resp'.)l1se surface design.. Jour. Amer. S'tat.
Lt.c;'t~lect.ion
ltS,sl,JC.
of a
:-;4-:622-654.
Cote, 0., A. R. Manson, ar.:.d R. J. Hader. 1973, Mi:-:imc;.m~ias
a.ppI'oximation of a ger"eral regression model. cTo"r. Arne!'. Stat.
Assoc. 68:633-638.
Graybill, F. A.
Volume I.
1961. A'1 IEtroduction to Li:J.:.~ al' Statistical Models,
McGraw-Bill Rook Company, Inc." Nefti' York, New York.
Graybi1.l, F. A. 1969. IntrGdnction to MatrL::es Wi ttl Applications in
Stati.stics. Wadsvrorb. Publishing Company, BelmoEt., California.
<John, P. W.M. 1971. Statistical DesigrL ar:.:i Analysi:; of Experiments.
T~le Macmillan Coml1a!ty, New York, New York.
Karson, M.J., A.R. Manson, and R.J. Hader'.
1969. MinimmTl bias
estimatior. and experimental design for response 8u!'faces.
Technometrics 11:1.~61-4'T5.
Margolin, B. H.
196913.. Results on factorial designs of re,'301ution IV
r
for tte 2'" and 2·'13 m series.
TechnomEc;tric;s 11:431- Lr l.f4.
Margolil'.} B. H.
1969b. Resol'J.tion. IV fac:tional f'a r ;toX'ial de,signs.
~rour. Royal stat. Soc. B, 31: 514-523.
~e.lsoc.
.1.97.1. Ivli.':"im·JID bias
estimation in the mix:ti.:cre problem. Mimi';ogl'a:ph S\~.ri~;.s no. '757,
Insti tute of Statistics, No:.'tt Caroli.c.a Seat.'~ Ux::L-;cc!'si toy, Raleigh,
N(Jrth l;arolina.
PakLl.• G. A..} A. R. Ma'1son, and 1. A.
Parist., R. G. 1969. Mirdmlill bias approximation of models by polynomi.als of low ordero Mimeograph Seri.es n.n, r:,27 , I:~sti tl~.te of
Statlstlcs, No::·t:~; Ca:"01ina state ULL-(;x'sity, Ea.l:::ig:v . , JlJ0rl.:;h
Carolina.
Rao, C.:'\.
<for~l
.197.L Ger~",ra.liz,-::d Inv·:;x'se of Mat:::':5_c(~~ eLd !::;::: Arplic:ations.
Wiley and Sons) 1::.:. 1 :.:- New Yo!'k.. l'kv,' York.
Rhode, C.A. 1965. Go.ne:t'aliz,ed irNerse:-~ of' pe.rt.i t:inn<i II18.trices.
Jour. Soc • .Indust. App:L MatL. 13 :1033-.LC)3:.
51
Searle, S. R.
19'{.l.Linear Models.
New York, New York.
John Wiley 8.(,-d Sons, Inc.,
Webb, S.R. 1968. Non-orthogonal designs of c~1en reso]:u.tion.
Technometrics 10:291-299.
53
The regional moments
are defined by
H
.L
•
=
•
lJ
... ,
1:
m
X'X
•
E..
lJ
J
x .
= -m
and
X.
J
m
= [x~.
-..LJ
H. .
l.J
for
can be shown
X,. x ' .
-m.-.h J
, ... ,
ar..d
= 1, 2
i
X.1.
and
X' . ] '
be equivalent to
tCj
-
X.
aY'C\
J
= 1, 2 , that
~
X~
l
.-
[~.li
the matrix product
,
-mJ·
j
,
X!X.
l
J
Therefore, the regional moments are
X!X.
l
L:
l.J
h=l
Since the matrices
••
l ,J
-IlL.. .
H..
m
1
J
1:m X!X.
1. ,J
The variance term
equal to
V
=
V in (2.19),
n trace (Hl1TIT).
I
"i.l~'~ll'
is
Because
'
.. ,
~ilT 'T~ml
I
Xl'I"TXf.
L~'r'~ll'
-1
(J
••
:'Tx
,
-ml
:.:.ml
x'
J
m
the trace (X1T/TXf.)
V = ~ Tr(X1T/TX1)
6.3
is
Tr(X1T'TXi)
=
D
~ Tr(XiX1T'T) , or
L: x! 1T'Tx. 1
i=l-l
V
=
Hence,
-1..
n 'l'r (H.llT'T) .
Theorem 2.2
The mir.immn bias condition (1. e.)
Q,~
estirr1'3.ble) bolds if and
only if the following two conditions hold:
2.
W~~.i.ere
is
Pr-oof
Let
Q
= Xi**
Xl
B- XI *X*
2 ·2
Xli*X*
0
.
s.nd
D
t...
By Rhode theorem (1905),
Q-
(XI **X ) (XI *"*
X )
C+Q~ CD -~' Q- ~-Q- CD-~
J
-D C/Q C+D B
then.
+
(I:H~~12)Q-Q =
(H~~H12
if
- Q-C)D-D].
(1:H~~12)Q-Q =
[Q-Q + (Q-C -
H~~12)D-C/(Q-Q
The minimum bias
condi+~icn
(I:H l1H.l 2 ) , or if and only if
- Q-C)D-C' (I - Q-Q)
- I):Q-C
tolds if and only
Q-Q +
(H~.~12
= I , and Q-C + (Hii:r12 - Q-:-;)n-D= H~.~12'
Or,
if and only if, the following two condi tions hoU;
6.4
Theorem 2.3
If the minimum bias estimator
T' ""
(xtx:)-~t
+
(H~~12
-
£1 "" 'I'll.* exists} then
M~~12)D-X:/\F w~ler";
*( * -,,")-1 __ *,
F = ( I - XI,Xi Xl -Xl)' and
D = X
* "*.
2 l"X 2
Proof
Sincr; the minimum bias estimator eX-lst,
singular matrix.
If'
B =X
then the valli':' of (X I *X*)'-
D.
**
2 X2
'
C = X{*X*
2
**
Qc X~ Xl
ac:.d
can be expressed in
b
a non-
1J "" .B - C/Q _I·-C ,
t(~Y'ms
of
Q,B,C, and
'55
-1 ('T)
1 1-l(- - QQ'
J~ - C'Q - A'
T'
J'
*.
1 v~ - -~
-'-("i
'I'
- D_c' 'Q -',~, * + .D-X' *
.L
'r)
T'
For the bias matrices in Tables 6.1 and 6.2, the fo11m-.ring
notation is used.
a matrix with the submatri::::es
diagonal and the remaining
('J"lJ •••
f~l.emeet'3
Matrices:
+1
-1
-1
+1
-1
+1
+1
-1
-1
+1
+1
-1
+1
-1
-1
+1
A
22
B1
2
9
-6
r
-Ci
+5
-6
+6
+5
_
c:-~
"
-6
+5
+5
-',
,
+6
-5
-)
+5
,:)
B2
-6
... l:~
r
-C)
-t-5
7)
-~)
-6
+'j
'+5
.....,..; t
+6
-5
-~i
FJ
'-
9
ttlI
36
I
J
GN
en the
are zero.
B3
=
-2
2
C =9
Table 6.1
Design
38
-6
-6
+6
-6
+0::.J
+5
-5
-2
B4
9
Constants:
56
a
9
/'
~l
24
-6
-0
-6
+5
+5
-:)
-6
+5
+5
-5
-6
+5
+5
-5
+6
-5 -5
+5
+6
-5
-5
+5
-1
+1
+1
-1
+3
_"'.
-"
-3
-I-_'Z
0
0
0
0
-1
+1
+1
-1
0
0
0
0
-1
+1
+1
-i.
0
0
0
0
+1
-1
-1
+1
b
= '9
d =
'2 '
10
= -,
9
1
1
D =9
'
c =
~
2
,
9
and
'/
e
1
2
Bias matrices Q for the minimum bias estimator
Minimum Bias Matrix
3(24-2)
DIAG(A,A,A,A,A,A)
3(2 5- 2 )
2 DIAG(A,A,AJA,A,A,A,A,A,A)
3 (26 - 2 )
3 (26 - 3 )
4 DIAG(A,A,A,A,AJA,A,A,A,A,A,A,A,A,A)
4 DIAG(A, A, A, A, A., A, A, A, A, A, A, A, A, A, A)
57
Ta1)le 6.2
Bias matr:Lces Q for the least squar\"Os H'c,timator (the word
lengths of the IDR which identify th,e, design.'?, are listed
in parerltheses under each matrix)
3(24-2) Designs
bA
aA
aA
bA
aA
bA
A
aA 'bA
A
aA
A
aA
bA
LA
t
aA
(2,2,4)
(1,3,4)
cA
A
A
aA
bA
sA
bA aA
bA
bA
aA
aA
LA
aA
(2'\ ':1;)
-J". , . /
3(2 5- 2 ) Designs
(1,4,5) - 2 DIAG(aA,aA,aA,aA,A,A,A,A,A,A)
(2,3,5) - 2 DIAG(aA,A,A,A,A,A,A,aA,aA,aA)
(2,4,4) - 2 DIAG(aA,A,A,A,A,A,A,A,A,A)
A
bA
aA
LA
aA
A
A
aA
bA
2
bA
aA
bA
aA
aA
bA
,
A
2
A
aA
A
bA
A
LA
aA
a.A
A
Ii.
A
A
(1,3,4)
(2,2)4)
58
(Continued)
aA
aA
aA
bA
aA.
bA
A
bA
A
2
bA
aA
A
aA.
LA
aA
, 2
aA.
A
A
A
cA
aA
A
(2,3,3)
aA.
A
bA
(3,.3,5)
3(26-2) Designs
(1,4,5) - 4 DIAG(aA,aA.,aA.,aA,aA.,A,A,A,A,A,A,AJA,A,A)
(1,5,6) - 4 DIAG(aA,aA.,aA,aA,aA.,A,A,AJA,A,A,A,A,A,A)
(2,3,5) - 4 DIAG(aA,A,A,A,A,A,A,A,A,aA,aA.,A,aP.,A,A)
(2,4,4) - 4 DIAG(aA,A,A,A,A,A,A,A,A,aA,aA,A,.aA,A,A)
(2,4,6) - 4 DIAG(aA,A,A,A,A,A,A,A,A,A,A,A,A,A,A)
(2,5,5) - 4 DIAG(aA,A,A,A,A,A,A,A,A,A,A,A,A,A,A)
(3,3,6) - 4 DIAG(aA.,aA,A,A,A,aA,A,A,A,A,A,A,aA,aP_,aA)
(3,4,5) - 4 DIAG(aA.,aA,A,A,A,aA,A,A,A,A,A,A,A,A,A)
(4,4,4) - 4 DIAG(A,A,A,A,A,A,A,A,A,A,A,A,A,A,A)
aA
59
Table 6.2
(Continued)
bA
aA
bA
aA
bA
aA
A
aA
A
aA
bA
bA
aA
bA
bA
aA
A
4
aA
aA
A
, I!-
A
A
bA
cA
aA
A
A
A
A
A
A
A
A
A
A
(1,3,4)
laA
(2,3,3)
bA
bA
II
A
A
II
A
!
A
aA
A
A
A
4
l
, 4
A
A
A
A
aA
aA
aA
A
bA
aA
60
6.2
Table
(Continued)
3(26 - 3 ) Designs
rB1A C
C C D
C
B2
dC
A
A
A
C'
C'
bA
aA
A
bA
aA
A
A
A
A
4
c'
C'
D'
-
D
,
aA
A
4
A
A
aA
D'
aA
aA
A
A
A
A
A
A
A
A
dC'
cA
(2,2,3,3,4,5,5)
A
(2,2,4,4,4,4,4
61
Table 6.2
(Continued )
-c
-C'aA
bA
bA
aA
bA
bA
bA
3C'
bA
C'
l
C C dC
A
A
bA
aA
Ci
B4
bA
aA
3C'
-------
dC
bA
aA
4
3C C dC
3C C
B3
bA
aA
-C'bA
aA
bA
aA
A
aA
,
aA
A
4
bA
aA
eA
dC'
C'
bA
aA
C'
A
aA
aA
dC'
A
eA
bA
A
A
dC'
A
eA
(2,3,3,3,3,4,6)
aA
A
(':)3"'-"'44"')
'-,
J-/','..-/'
" .".,J
© Copyright 2026 Paperzz