•
•
•
PROTECTING MAIN EFFECT M)IlELS
AGAINST iWO FACTOR INTERACTION BIAS
IN FRACTIONS OF 2 r FACTORIAL DESIGNS
by
CHARLES KENNETH RAYNE
Institute of Statistics
Mimeograph Series No. 919
Raleigh - March 1974
•
•
iv
TABLE OF CONTENTS
Page
. .
1-
INTRODUCTIa."1 .
2.
DERIVATION OF THE ESTIMATOR
2.1 Experimental Design Model
2.2 Existence of The Minimum Bias Estimator
2·3 Relationships Between ·The Minimum Bias Condition
and The Box-Draper Condition
3·
2r FACTORIAL EXPERIMENTS,
3·1 Second Order Model
3·2 Fractional Factorial Designs
3.3 Approximating A Second Order Model With A First
•
•
1
6
6
7
13
17
17
20
Order Function
24
4.
SUMMARY AND CONCLUSIONS
48
5·
REFERENCES
50
6.
APPENDIX .
52
6.1
6.2
6.3
6.4
Regional Moments
Variance Term V
Theorem 2.2
Theorem 2.3 .
.
53
53
53
54
•
INTRODUCTION
1.
Many experimenters encounter the problem of studying a process
involving variables defined on a region "R" called the region of
experimentation.
Often a mathematical model which is a function of
the process variables and parameters is used to approximate the response of the process over such a region,
R
Discrepancies between
the approximate response and the true response of a process stem from
two sources
1.
the experimental or sampling error
(i.~.,
variance
error), and
2.
•
the inadequacy of the approximating
function to represent exactly the true response
model
(:!:..~.,
bias error).
Even when an experimenter .: knows
the form of the true response model,
he may still want to use a different form for the approximating .. ·
function for simplicity or because of constraints on his time and
cost.
To minimize the effect of variance error and bias error, the
experimenter needs to know how to conduct an experiment to estimate
the parameters for his simpler approximating function.
Prior to 1959, many experimental designs were proposed to
minimize the variance error of the approximating function without regarding the bias error.
In 1959, Box
~~d
Draper considered both the
variance error and the bias error by studying designs that minimize
•
the mean square error (M.S. E.) averaged over the region of
•
2
experimentation
R.
They investigated the situation when the true
response can be represented as a polynomial;
R , of degree
X €
d+K-l.
of the additional terms from order
~2
is made up of the
The vector
terms required for a polynomial of degree
and
~(!) ~ !i~l + !~2 '
d
d - 1
to order
and
d + k - 1
being the corresponding vectors of coefficients,
response polynomial is approximated by a polynomial;
degree
~l
d - 1
with
assuming that
+ e.
•
j
~
0
for
i
13-1
wi th
The true
~ !f~l
of
2.
~2 ~
• c • ,
y
is a vector
being the ordinary least squares estimator of
~l
If the observed response at the i
E(eie )
~
*j
J.
,
where
th
experimental point (i ~ 1,2,
E(e.)~O,
J.
then Box and Draper defined the average
M.S.K over R to be:
J
where
~ ~ fR E[y(!.) - ~(!.)J2~ ,
(f
n- l ~ fR dx.
(1. 1)
The average M.S.E. is the sum of a bias term
and a variance term (V)
C!:.. ~.,
(E),
J ~ B + V ), where
(1.2)
the average square bias over
R, and
(1. 3)
the average variance over
experimental design
•
R.
With respect to the choice of an
R* , the minimization of
tribution of the relative magnitudes of
B and
J
'depends on the conV.
Box and Draper
•
found that unless
V is at least four times as large as
design moments for the minimum
J
B, the
design are nearly equal to the de-
sign moments for the minimum B design, ignoring the variance term V
completely.
Because the contribution of
B dominates the contribution of
V
to the average M.S.E., Box and Draper proposed that:
~l
estimate
2.
by least squares, and
R* which minimizes
find an experimental region
the bias term
B.
is a matrix of the design points in
•
R* and if
ordinary least squares estimation is used, then a necessary and
sufficient condition for a design to minimize the bias term is:
(1.4)
where
1
(X~X . )
~ J
n
i
:s:
j
,
i
and
H.. =
~J
aIR x.x!dx-
1, 2,
and
j
-~-J
=
1, 2 .
The condition on the design momemts given in (1.4) is called the BoxDraper condition.
Karson, Manson and Hader (1969) approached the minimization of
in a different manner.
•
1.
estimate
and
They proposed that:
~l'sO
as to minimize the bias term
B,
J
•
4
2.
find an experimental design
R*
which minimizes
the variance term.
With this method, they were able to find designs with smaller values
of
J
than those designs which satisfy the Box-Draper condition.
In each of the following applications of the Karson, Manson, and
Hader method, the approximating function is a low order polynomial
with variables defined over a continuous space.
APPROXIMATING
FUNCTION
TRUE
MODEL
1.
•
2.
HIGHER ORDER
POLYNOMIALS
cx
ql = a + be
AUTHORS
LOWER ORDER
POLYNOMIALS
KARSON, MANSON,
AND HADER (1969)
LOW ORDER
POLYNOMIALS
PARISH (1969)
3·
MIXTURE PROBLEM:
POLYNOMIALS
OVER REGULAR
N-GONS
LOW ORDER
POLYNOMIALS
OVER REGULAR
N-GONS
PAKU, MANSON,
AND NELSON (1971)
4.
RATIONAL
FUNCTIONS
LOW ORDER
POLYNOMIALS
COTE, MANSON
AND HADER (1973)
This dissertation is concerned with applying the Karson, Manson,
and Hader method to the classical experimental design situation where
there are ma9Y factors each having a finite number of levels.
this situation the region of experimentation
For
R consists of a finite
number of lattice points.
The general results derived for the classical design situation
are applied to the problem experimenters have in using a simple
•
approximating function in
2r
factorial experiments.
An
experimenter
•
5
may desire to use a main effects model (!.~., a first order function)
consisting of the estimated mean and main effects to approximate a
process while realizing that two-factor interactions may not be
negligible.
The experimenter has the usual recourse of performing a
resolution IV design so that all the ordinary least squares estimators
of the main effects are not biased by two-factor interactions.
standard method does minimize the bias term
This
B for regular fractions.
However, irregular fractions may have resolution IV, but fail to
minimize the bias
minimize
B due to the two-factor interactions.
In order to
B, the experimenter can use an irregular fraction that
allows the existence of minimum bias estimators of the mean and main
effects in the approximating function.
•
•
Tables and criteria are
presented for irregular fractions so that an experimenter can choose
from among those designs which allow the existence of minimum bias
estimators.
•
6
2.
2.1
DERIVATION OF THE ESTIMATOR
Experimental Design Medel
An experiment is to be performed with a set of variables that
have a finite number of levels.
•
The experimental points are selected
from a region of experimentation
R, which is a set of points re-
presenting all combinations of the levels of the experimental
variables,
point in
i
=
The region
R is a set of m-lattice points with the i
being represented by a vector
R
1, . " .,
ill
1\10
(Z'1:z'2)
-..I.
.-J.
examples of such a region are:
€
th
R , for
the set of points
from an N-way classification experiment where each classification is
from a finite population; and the set of points from a factorial
•
experiment where each factor has a finite number of levels .
At the i
th
R
point in
of a process in the region
let
~(z_)
-].
represent the true response
R at that point.
The vector of true
responses is assumed to have the form:
(2.1)
where
[~12'
.
a
52
= [~(~l)'
••• J
kX 1
Z
-=m2
]1
,
... ,
~(z
""1Il
)J 1
,
.£1 is a ·Pl X 1
= [~ll'
••• J
Z
]
""1Ill-
1
,
Z2
vector of parameters, and
=
.£2
is
vector of parameters.
The matrices
Zl
and
Z2
necessarily full column rank.
in the true model (2.1) are not
To facilitate the derivation of the
minimum bias estimator, the matrix
•
Zl
column rank.
Zl
is reparameterized to full
Such a reparameterization can always be done so that no
loss of generality occurs.
Whether or not
Z~
c.
is of full column rank
•
7
is unimportant in the derivation of the minimum bias estimator a,,':L
therefore, no assumptions about the rank of
model with matrix
Xl
.,
Zl
Z2 are made.
The true
reparameterized to the full column rank matrix
is expressed as:
where
m
'"
~ [cp(x_ ), ""
l
x. ~ [x' : x' ],
--:il. -i2
'
cp(x-m)]',
-~
Xl ~ [~ll' ••• , ~l]' , a matrix of full column rank
X ~ [X l :X ] ,
2
~ ~ [~i:~:P'
~l
is a
p X 1
vector of parameters,
i ~ 1, 2, ... , m,
p,
X2 ~ Z2 '
~2 ~ £2 ' and
.
In practice, the true model is almost always unknown and is
•
therefore usually approximated by a function that has fewer terms than
the true model.
terms in
~l
Let the estimates of the p-parameters in ~l
be the
of the approximating function:
(2.3)
Each estimator in
b
-1
is choosen to be a linear combination of the
observed responses of an experimental design.
The experimental design
R* is a subregion of R made up of the n-experimental points
and the vectors associated with
A linear transformation of the
chosen to estimate
2.2
•
R* are labeled with an
n X 1
~l (~.~., ~l ~
(n
~
m),
asterisks~(*).
vector of observations
'i..*
is
T'Y..* ) •
Existence of The Minimum Bias Estimator
Two properties that are desired in an
~pproximating
function are:
small bias in estimating the true model; and small variance
(~.~.,
•
8
var[~:J ).
The criterion that is used to measure these two properties
jointly .is the summed mean square error, which is normalized with
respect to the number of points in the experimental design and the
variance of the observed response.
point in
The observed response at the i
R is represented by y(x.)
-~
= ~(x.)
-~
.
+ e<
th
where
= 1, 2, .. " m and the random variable e i has E(e.) = 0 ,
2
2
E(e i ) = G , and E(eie j ) = o for i '" j
The normalized, summed
i
~
~
mean square error (SMSE) of
y, is then defined as:
m
J
=
n 2 .I:
mG
E([Y(!i);' cp(!i)]2}
X.
-~
~=l
€
(2.4)
R •
The SMSE is the sum of a bias term B and a variance term V.
•
J
=
Bias Term + Variance Term = B + V , where
B
=
~
ma-
m
I:
i =1
(E[Y(x.)J - cp(x. )}2 ,
-:l.
.
-:l.
and
m
V"
(2.6)
n2 . E VarcY (!i )] •
mG
~=l
The Karson, Manson, and Hader approach is used to achieve
acceptably small levels of
J.
First an estimator
~l
is selected
so as to minimize the bias term B , and then an experimental design
with an acceptably small variance term
V is chosen to satisfy other
design criteria desired by the experimenter.
Let
~l
= E(~l)
, then substituting
the bias term B becomes:
•
(2.2) and (2.3) into (2.5)
•
-9
1
H.. = -
where
The
I: x .x! .
m h=l -h1.-nJ
J.J
H..
J.J
m
and
i = 1, 2
j
= 1, 2 .
matrices can be shown (see Appendix 6.1) to be:
1
HJ.' ' = - X!X.
m J. J
J
for
i = 1, 2,
~~d
j
(2.8)
= 1, 2 •
~l
The first derivative of (2.7) with respect to
is:
(2·9)
Since the second derivative with respect to
~1
definite matrix, minimum B is achieved by setting
is a positive
(2.9)
equal to zero
to get:
(2.10)
•
Because the matrix
Zl
column rank matrix
Xl' the matrix
ordinary inverse.
in (2.1) is reparameterized to the full
H
is of full rank and has an
ll
Therefore,
(2.11)
Let
£1 be a linear transformation
from a conducted experiment, then
T of the observed values
£1 =
*
T'~
where
T is a
*
~
nX p
matrix, and
(2.12)
If
£1
is to minimize the bias term B,
E(£l)
must satisfy both
(2.11) and (2.12), that is:
•
T'X*~ = G§.
(2.13 )
•
],.0
This means that the matrix
T. is the solutior. to the following matrix
equation:
*
T'X
(2.14)
= G .
To solve (2.14), the generalized inverse of a matrix is used.
generalized inverse of a
such that
unique.
If
S
mX n
matrix
The matrix
S
is a non-singular
ordinary inverse; and if
S
is a
n X m matrix
A
S-
always exists, but may not be
m X m matrix,
S
-
= S
-1
, the
S is .singular; the generalized in-
verse can be represented by
S-
=
(S'S)-S'
The solution to (2.14) is given as Theorem 6.3.3 in Graybill
,
•
This theorem says that a necessary and sufficient condition
for the matrix
T to exist and satisfy (2.14) is:
GX
( *)-X* =G,
or
G(X' *X* )-X' *X* = G .
X
allows the
existence of a minimum bias estimator if and only if
G!?
is
estimable.
T
When
~
is estimab·le, the general solution for the matrix
is:
T' = G(X' **-*
X ) X' + G(l - X* (X' **-*
X ) X' ) ,
•
*
Condition (2.15) implies that an experimental design
where
G is an arbitrary
p X n
matrix.
(2.16 )
•
11
Since
G is an arbitrary matrix, there are an infinite number of
E.l = Til*
estimators
that can minimize the bias term
B.
If
G should be chosen to minimize the
possible, the arbitrary matrix
variance term
V
and therefore
G, it is sufficient to write the expressions for
and
To show the variance term is a function of
~
Var(y) .
The vector of observed values of an experiment is
where
T,
* *)
E(~~'
= I~2
.
Therefore, the variance of
El
l* =
*
~
is:
= var(T'l* ) = T'Var(l* )T =(TIT)~ 2 .
•
x.
-1
E
R
is:
'V (b)
- , IT"
T·).xOl~'
2
Varyx.
[ ~- (-1 )] =x·1ar
-1
- lX·l-x
-1
-1 r \'"
-1o
Substituting
(2.17 )
(2.17), the variance of the approximating
Using the results of
function at a point
+ e* ,
(2.18 )
(2.17) and (2.18) into (2.6), the variance term V
can be expressed as:
(2.19)
The expression
(2.19) can be shown (see Appendix 6.2) to be
equivalent to a trace
(.:!:..~.,
Tr;' trace)
of a matrix:
(2.20)
With the variance term
V in
(2.20) now as a function of T, the
following theorem shows how to select the arbitrary matrix
•
order to minimize the variance term
V.
G in
•
12
Theorem 2.1
For the linear transformation
X* (XI *X* ) -X' * ) I where
variance term
T'
= G(X' *X* ) -X' *
G is an arbi trary
V is smallest when
G= 0
p
X
+ G(l -
n matrix, the
(.:!:..~.,
V(G = 0) s:
V(G,*,O».
Proof
SUbstituting the
T matrix in (2.16) into (2.20), the variance
term V as a function of the matrix
G
(.:!:..~.,
V(G»
is the sum of
two traces:
•
+ ;
Tr[X1G(I - X* (X' **-*
X ) X' )(1 - X·* (X' **-*
X ) X' )G'Xi J
'
The matrices in both traces are non-negative definite (Graybill,
(1969», and therefore, both traces are greater or equal to zero.
cause only one of the traces contain the matrix
Be-
G, V(G = 0) s:
V(G '*' 0) .
We shaLl call the estimator having the matrix
G= 0
(.:!:..~.,
£1 = G(X*X* ) -X*if.* ) the "minimum bias estimator",
To find the minimum value of the bias term
B
the minimum bias
estimator is substituted into (2.7) to give:
(2.21)
In the class of designs that allow the existence of a minimum bias
•
estimator
£1' the value of
B
MB
is the same for every design.
•
From this class of
desi~~s,
the experimenter is free to choose the
design for which the value of the variance term
select a design with an acceptably small
V-
is smallest or to
V using other design
cri teria.
If the ordinary least squares estimator
* -L_
£1 = (Xi*Xl)
-Xi*~*
is used instead of the minimum bias estimator,the value of the bias
term
B is:
(2.22)
where the matrices
and
the matrices of design moments.
B
18
•
B .
MB
are
(~.~.,
B S B
)
MB
LS
Designs that allow a minimum bias estimator and
also satisfy the BOX-Draper condition wi II have
2.3
2
does riot have a constant value
for every design and is at least as large as
for all designs.
= 1,
j
B
MB
= B
18
•
Relationships Between The Minimum Bias Condition and The Box-
Draper Condition
The following theorem is useful in studying the relationship
between the minimum bias condition and the Box-Draper condition.
theorem by Rhode
(1965)
A
on the generalized inverse of a partitioned
matrix is used in the proof of Theorem
2.2.
Theorem 2.2
The minimum bias condition
(!.~.,
G~
only if the following two conditions hold:
•
estimable) holds i f and
•
14
Proof
(See Appendix
6. 3).
The following corollary is a consequence of Theorem 2.2.
Corollary 2.1
If
•
* is of full column rank and the Box-Draper condition is
Xl
satisfied then the minimum bias condition is satisfied.
Proof
--If
*
Xl
is of full column rank then
(I - M~~ll) ; 0
M
ll
is non-singular and
which satisfies condition one in Theorem 2.3.
the Box··Draper condition holds, then
(H~iH12
-
M~~12)
; 0
If
which
satisfies the second condition in Theorem 2.3.
Corollary 2.1 shows that those designs that satisfy the BoxDraper condition are a subclass 'of all designs that satisfy the
minimum bias condition.
For this subclass of designs, the minimum
bias estimator becomes the ordinary least squares estimator.
Theorem 2.3
If the minimum bias estimator
•
*
~l; T'~
exists then:
•
15
and
D = X
2*FX*2
Proof'
(See Appendix 6.4).
Theorem 2.3 means the minimum bias estimator is the sum of the
ordinary least squares estimator and another component that is the
product of the matrices contained in the Box-Draper condition and a
kX 1
vector.
In determining when the minimum bias estimator becomes the
ordinary least squares estimator, it is sufficient to compare the
•
variance term
V of the two estimators,
need not actually be computed).
the variance term
(~.~.,
the two estimators
Using the form of
T in Theorem 2.3,
V for the minimum bias estimator can be expressed
as the sum of two traces:
(2.23 )
The first trace is the
variance term using the ordinary least squares estimator:
and the second trace is of a non-negative definite matrix.
trace terms are greater or equal to zero,
if and only if the matrix
•
R =0
When
V
MB
R
~
=0
V '
LS
Since both
Equality holds
, the minimum bias
estimator becomes the ordinary least squares estimator, therefore,
•
V
MB
= VLS
is a necessary and sufficient condition for the minimum
bias estimator to equal the ordinary least squares estimator.
Theorems 2.2 and 2.3 illustrate the additional flexibility
available in using the Karson, Manson, and Hader method over the BoxDraper method.
When an experimenter is using designs that satisfy
the Box-Draper condition, both methods give the ordinary least squares
estimator and therefore, the same variance term
V and bias term
B.
However, using the Karson, Manson, and Hader method, the experimenter
has the
adva~tage
of choosing from a larger class of designs while
still minimizing the bias term
B, and thus obtains greater design
flexibility to meet other criteria which an experimenter may wish to
•
•
impose.
•
17
3.
3.1
21'
FACTORIAL EXPER~S
Second Order Model
When an experimenter is working with r-factoI's each at two
levels, the region of experimentation
R, consists of the
m
= 21'
points representing all combinations of the high and low levels of the
r-factors.
Over the region
R, a true response model consisting of
the mean, the main effects and the two-factor interactions
(!.~.,
a
second order model) is approximated by a first order function consisting of an estimated mean and estimated main effects.
To illustrate the notation to be used in a general
experiment, consider an example with
•
si tuation, let
(i = 1,
I' = 2
denote the overall mean;
th
r) at the glevel, g = 1, 2
Il
factors.
the
Tig
21'
factorial
In the general
.th
~-
factor,
and
(Ti)gh the twoth
th
th
.th
factor interaction of the i
and J
- f aC t aI's at the g- and h- level.
~
•• J
Then for a
22
factorial design the true model;
S2 = Zl""l + Z2"-2
is
wri tten as:
"'1
"'2
1
1
0
1
0
1
1
0
0
1
T
ll
0
0
0
(T 12 )1l
0
1
0
0
(T 12 )12
+
=
•
1
1
0
1
1
0
T
12
0
0
1
0
(T 12 )21
1
0
1
0
1
T
21
0
0
0
1
(T 12 )22
T ,
2.c
18
In the matrix
meter in
-"-1 .
the
~,
me~~
parameter
Zl ' each column vector corresponds to a para-
The first column vector
~o
is associated with the
the second column vector
T
, etc.
ll
Similarly, each column vector in the
matrix corresponds to a parameter in
associated with the
column vector
(T .. ) h
~J
(z .. )"
-~J
= 1 , is associated with
-"-2
and the column vector
parameter is denoted by
g..
Z2
(Z:J' )gh'
-
The
is derived by multiplying the corresponding
g,.
elements of the main effect column vectors
and -J
z'h.
This
elementwise multiplication of vectors is called a Hadamard product
(Margolin, (1969b)) and will be denoted by z. e z'h = (z .. ) h •
-~g
-J
-~J g
2
vector is the Hadamard
the 2
factorial example, the (z )
For
-12 12
•
product of
~ll
and
~22
Z
-11
~22
1
o
1
1
=
or
o
1
=
o
o
o
o
1
o
Using the Hadamard products, the true response model for a
'"
2-
factorial example may be written as a linear combination of the vector
and the vectors that are associated with the main effects:
•
!£ = ~~ +
2
E
2
E
i=l g=l
T.
Z.
~g-J.g
+
2
2
E
E
g=l h=l
•
19
In tha true model, the
matrix
Xl
Zl matrix is reparameterized to the
of i'ull column rank.
orthogonal reparameterization of
r
2
F or a
x.
-1
factorial experiment, the
i
; -1.
z'l - -1
z'2 '
:;;: 1, ... , r ; is
used, so that for each factor, the high level is denoted by a plus one
(+1)
all
and the low level is denoted by a minus one (-1)
2
r
The set of
reparameterized column vectors for a complete factorial
experiment is denoted by the set
'X
-1
~
x,....,
"X }
x '"
X
-3'
"',• ... ,• "',• _IV
-r
linearly independent and orthogonal.
-"
These column vectors are
The mean column vector is a
vector of ones (~.~., ~O =~) , and the interaction column vectors are
the Hadamard products of main effect column vectors.
•
vectors in the matrix
Xl
is a subset of the
The column
r
FD(2 ) column vectors
consisting of the mean column vector and the main effect column
Xl; (~, ~l' ••• , ~r)
vectors;
Let t h_,e parame t er
Fi ; Til - Ti2
Then the reparameterized true model;
.th
represen t th e 1
. - f ac t or.
~; Xl~l + X~2 for the 22
factorial example is:
Cl'l
1 +1 +1
Cl'2
1 +1 -1
F
l
1 -1 +1
•
(T12 )ll
010 0
(T12 )12
o0
1 0
(T 12 )21
0 0 1
(T12 )22
+
;
Cl'4
1000
1 -1 -1
F
2
o
•
20
For a general
21'
factorial experiment, the second order model
!O'
may be written as a linear combination of the vectors
(Z.
<;Jz'h),for
-lg
-J
and
i=l, •.• , r ;
= 1,
g
2
h.= 1, 2
l'
;Q
j = i + l , ... , r
! i ' and
+
= 1-l!0 + E F.x.
2-~
i;::l
r-l
l'
E
E
2
E
2
E
i=l j=i+l g=l h=l
(Ti)gh(~ig
f:) Z.
-Jh
)
.
The parameters in the approximating function for the second order
model in (3.1) are estimated from a fraction of the points in a
complete
21'
factorial design.
The construction of these fractional
factorial designs is described in the next section.
3.2
•
Fractional Factorial Designs
The second order model in (3.1) is approximated by a first order
function,
l =
h
*
Xl~l
consisting of an estimated mean and estimated
main effects:
l'
*
+ E F.x.
1-1
i=l
h
An
n X 1
column vector in the experimental design matrix
* ~ (~,
* ~l'
*
Xl
1",
*
x~)
is formed from a z~bset of elements in the
corresponding column vector in the matrix
For each experimental design of
.J<-
... ,
corresponding to the
•
points is
n
s;
.J<-
X
*
1"',)
fractional factorial design
*
~l <:J ~2'
:
-1"'"
r
(2 )
(!o' !l' ..• ,
n-points, there is a set of
column vectors that is called 'a
* -.L
x, ,
FFD(n) = (~O'
Xl =
set.
:
.. , ... ,
... ; ~l*
*
<;J ..• ex}
-r
Since the number of experimental
m , the col'llnn vectors in
FFD(n)
are not linearly
•
21
independent unless
= m.
n
Fractional factorial designs are con-
structed by specifying linear relationships that the column vectors
of
FFD(n)
must satisfy.
x*
Two column vectors
aliased if
x* =
and
a.:l.* , for
~
2r
taken to be + 1 in
l*
a > 0
in
F.FD(n)
are said to be
The alias scalar,
factorial experiments.
~
a , is
Fractional
factorial designs in this dissertation are constructed by specifying
the column vectors in
*
(~.~.,
vector
~ =
FFD(n)
!).
that are aliased with the mean column
These column vectors form an identity
relation set (IDR) and this set is denoted by setting
"I"
equal to
the letters representing the factors or interaction of factors that
•
*
are aliased wi th
= -
ABDE
*
Xl.
() X
*
-5
For example the IIlR,
I = + ABC = - CDE
means the following column vectors in
*
!Sol
aliased:
--<+
~.
=
e !So2*
-x* •
::.0
<:>
* = ~,
**
!So3
!So3
e ~* e !So5*
*
= -~ ,
FFD(n)
and
are
*
!Sol
e !So2* e
Each column vector in an IDR is called a word and
the Hadamard product of any two words forms a word that is also in the
IDR
A set of words in an
IDR
that are not the Hadamard products
of each other, generate the remaining words by taking the Hadamard
product of all possible combinations of these generators.
of s-generators,
of 2
s
2
s
- s - 1
- 1 words in the
be specified to define an
From a set
words can be generated to give a total
IDR
IDR
Therefore, only the generators need
Two
IIlR I s
wi th the same
generators, but with different alias scalars are said to be in the
same
•
IDR
family.
Once an
IIlR
column vectors in
is defined, one can compute all of the aliased
FFD(n)
by using the properties of the Hadamard
•
22
product.
For example, if
* is aliased wi th
then !l
sides of the
IDR
*
*
*
*
*
~l
FFD(n)
;;here each class is the set of the
IDR,
is a word in an
*
* * *
0 ~20~=~20!3
*
x 13-x =-x
-1
~
-1
and
generators partitions the
*
because by mUltiplying both
- ~2 0 ~3
* ' ;;e get !l-* 1:1
by ~l
* 0 ~l* =}, )
(since !l
aliased.
*
*
*
~l 0 ~2 0 ~3 = -~
set into
2
s
An
2
r-s
IDR wi th sequivalence classes
column vectors that are
If the word length of a column vector in
FFD(n)
is
defined as the number of main effect column vectors in the Hadamard
product that are used to define that column vector, then the column
vector in each class with the smallest word length is used to
represent the equivalence class.
•
equivalence classes by an
IDR
The partitioning of
FFD(n)
into
is analogous to partitioning the set
of integers into equivalence classes using modulus arithmetic.
With
the Hadamard product as the binary operator, the equivalence classes
form an algebraic group with the class containing
*
~
being the
identity element of the groupFor example, the
IDR,
I
= AB
, partitions the set
FFD(n
=
~-l ) = (~,
* ~l'
* !2'
* ~,
* ~l* 0
into four equivalence classes where each class is represented by the
column vector with the smallest word length:
•
•
23
"*
AC .... [~l
* 0 ~3'
* ~2*
~J = {~l
+
Ij
o -3
x* }
The mUltiplication table of the group that is formed by these
equivalence classes and the corresponding table of letter
representations are as follows:
-l(-
*
*
~O
*
~O
*
~l
*
~3
-x~3
*
~l 0 ~3
*
*
~
*
*
~l e~
*
~l 0 ~3
*
~3
~
*
* 0 ~3*
~l
0
I
A C AC
*
* 0 ~3*
~l
r
I
A C AC
* 0 ~3*
~l
~3
*
A
A I
*
C
C AC I
*
AC I AC C A I
~3
~l
*
~l
*
•
*
~l
~()
<;J
*
~
~l
*
~
~l
AC C
A
For one to construct an experimental design using a specified
IDR
,
he needs only to determine the main effect column vectors.
Once the main effect column vectors are determined, the interaction
column vectors are the Hadamard products of the main effect column
vectors while the mean column vector is
* =
~O
.:!:
An
IDR
wi
th s-
generators places s-restrictions on the r-main effect column vectors
and the remaining
r - s
main effect column vectors are unrestricted.
Therefore, the main effect column vectors of a complete
factorial design carl be const;ructed and the remaining
column vectors are defined using the
fractional factorial design with
•
n
r s
FFD(2 - ) ) ca~ be constructed from an
r - s
s-main effect
IDR. In this manner, a
r s
2 points C!:!.., FFD(n) =
IDR with s-generators.
•
24
If the number of points in a
design is
n = 2
r-s
regular fraction.
K2
(~.. ~.,K=l)
fraction of a
2
r
factorial
then the design is called a
Regular fractions can be combined to form
r s
n = K(2 - )
fractional factorial designs with
•.. )
-s
which are called irregular fractions.
points, (K = 3,
The column vectors for
the main effects of an irregular fraction with
r s
K(2 - )
points are
constructed by combining the main effect column vectors of
regular fractions from the same
illR
5,
K
family.
Regular and irregular fractions are classified by the order of
the effects that can be estimated from the design.
If main effects
are called first order effects, two-factor interactions are called
•
second .order effects, etc., and those effects that are assumed to be
zero or near zero are called negligible, then a fractional factorial
design is of even or odd resolution (John, (1971)) if:
Order of
Estimable Effects
Resolution
Order of
Negligible Effects
2t + 1
t or less
t + 1 or greater
2t
t - 1 or less
t + 1 or greater
For regular fractions, the resolution of the fraction is equal to the
length of the smallest word in the
3.3
Approximating A Second Order Model With A First Order Function
If the estimator
•
IDR.
~l
in the first order approximating function
(3.2) is to be a minimum bias estimator, a design
so that
G~ = (I:H~~12)~
is estimable.
* must be chosen
Xl
Since the matrix
Xl
is of
•
25
full rank and is orthogonal, the matrix
the identity matrix of size
Xl
= ~ x1xl = ~
P = r + 1 , and the matrix
The column vectors of the matrix
column vectors of
Hll
Zl
mI p = I p
H
l2
is
can be expressed in terms of the
by the relation
z.
-~g
= (_l)g+l[x. + (-l)g+llJ/2..
-~
-
Therefore, the interaction column vectors are:
Using expression (3.3) and the properties of Hadamard products, the
•
elements of the
H
12
matrix are:
~
x/(z. e -Jh
z. ) = ~
for every i,
m -0 -~g
'l-
j, g,
and
if
p=i
if
p=j
h,
(3.4)
otherwise
Therefore, the estimable parameters,
the second order model (3.1) are:
•
~ = (I:H~~l2)~ = (I:~2)~ in
•
r
2
2
1l+4:.2:.2:
1 r-1
2:
r
l=l J=l+l g=l h=l
1
Fl + 4:
(TJ.J
.. )g._h
2
2
E E E (-1) g+l (T .).
l J gn
j=2 g=l h=l
r
1 p-l
2
2
+4: E
F
E
E
i=l g=l h=l
P
r
2
(3,5)
2
+ fEE
E
j=p+1 g=l h=l
r-l 2
2
Fr + f E E
E
i=l g=l h=l
•
If an experimental design allows a minimum bias estimator, there
must exist vectors
a
and
b
-p
,
P = 1, ,."
r ; so that the
~* , from the
expected values of the vector of observed responses
experimental design are:
*
E(~/,y ) =
Il
+
r
1 r-1
4:
2
2
E
E
E
E (T.. ) h
J.J g•.
i=l j=i+1 g=l h=l
and
E(b/y* ) = F
-IT'
1
+4:
p-1
2
E ( _1)h+1(T )
j;) gil
i=l g=l h=l
2
2
E
2: (_l)g+l(T.) h
PJ g.j=p+l g=l h=l
r.
2:
From (3.6), we also have
•
2
p + fEE
*
*
E(~/~) = ~/!I!
, Cl'
,
(3.6)
•
27
* = ~'f-u
*
u
E(~'~ )
+
*
rE
-a'x.F.
~ ~
i=l
2
2
r-l
r
* lJ )g.h
+ E
E
E
E a' (z.* ~ z·h)(T..
- -lg
-.J
i=l j=i+1 g=l h=l
(3.6) and (3.8) are equated, we get the following identities:
If
a/x
*
-=0
a'x.* = 0,
-
-l
i
* z'h)
* = -41
-a'(z.
-lg 0 -J
1 ,
for every
(3·9)
and
= 1, ... , r
i, j, g
and
h.
(3.3), the last identity in (3.9) implies:
Using
•
(3.8)
* =0
a' (x* 0 x.)
-
--:i. . -J
for every
i
and
(3.10)
j
3.1
Lemma
If a minimum bias estimator exists, then the vector
*
linearly independent of the vectors
* e ~j)'
*
(~i
i = 1, .•. , r - 1
q = 1,
... ,
~'
j = i + 1, .•. , r
and
r
*
~O
must be
; and
Proof
-Suppose
*
~
is a linear combination of the vectors
* , the there exist constants
(x.* 0 x.)
-l
C ,
ij
C
q
-J
i = 1, .•. , r - 1 ,
j
=
i
+ 1,
that:
•
r
r
r-l
E
= E C x* + E
Cij (\~i*
q-q
i=l j=i+1
q=l
q - 1,
.....
x*
-q
and
;
and
•.• : .:r
r ; !wt all zero such
•
28
Since a
mi~imum
bias estimator exists, there exists a vector
satisfies (3.6).
a
which
Therefore,
a'x*
C
-0=0
a'x* +
q-
(3.11)
q
The value of the left hand side of (3.11) is one because the vector
a
satisfies (3.9) and the value of the right hand side of (3.11) is ~
because the vector
a
T.~erefore,
satisfies both (3·9) and (3.10).
1 = 0
the contradiction that
implies the statement of the lemma is
true.
A similar result to Lemma 3.1 can be derived for the main effect
column vectors
•
... ,
*
x.
-~
,
... ,
i = 1,
r
If
E(b'y* )
-rr-
='E.(}£* ,
p = 1,
r ; is equated to (3·7), we get the identities:
b'x*
-po=o
= 0 ,
b'x*
1 ,
-p-p
b'x*
-p-q
*
b' (z.
-p
-~g
= 0,
*
e ~jh)
for
=
q
= 1,
~~d
•.. , r
(_i)g+1/4
for
j
( _i)h+l/ 4
for
i = 1,
,--0
p
q
*P ,
+ 1,
••• J
(3.12)
... , r ,
P
-
otherwise.
Again using (3.3), the last identity in (3.11) implies:
•
for
* = 0
b' (x.* (;) x.)
--p
--0.
-J
for every
i
and
j
1
,
•
29
Lemma 3.2
x*
-p
If a minimum bias estimator exists, then the vectors
*
p = 1, •.. , r , are linearly independent of the vectors
q = 1, ..• , r
and
q
*p
and
;
(x,*
-.l.
~
x*
!o '
-q
* , i = 1, ... , r - 1 ,
x.)
and
-J
j = i + 1 , ... , r
Proof
--x*
Assume
(~*
and
r
with
-p
*
x.)
<:>
-J
is a linear combination of the vectors
*
!O'
then there exist constants
q = 1,
,
*p
q
and,
C ,
ij
Co '
i = 1, •.• , r - 1
C
q
= i
j
x*
,
-q
... ,
+ 1, ..• , r
not all zero such that
•
*
C~o
*
ex
q-q
+
r-1
r
!:
+ .. !:
C..
i=l j=i+1 :q
(x.*
-~
<:>
*
x.)
-J
Since a minimum bias estimator exists, there exists a vector
every
p
that satisfies (3.7).
,for
Therefore,
r
r-1
r
*
C b/x* + !:
E
C..b ' (x.
q-p-q
q=l
i=l j=i+l ~J-P-~
b/x* = C~Q+
*
-~
b
-p
!:
C;)
*.
x.)
-.J
(3.14 )
q*p
For each
p, the value of the left hand side of (3.14) is one be-
cause the vector
b
-p
satisfies (3.12) and the value of the right hand
side of (3.14) is zero because the vector
and
(3.13).
b
-p
Therefore, the contradiction that
satisfies both (3.12)
1 = 0
implies that
the statement of the lemma is true.
The condition that there be no linear dependencies involving main
•
effect column vectors in the subset of column vectors for the mean,
main effects and two' factor interactions from the
TID(n)
set is a
•
30
necessary
~~d
sufficient condition for a fractional factorial design
to be of resolution IV (Margolin, (19Cl9a».
Margolin also shows that
the minimum number of points that is needed for a resolution IV design
with r-two level factors is
"fold-over" design.
If
r
2r
and the minimal design must be a
is a power of two, the minimal design is
an orthogonal, resolution IV design and if
the design is a
non-ort~ogonal,
r
~s
not a power of two,
resolution IV design.
Therefore, for
designs that allow the existence of minimum bias estimators, the
following theorem can be stated.
Theorem 3.1
If a fractional factorial design allows the existence of a minimum
•
bias estimator, then the design must be a resolution IV design.
Proof
--Since a minimum bias estimator exists, Lemma 3.2 states that
there can be no linear dependencies involving main effect column
vectors in the set of colillllIl vectors for the mean, main effects and
two--factor interactions.
Hence, the fractional factorial design is a
resolution IV design.
An example of a fractional factorial design that aJ_lo-"5 the
existence of a minimum bias estimator and has the
points is a
2
min~mum
4-1 design that is defined by the IDE ,
Since the word length is four and the number of factors
power of two, this regular fraction is an orthogonal,
•
design.
number of
I = + ABeD
r
4 is a
~'esolution
IV
•
31
Tne converse of Theorem 3.1 is not true.
following ten point irregular fraction
For
4
5(2 5 - )
~{ample,
the
is a resolution IV
design but does not yield a minimum bias estimate1'.
Example:
•
A
B
C
D
E
+1
+1
+1
+1
-1
+1
-1
-1
-1
-1
-1
+l
-1
-1
-1
-1
-1
+1
-1
-1
-1
-1
-1
+1
-1
-1
-1
~l
-1
+1
.:.1
+1
+1
+1
+1
+1
-1
+1
+1
+1
+1
+1
-1
+1
+1
+1
+1
+1
-1
+1
However, Theorem 3.2 shows that the converse is true for designs
that are regular fractions.
Theorem 3.2
If a regular fraction is a resolution IV design, then the design
allows the existence of a minimum bias estimator.
Proof
To prove this theorem the assumptions are shown to imply that the
Box-Draper condi tion (~.. !O.,
in the
IDR
H~~12
=
M~~12) holds.
Since each word
has a word length of four or more letters, the mean and
main effect column vectors can only be aliased with column vectors
representing three factor or higher interactions.
•
vectors in
~D(n)
for the mean, the main
effe~ts,
Hence, the column
and the two
•
32
factor interactions are all orthogonal by construction.
the mat:!'ix
M
is
matrix M12
is
M12
e -r2
z* )
U
* = 1 nIp = I = H
MU = 1 Xi*Xl
n
p
n
=
n
1**
n
(~, ~l'
*
M
12
II1 for every i,
1
-v
\ (_1)g+1/4
X'
* (z.*
n -p
•
-~g
e -Jh
z:* )
=
C
lhH 4
By comparing each element of
M
12
(3.15), we see that
H~~12
*
-li-
~'21' ... , ~-l, 2
e
are:
* *
*
=
~ (z.
t:I z -h)
n
Sg
-J
1
' a.. :!d the
Using the properties of the ¥.adamard products and (3.3), the
elements of
-
*
... , ~)' (~u
Therefore,
=
M~~12
j, g,
if
p =
i
:
if
p = j
,
and
h,
/
otherwise.
(3.4) with each element of
H
12
H
= M12 .
12
Therefore, the condition
is satisfied and so by Corollary 2.1 a minimum bias
estimator exists.
Because regular fractions of resolution IV satisfy the BoxDraper condition, the minimum bias estimator becomes the ordinary
least squares estimator.
Therefore, the minimum bias term
B is
minimized by using regular fractions that are resolution IV designs.
Theorem 3.2 provides an easy method for showing when a regular
fraction allows the existence of a minimum bias estimator.
length of the shortest word in an
IDR
is equal to the resolution of
a regular fraction, the maximum average word length in an
•
be at least fOUl'.
Since the
IDR
must
For example, the maximum average word lengths for
1/2, 1/4, and 1/8 fractions of
2
r
,
I'
= 3, 4, 5, 6, 7,
are listed in
•
33
Table 3.1.
From Table 3.1, we see that for seven or more factors
there are always 1/2, 1/4 arId 1/8 fractions that allow th8 existence
of a minimum bias estimator because the maximum average word length is
four or greater.
Table 3.1
Maximum average word length for
2r - s
. s-l
1'2
s
r2 - 1/2 s -1
Max Ave.
Word
Length
Factor
No. of
Design
Points
3
1/8
1
7
12
1. 71
1/4
2
6
2.00
1/2
4
3
1
3
3·00
1/8
2
16
2.29
1/4
1/2
4
8
7
3
1
8
4
2.67
4.00
1/8
1/4
4
. 20
2.86
8
10
3·33
1/2
16
7
3
1
5
5·00
1/8
8
7
24
3.43
1/4
16
.12
4.00
1/2
32
3
1
6
6.00
1/8
16
7
28
4.00
1/4
32
64
3
1
14
4.67
'7
7·00
4
5
6
7
1/2
If
•
fractions
-s
2
Fraction
I'
•
2s _ 1
NO· of
re~llar
~.IDR
Words
Max Sum
of Word
Lengths
K-regular fractions are combined, an irre~lar fraction with
r s
4 2
n = K(2 - ) points can be constructed. For examp~e, a 3(2 - )
irre~lar fraction can be constructed by combining three re~lar 24 - 2
•
34
designs that are defined by the same
2:,BCD = + ABC
If the
•
family;
I = +AD = + BCD = + ABC
Fraction 2:
I = -AD =
3:
I =
I = + AD =
for each regular fraction is:
Fraction 1:
Fraction
then the
IDR
IDR
-
,
+ ABC
BCD
AD = + BCD =
-
ABC
3 (24- 2 ) ir:regular fraction is represented by the following:
IDR
Fraction
1 2' 3
AD
+
BCD
+
ABC
+
+
+
For this example, four different sets of three IDE's could have been
chosen to construct the irregular fracti 0:1.
<:s
(K)
possible regular fractions from the same
In gene"a;!., ,.hich of the
IDR
family are used to
construct an irregular fraction, does not affect the existence or nonexistence of a minimum bias estimato:r, the value of the variance term
v,
or the value of the bias term
B.
However, in order to have a consistent method of constructiqg
i:rregular fractions, Addelman' s 'method (1961) i:; used to choose the
different regular fractions from an
fractions that are studied in this
IDR
falPil,.v.
dissertation
so Addelman' s method assigns an odd number of the
equal to
•
-1
for
IDR
For the irregular
K is
-1
for
IDR
odd number,
K alias scalars
wo:rds of odd length and an even
K alias scalars equal to
~~
nl~ber
words of even length.
of the
•
35
th
For the t-- regular fraction
construct an irregular f"action,
t ; 1, ... , K, that is used to
be the coll~ vectors for the mean, the i
ac t ~· on
0f
th• e ~-. th and J-.th
respectively.
.p
~
* and
~it'
*
~t;
let
th
* <:) ~jh\
*
(~ig
fact~ and the inter-
th and "th
ac tors a t th e g-c r - 1 eve 18 ,
*
Xl
Then the column vectors in the
X* matrices
2
and
of an irregular fraction are
* ; (~l'
* •.. , ~K)'
* ,
~
x.*
(3.15 )
-~
*•
*
* <:)z'h)l',
*
*
(_Z'g<:)z'h);[(z,
... , (z.
-J
-~g
-J-~g
•
Using the vectors in (3.15), the
Mll
and
* "e -J
z'h)f,]'
•
M12 macrices for an
irregular fraction can be expressed as the sum of the
Mllt
M
12t
and
matrices
1
; K
K
E Mut
1;;l
'
(3.16 )
M
12
1
;-
n
1
; -
K
E
K t;l
Of importance is the class of irregular fractions which is
constructed by combining resolution IV,
regul~
fractions.
Theorem 3.3
If an irregular fraction is constructed by combining regular
fractions of resolution IV, then the design allows the existence of a
•
minimum bias estimator .
•
36
Proof
As in the proof of Theo::-em 3.2, the
and the matrix
M
12t
n
--"0 -~g
41
=
J
for every
n
* -.g
* e -J
*
z'h)t
-p
( _1)h+l/ 4
=
o
•
Consequently, the matrices M
and
ll
1 K
are Mll = K E I = I
H ' ~'ld
ll
t=l P
P
= H12 , so
H~~12 = M~~12
~~d
i, j, g
,...(-l)g+1/4 i f
K
(_)X/t(Z~
= I
p
has the elements
* * e z'h)+
*
-J
K
(-)~(z.
matrix is
if
p
.- i
,
p
= j
,
h,
otherwise •
M
12
for the irregular fraction
1 K
1 K
M =- E M
= - E H
12
K t=l 12t
K t=l 12t
Note that
H12t
is the
defined in (3.4) for the tth regular fraction.
H12
matrix as
Since the Box-Draper
condition is satisfied, Corollary 2.1 guarantees the existence of a
minimum bias estimator.
Furthermore, the minimum bias estimator be-
comes the ordinary least squares estimator.
For seven or more factors there always exist resolution IV,
2r -2
and
2r - 3
regular fractions.
Therefor'e Tteroem 3.3 guarantees
that we can always combine these regular fractions to form
and
r
K(2 - 3 )
K(2r -2)
irregular fractions which allow mir..illlilm bias estimators.
Regular fractions which have resolution less than IV may also be
combined to form irregular fractions that allow minimum bias estimators.
To demonstrate this fact, the estimability of
•
.
uSlng
a camput er Brogram f or
3(2r - 2 )
and
~
vras examined by
'... ')
3 (2·-')
irr'egular
•
37
f!'actions constructed from l'egula!' fractions of resolution IV or less
r = 3, 4, 5
for
arld
6.
The generators of the defining
IDR family were selected
systeJnatically according to their word length and by how many letters
each generator has in commen with the other generators.
3 (21'-2)
For the
irregular fractions, there are no designs that allow a
minimum bias estimator if
= 3 , and no 3(21'-3)
l'
exist that allow a minimum bias estimator for
irregular fractions
5.
l' ~
Those
irregular fractions which allow the exist;ence of a minimum bias
6
3 (2 - 3 )
estimator are listed in Table 3.2 for
ble
Ta,
•
2
.J.
3
f or
0(21'-2)
desi~~s for
~
r
designs arId in
= 4, 5
Each design
in Tables 3.2 and 3.3 is classified by an N-tuple
the lengths of the words in the defining
IDE
(N - 2
fami~y.
s
- 1)
of
For each
design that allows the existence of a minimum bias estimator, the
values of the variance term
V for the minimum bias estimator and the
least squares estimator are listed in the tables,
If the values of the
two variance terms are equal, then the minimum bias estimator and the
least squares estimator are equivalent.
The only design in the two
tables where the values of the variance terms are equal is for the
3(26 -2)
defining
desi~'1 (4,4,4)
IDE
For this design, each word length in the
family is four.. so by Theorem 3.3, this design also
satisfies the Box-Draper condi tien,
The bias term
Q
•
B
is the quadratic form
n
B
="2
ma-
~2~2
'
where
is a non-negative definite matrix that is called the "bias matrix '1 •
For th-= minimum bias estimator
least s~lares estimator
Q/m
WID.
is defined in (2.21) and for the
is defined in (2~22).
F~ the designs
•
38
6
3(2 - 3 )
Table 3.2
No. Of
Design
Wactor Points
6
•
•
24
24
24
24
24
24
24
24
24
24
24
24
24
24
24
24
24
24
24
24
24
24
ir~e~ular fractions
Min.
Bias
Design
(1,1,1,2,2,2,3)
(1,1,2,2,3,3,4)
(1,1,2,3,4,4,5)
(1,1,2,4,5,5,6 )
(1,2,2,2,3,3,3)
(1,2,2,3,3,4,5)
(1,2,3,3,3,4,4)
(1,2,3,3,4,5,6)
(1,2,3,4,4,5,5)
(1,3,3,4,4,4,5)
(2, 2, 2, 2, 2, 2, 4)
(2, 2, 2, 2, 4, 4, 4)
(2,2,2,3,3,4,4)
(2,2,2,3,5,5,5)
(2,2,2,4,4,4,6)
(2,2,2,4,4,4,6)
(2,2,3,3,3,3,4)
(2,2,3,3,4,5,5)
(2,2,4,4,4,4,4)
(2,3,3,3,3,4,6)
(2,3,3,3,4,4,5)
(3,3,3,3,4,4,4)
•
No
No
No
No
No
No
No
No
No.
No
No
No
...No
No
No
No
No
Yes
Yes
Yes
Yes
No
Generators
Of The rn."
Min Bias
Family
Te!'m
Va:::iance
Least Sq.
Variance
Term
A, B, c.
A,B,CD
A,B,CDE
A,B,CDEF
A,BC,CD
A,BC,DE
A,BC,CDE
A, BC,DEF
A,BC,CDEF
A, ECD,DEF
AB,EC,CD
AB,BC,DE
AB,BC,CDE
AB,BC,DEF
AB,BC,CDEF
AB,CD,EF
AB,CD,ACE
AB,CD,AEF
9·750
AB, CD,ACEF
8.000
AB,BCD,AEl" 10.125
AB,CDE,ACF
9·375
ABC, CDE, AD.'?
7·500
7·500
7·250
7·250
•
39
Table 3.3
!Factor
4
5
•
•
6
3(2r -2)
i~regular fractions for
r = 4, 5,
No. Of
Design
Points
Design
Min.
Bias
12
12
12
12
12
12
(1,1,2)
(1,2,3)
(1,3,4)
(2,2,2)
(2,2,4)
(2,3,3 )
No
No
Yes
No
Yes
Yes
A, B
A, BC
A, BCD
AB, BC
AB, CD
AB, BCD
24
24
24
24
24
24
24
24
24
24
(1,1,2)
(1,2,3 )
(1,3,4)
2,2,2)
(2,2,4)
(2,3,3)
(2,3,5)
(2,4,4)
(3,3,4)
No
No
Yes
Yes
No
Yes
Yes
Yes
Yes
Yes
A, B
A, BC
A, BCD
A, BCDE
AB, BC
AB, CD
AB, BCD
AB, CDE
AB, BCDE
ABC, CDE
48
48
48
48
48
48
48
48
48
48
48
48
48
48
48
48
(1,1,2)
(1,2,3)
(1,3,4)
(1,4,5)
(1,5,6)
(2,2,2)
(2,2,4)
(2,3,3)
(2,3,5)
(2,4,4)
(2,4,6)
(2,5,5)
(3,3,4)
(3,3,6)
(3,4,5)
(4,4,4)
No
No
Yes
Yes
Yes
No
Yes
Yes
Yes
Yes
Yes
Yes
Yes
Yes
Yes
Yes
A, B
A, BC
A, BCD
A, BCDE
A, BCDEF
AB, BC
AB, CD
AB, BCD
AB, CDE
AB, BCDE
AB, CDEF
AB, BCDEF
ABC, CDE
ABC, DEF
ABC, CDEF
ABCD, CDEF
~1,4,5)
Generators
Of The IDR
Family
Min Bias
Variance
Term
~~d 6
Least Sq.
VariancE!
Term
6,750
5·250
6,000
7·125
5· 500
5·250
7,875
6·750
6.250
6.250
7·000
8,125
6·750
6.375
7·000
6·500
6.250
6.250
6.250
6.000
9·000
7·875
7·875
7·250
7·250
7·250
8.000
9·125
7·750
7·750
7·375
7·375
8.000
7·750
7·375
7·000
7·500
7·250
7·250
7·250
7·250
7,250
7·000
7·000
7·000
7·000
•
40
in Tables 3.2 and 3.3 which yield a minimum bias estimator, the bias
matrices for the minimum bias estimator are given in Table 6.1 (in
the Appendix.) and for the least squares estimator in Ta'Jle
6.2
(in
the Appendix).
To show the advantage of using a minimum bias estimator rather
than the least squares estimator, the SMSE's for the minimum bias
estimator and the least squares estimator are compared for two
examples.
= SMSE
Let
JU; = SMSE
for the least squares estimator,
for the minimum bias estimator.
•
ab
+1
-1
-1
+1
-1
+1
+1
-1
-1
+1
+l
-1
+1
-1
-1
+1
= the
J
MB
vector of para-
A =
for the following two examples.
Example 3.1
For the irregular
•
The value of
JLS-J~ffi
3(25 - 2 ) fraction
(2,4,4), the SMSE's are:
is the increase in the 3MSS usir.g tne
Q~dinary
least squares estimator rather than the minimum bias e3timator.
By
•
41
gnlphing
JIB -
ab'Aal:!./rl
<\rn
versus the non-Ilegati ve quad:atic form
(ftgure 3.1) , one can see that the least squares
estimato!' should' be used only if
the sampling variance
(J2
ab'Aab
E'ven if the minimum bias estimator is
J~~
used in this situation, the maximum increase in
only
1/12
has a va!t,e less than
JIB
ove!'
is
2"/0 of the variance term V
0.125 .• which is approximately
Howevel', if the least squares estimator is always used,:.the increase
in
JIB
over
J MB
can be qui te large and in fact, is not bounded.
Example 3.2
For the irregular
6
3(2 -2)
fraction
(2,4;6)
(and also
(2,5,5) ), the SMSE's are:
•
For this example, the graph of
JIB - J
MB
(Figure 3.2) is negative if the value of
(J2/8 , and the maximum increase in
ever J the slope of
JL'3
J
MB
-
J
MB
JIB
-
J
MB
J
MB
versus
ab'Aab
over
J.r:.s
2
-ab' Aab/(J
_.
is less than
is 0.375·
How-
in Example 3·2 is t-wiee the slope of
in Example 3·1 ;,hi ch meB-'1S that the increase in
JL'3
over
is twice as rapid as in Example 3·1
All of the designs that allow a minimum bias estimator in
•
C!1apter 3 are for a full second order model (3. J_) being approximated
by a first order function (3.1).
If additional knowledge is known
•
42
10
5
-1
•
1
Figure 3.1
J
18
4
2
- J
MB
5
design (2,4,4)
for the
15
10
ab' Aab!rr
-
•
Figure 3·2
1
2
4
3
6
5
the 3(2 -2) design (2,1+,6)
J 18 - Jj'ffi for
2
•
43
about some of the
two-facto~
interaction terms so they can be
eliminated from the true model (3.2), then designs that allow
minimum bias
estimato~s
may exist where they did not exist using the
full second order model.
example, suppose an experiment is
Fo~
conducted using three chemicals (~.~., a, b, and c) each at two
concentrations and it is known that the chemical "e" ca.'1 not react
with chemicals "a" and "b".
The true model is assumed to be
cp
= Jl + a, + b . + c + (ab).. for i, j
and k = 1, 2 , but the
k
ijk.
~J
J
chemist wants to use the simpler approximating function
= Jl
"
""
Yijk
+ a
:''''
+ tOJ
+ c
i
k
basing the estimators on a six point design.
For the full second order model there are no
•
3(~-2) .desiggs that
allow a minimum bias estimator, but by eliminating two of the twofactor interactions minimum bias estimators do exist for the following
irregular fractions:
Design (1,1,2)
Design (1,2,3)
IDR
Fraction
1 2 3
IDR
1
A
+
A
+
C
+
BC
+
AC
+
ABC
+
3
roo
Fraction
1 2 3
+
AB
+
Fraction
+
+
2
Design (2,2,2)
BC
+
..
AC
The SMSE for these three designs are listed in Table 3.4.
of
J
LS
- J
MB
given in Figure
SMSE's increase with the
A a.'1d
•
B.
+
The graphs
3·3 indicate how the differences in
inc~easing
Note that if the
+
AB
size of the interaction between
interaction has much chance of
being non-negigible, then the advantage of using
estimation is much more readily apparent for the
than for either of the other two designs.
mip~mum
(1,2,3)
bias
design
•
44
Table 3.4
The
SMSE
t·he true model:
CPijk = iJ. +
Design
Estimator
(1,2,3)
(1,1,:2 )
(2,2,2)
SMSE
(9!2,i)ab f Aah + 4.500
Least Sq'-'.ares
J
Mi nimum Bi as
J
Least Squares
J
Minimillll Bias
J
Least Squares
f
J- S = (lO/3,l)a:
- Aab
- + 5.500
'
Minimum Bias
J
LS
MB
LS
ME
=
_. O!,i)abfAab + 5.250
. '-=
(lO!3,i)ab fAab + 5.500
=
.(3!,']'-)ab f Aab + 5.625
2
h
X3/~2)abfAab
=
ME
+ 5.625
Because Figure 3.3 does not give an indication of the absolute
•
levels of the SMSE's, the
indi\~dual
values of
each of the three designs in Figure 3.4.
J
are plotted for
L
Adopting the SMSE criterion
to choose the "best" of the three designs, for minimum bias
estimation, only the variance term V
(!.~.,
intercept) of the
J
ME
's
need be considered since all three designs yield the same bias
term B
(:!:..~.,
the
J
ME
smallest variance term
(1,2,3)
lines .• have identical slopes).
V for the
J
ME
's
is
Because the
5.250 for the design
it is the optimal choice which is contrary to the situation
The 5MBE
where ordinary least squares estimation is used.
design (1,2,3) has a smaller variance term
J
LS
for
for the
V than th"
designs (1,1:2) and (2'~J2)) as is also true for mir~mum bias
estimation.
•
(1,2,3)
(2,2,2)
However, the bias term
B
(~. £.,
the slope) for design
is larger than the bias term B for designs
In the presence of a non-negligible
Ai3
(1,1,2)
i:~teTaction,
and
the
•
10
5
•
,,\ or (~z.?l..------:
Desig~ U:.> h ~ - -
____~....""'o..::...;::;.:.------,....---.,....---..,..---...,..--2
3
ab I Aab/
5
-5 ,
Figure 3.3
•
J
LS
- J
MB
of 3(~-2) designs for the true model:
'llijk = 11 + a i
+ b j + 'Ok + (ab)ij
,l
•
46
J
•
1
Figtlre 3.4
•
2
3
5
The SMSE's of 3(~-2) designs for t~e true model:
•
larger bias term (1.35 times as large) would make the desi~~ (1,2,3)
less appealing than either of the other two designs when ordinary
least squares estimation is used .
•
•
•
48
4.
SUMMARY AND CONCLUSIONS
The Karson, Manson and Hader method of minimum bias estimation
has been applied to the classical experimental design model with
factors having a finite number of levels.
For this situation, the
minimum bias estimator gives the minimum bias term B for the
which, in general, is smaller than the bias telw
squares estimator.
SMSE
B using the least
The variance term V for the minimum bias
estimator is greater or equal to the variance term V for the least
squares es timator, and equali ty occurs if and only if the two
estimators are equal.
The minimum bias estimator does become the
least squares estimator on the subclass of designs satisfying the Bax-
•
Draper condition which are contained in the class of all designs
allowing the existence of a minimum bias estimator.
For
2r
factorial experiments, fractional factorial designs
were investigated when a second order model was approximated by a
first order function.
A sufficient condition for designs to allow the
existence of a minimum bias estimator is that the fractional factorial
design must be of resolution
rv.
For regular fractions, resolution
IV is also a necessary condition for the design to satisfy the BoxDraper condition and therefore, allow a minimum bias estimator.
Irregular fractions that are constructed by combining resolution IV,
regular fractions also satisfy the Box-Draper condition.
However,
examples of irregular fractions that are constructed from regular
fractions of resolution less than IV and allow the existence of a
•
minimum bias estimator are given for
6
3(2 - 3 )
irregular fractions
•
50
5.
Adde~~, S.
REFERENCES
1961. Irre~~lar fractions of the 2
experiments. Technometrics 3:479-496.
n
factorial
S. 1972. Recent deYelopments in the design of factorial
experiments. Jour. Amer. Stat. Assoc. 6'1 :103-111.
Adde~'1,
Box, G.E.P. and N.R. Draper. 1959. A basis for the selection of a
response surface design. Jour. Amer·. Stat. Assoc. 54 :622-654.
Cote, C., A.R. Manson, and R.J. Hader. 1973. Mir~ill4m hias
approximation of a general regression model. Jour. Amer. Stat.
Assoc. 68:633-638.
Graybill, F. A.
Volume 1.
1961. An Introduction to Li!"ear Statistical Models,
McGraw-Hill Book Company, Inc., New York, New York.
Graybill, F.A. 1969. Introduction to Matrices With Applications in
Statistics. Wadsworth Publishing Company, Belmont, California.
•
John, P.W.M. 1971. Statistical Design ani Analysis of Experiments.
The Macmillan Company, New Yor'k, New York.
Karson, M.J., A.R. Manson, and R.J. Hader. 1969. Minimum bias
estimation and experimental design for response surfaces.
Technometrics 11:461-475.
Margolin, B.H. 1969a. Results on factorial designs of resolution IV
n
for the 2 : and 2 n3 m.series. Technometrics 11:431-444.
Margolin, B. H. 1969b. Resolution IV factional factorial designs.
Jour. Royal Stat. Soc. B, 31:514-523.
Paku, G. A., A. R. M!L~son, and L. A. Nelson. 1971. Minimum bias
estimation in the mixture problem. Mimeograph Series no. 757,
Institute of Statistics, No:'th Carolina state U!"~7Grsity, Raleigh,
North Carolina.
Parish, R. G. 1969. Minim= bias approximation of models by polynomials of low order. M:.meograph Series no. 627, Ir~sti tute of
Statistics, North Carolina State Uni-.-ersity, Raleigh, North
Carolina.
.
Rao, C.R. 19'71. Generalized Inverse of Matro.ces a::d Its Applications.
Jop_'1 Wiley and Sons, Inc., New York, New York.
•
Rhode, C.A. 1965. Generalized inverses of partitioned matrices.
Jour. Soc. Indust. Appl. Matb.. 13 :1033-1035 .
•
51
Searle, S. R. 1971. Linear Models.
New York, New York.
John Wiley and Sons, Inc.,
Webb, S.R. 1968. Non-orthogonal designs of even resolution.
Technametrics 10:291-299.
•
•
•
53
6 ,_1__~~.el",-Gc..ca:.:;l::.....:~.:.:lo:::m:.:e:.:;n:..:t-=-s
H.. for i = 1, 2 ~d j = 1, 2 , that
m l.J
1
H.. = - E x.. x~. can be shown to be equivalent to
l.J
m h=l -m-aJ
The regional moments
are defined by
Since the matrices
... ,
~J
m
=
and
are
x I • JI
Xj = [~b , ... , ""1IlJ
-
h~l ~~;j
X!1. = [xli
'
-
the matrix product
,
is
Therefore, the regional moments are
·1
HJ.' ' = -m XIX.
1. J
J
6:-2
Va"ia..~ce Term V
The variance term
equal to
•
V
in (2.19),
V = n trace (H T/T).
11
X1T'TX'1
=
r'U~''''"'
L~T/~ll'
is
Because
."
.. ,
o ••
'U~"'-'
l
~/Tx
J
x'
==ml
J
==ml
m
the trace (X1T/TXi)
V
= gm
is
Tr(X1T/TXi)
= E X!lT/Tx. l .
i=l-l.
-1.
Tr(X T/TX/) = g Tr(X!X TIT) , or
11m
.Ll
Hence,
V = n Tr (HllT/T) .
6.3 Theorem 2.2
The minimum bias condition (i.
e.,
G~
only if the follo101ing two condi tions hold:
•
2•
estimable) holds i f a..'1d
•
Proof
By Rhode theorem (1905),
then
+
if
(I:H~~12)Q-Q; [Q-Q
(H~iH12 - Q-C)D-D].
(I:H~~12)Q-Q ;
+ (Q-C -
H~~12)D-C/(Q-Q
- I);Q-C
The minimum bias condition holds if and only
(I:H llH12 ) , or i f and only i f
Q-Q +
- Q-C)D-C' (I - Q-Q) ; l , and Q-C + (HiiH12 - Q-C)D-D ;
(H~~12
H~~12
Or,
if and onl-v if, the following two condi tions hold;
•
6.4 Theorem 2.3
If the minimum bias estimator
£1; TIl*
(-L
-:L) - *
* *)-L_ *
T' ; (Xl Xl A l + HI J:H12 - MI J.M12 D X F
2
*( * *)-L_ *) ,atld
F; ( I - Xl Xl Xl
"1
*
exists, then
where
* •
2 FX 2
D; X
.'
Proof
Since the minimum bias estimator exist;
•
singular matrix.
Ii'
then the value of
(X' *X*)-
D.
B; X
**
2 X2
,
c;
*
Xi X*
2
i(.
*
Q ; X' X
1
a".d
1
is a non-
D; B - C/Q-1c,
can be expressed in terms of
Q,B,C, and
•
55
l.
* + Q-1CD - C'Q_1A ' * - Q- '-CD
1
* - H::1!
-l. (-L *
- *) •
X'
D C'Q A'
+ D X'
1112
1··
2
T' ; Q--X '
1
•
1
2
For the bias matrices in Tables 6.1 and 6.2, the following
notation is used.
DIAG(G , ..• , G ) ; a matrix with the submatrices
l
N
G , ••• , G on the
1
N
diagonal and the remaining elements are zero.
Matrices:
+1
-1
-1
+1
-1
+1
+1
-1
-1
+1
+1
-1
+1
-1
-1
+1
A;
Bl
•
;
-2
9
22
-6
-6
+6
-6
+5 .+5
-5
+5
-5
-6
+5
+6
-5 -5 +5
-6
-6
+6
-0
~
+5
';-5
-5
-6
+5
+5
-5
+6
-5 -5 +5
36
B2
2
; 9
•
56
~
B3
C
-2
;
9
;
-2
9
Constants:
•
•
Table 6.1
Design
38
-6
-6
+6
-6
+5
+5
-5
B4
;
-2
9
24
-6
-6
-I{)
-6
+5
+5
-5
-6
+5
+5
-5
-6
+5
+5
-5
-I{)
-5
-5
+5
-I{)
-5
-5
+5
"1
+1
+1
-1
+3
-3
-3
+3
0
0
0
0
-1
+1
+1
-1
0
0
0
0
-1
+1
+1
-1
0
0
0
0
+1
-1
-1
+1
1
d-- 2
2 '
10 ,
a; -
9
b - -
- 9 '
D ; -1
9
and
e
1
2
Bias matrices Q for the minimum bias estimator
Minimum Bias Matrix
3 (24-2)
DIAG(A,A,A,A,A,A)
3 (2 5- 2 )
2 DIAG(A,A,A,A,A,A,A,A,A,A)
3 (26 -2)
4 DIAG(A,A,A,A,A,A,A,A,A,A,A,A,A,A,A)
3(i-3 )
4 DIAG(A,A,A,A,A,A,A,A,A,A,A,A,A,A,A)
•
57
Tahle 6.g
Bias matrices Q for the least squares estimator (the word
lengths of the IDR wr~ch identify the designs are listed
in parentheses under each matrix)
bA
aA
aA
bA
aA
aA
A
aA bA
eA
A
aA
bA
bA
A
bA
aA
aA
aA
aA
bA
(2,2,4)
(1,3,4)
•
bA
aA
A
bA aA
bA
bA
aA
bA
(2,3,3)
(1,4,5) - 2 DIAG(aA,aA,aA,aA,A,A,A,A,A,A)
(2,3,5) - 2 DIAG(aA,A,A,A,A,A,A,aA,aA,aA)
(2,4,4) - 2 DIAG(aA,A,A,A,A,A,A,A,A,A)
bA
aA
A
bA
aA
A
aA
bA
2
bA
aA
bA
aA
A
aA
bA
A
, 2
aA
A
A
A
bA
bA
aA
A
A
A
A
(1,3,4)
•
(2,2,4)
•
58
Table 6.2
(Continued)
aA
aA
bA
aA
A
bA
2
A
bA
A
aA
bA
aA
, 2
aA
A
A
A
cA
aA
A
aA
A
-
•
bA
L-
(1,4,5) - 4 DIAG(aA,aA,aA,aA,aA,A,A,A,A,A,A,A,A,A,A)
(1,5,6) - 4 DIAG(aA,aA,aA,aA,aA,A,A,A,A,A,A,A,A,A,A)
(2,3,5) - 4 DIAG(aA,A,A,A,A,A,A,A,A,aA,aA,A,aA,A,A)
(2,4,4) - 4 DIAG(aA,A,A,A,A,A,A,A,A,aA,aA,A,aA,A,A)
(2,4,6) - 4 DIAG(aA,A,A,A,A,A,A,A,A,A,A,A,A,A,A)
(2,5,5) - 4 DIAG(aA,A,A,A,A,A,A,A,A,A,A,A,A,A,A)
(3,3,6) - 4 DIAG(aA,aA,A,A,A,aA,A,A,A,A,A,A,aA,aA,aA)
(3,4,5) - 4 DIAG(aA,aA,A,A,A,aA,A,A,A,A,A,A,A,A,A)
(4,4,4) - 4. DIAG(A,A,A,A,A,A,A,A,A,A,A,A,A,A,A)
•
aA
-
•
59
Table 6.2
(Continued)
bA
aA
bA
aA
bA
aA
bA
aA
bA
aA
A
aA
A
aA
bA
bA
c
bA. .
aA
A
4
aA
bA
aA
aA
A
A
cA
, u
A
bA
aA
A
A
A
A
A
A
A
A
•
A
A
(1,3,4)
(2,3,3)
bA
aA
aA
A
A
A
aA
;
A
, 4
A
A
aA
aA
A
•
bA
aA
A
A
(3,3,4)
•
60
Table 6.2
(Continued)
3Ci-3 ) Designs
Bl C C
A
A
C'
C'
•
4
dC
A
bA
aA
c'
A
bA
aA
A
A
A
, 4
aA
A
A
aA
D'
aA
A
A
dC'
D
A
A
A
cA
(2,2,3,3,4,5,5)
•
B2
A
A
C'
D'
•
CCD
aA
A
A
A
A
A
(2,2,4,4,4,4,4
•
61
Table 6.2
(Continued)
r-
aA
bA
3C'
4
C'
A
A
bA
-C'bA
bA
A
aA
bA
aA
C'
cA
aA
bA
aA
C'
A
aA
dC'
A
cA
bA
A
•
A
, 4
dC'
•
bA
aA
aA
bA
bA
aA
aA
bA
A
-
-
bA
bA
aA
C'
C C dC
-C
-C'aA
bA
aA
3C'
•
'B4
dC
bA
aA
•
-
3C C dC
3C C
B3
dC'
A
eA
(2,3,3,3,3,4,6)
aA
-
A
(2,3,3,3,4.. 4,5)
-
© Copyright 2026 Paperzz