\
ESTIMATORS BASED ON A CLOSENESS CRITERION
OVER·A llESTRICTED PARAMETER SPACE
FOR THE GENIRAL LINEAR MODEL OF FULL RANK
BY
David C. Korts
This work was supported by NIH Training Grant No.
T01-GM-00038 from the National Institute of General
Medical Sciences.
Institute of statistics
Mimeograph Series No. 741
Raleigh - 1971
iv
TABLE OF CONTENTS
Page
.....
·.·....
LIST OF FIGURES . . . . . .
. . · . -. . . . . . . . . .
1. INTRODUCTION
......
· . . · . . . . . . .. .
1.1 Motivation
• • . • • •
·.........
1.2 First Procedure ••
••••
· . . . . . .. ...
1.3 Second Procedure • • • • • •
1.4 Full Model • • • • • • •
·........
1.5 Development • • •
·....
·.
2
2. ESTIMATOR FOR THE SINGLE S. FIXED a CASE · . . . . . . . .
2.1 General Structure •
.••
2.2 First Maxi-min Solution •
·......
LIST OF TABLES •
3.
ESTIMATOR FOR THE SINGLE S.
3.1
3.2
3.3
4.
0
222
e[oO.ol] CASE
5.
· . . . ·· .. . .
GENERALIZATION TO THE CASE OF P S'S · '. . . . . . . . . . .
4.1 General Structure • • • • • • • • • • • • •
· . .. .. ..
4.2 Extension: Fixed 2 Case • • • • • • • • •
4.3 Extension: a2e [o~.ai] Case ••
• ••
·....
AN ALTERNATE ESTIMATOR . . .
...........
5.1 General Structure • • •
..
·........
5.2 Form of the Estimator: KS • •
. . . . .. .. ..
5.3 Properties of T(K.a 2) for K Fixed •
·...
5.4 Solution for K
•••••
SOME PROPERTIES OF THE ESTIMATORS · . . . . . . . . . . . .
6.1 Dependence on Ki • • • • • •
·....
6.2 Bounds on Ki
·· ..
6.3 Bias
••••••
• • · . . . . . . .
6.4 Variances and Covariances •
·.., ...·....
6.5 On Limiting Properties
·.
A NUMERICAL EXAMPLE . . . . . ... .
7.1 General Problem •
·..
•
7.2 Fixed a2 Case • • . . . . . .
·
.
.
7.3 a2e[a~.ai] Case.
...·. ·.
. . ·· ..
.'
'A'
6.
•
7.
•
•
t,
•
viii
1
1
2
6
7
7
9
9
10
16
General Structure • • • • • • •
Minimization of ~ with Respect to
Second Maxi-min Solution
0
vi
•
16
16
18
25
25
27
29
31
31
31
33
36
40
40
40
41
41
41
43
43
44
52
v
TABLE OF CONTENTS (Continued)
Pag~
6.
SUMMARY, CONCLUSIONS, AND SUGGESTIONS FOR FUTURE RESEARCH
8.1
8.2
8.3
SUmDlary • • • • • • • • • • • • •
Conclusions • • • • • • • • . • •
Suggestions for Future Resea~ch •
8.3.1
8.3.2
8.3.3
·...........
·.......
·..
...
X'X Matrix of Less than Full Rank
Transformation of Parameters •.
Alternate Decision Procedure •
9. LIST OF REFERENCES •
10. APPENDICES • • • • •
·.
72
·...·....·.
· .· ..
72
74
75
75
75
76
77
·..
10.1 Appendix A - Computer Program to Calculate K for 0 2
Fixed
•
10.2 Appendix B - Computer Program to Calculate K for Type A
Estimator
•
• •
•
10.3 Appendix C - Computer Program to Calculate K for Type B
Estimator
················ .
· ······ ······ .· .
················ .··.
78
79
83
88
vi
LIST OF TABLES
Page
7.1. Values of
~
and
correspo~ding
probability increases:
Pr[M-€~Ke~M+€]-Pr[M-€~e~M+€] for fixed a 2 case -a2 = . 0001 . . . . . . . . . . . . . . . . . . . .
7.2. Values of ~ and correspo~ding probability increases:
Pr[M-€~Ke~M+€]-Pr[M-€~e~M+€] for fixed a 2 case -0'2 = .001
. . • . . . • • • . . . . . . . . . • .
7.3. Values of K and corresponding probability increases:
Pr[M-€~Ka~M+€]-Pr[M-€~e~M+€] for fixed a 2 case -0
2 = .01 . , . . . . . . . . . . . . . . . . . . .
·...
45
·...
46
·...
47
7.4. Values of K and corresponding probability increases:
Pr[M-€~KS~M+€]-Pr[M-€~S~M+€] for fixed a 2 case
a 2 = 0.1· • • • • • • • • • • • • • • • • • • • • •
7.5. Values of
•
~
and
correspo~ding
probability increases:
Pr[M-€~Ke~M+€]-Pr[M-€~e~M+€] for fixed a 2 case -a 2 = .05 • • • • • • • • • • • • • • • • • • • • •
7.6. Values of
~
and
48
correspo~dingprobability
,
...
increases:
Pr[M-€~Ke~M+€]-Pr[M-€~e~M+dfor fixed a 2 case -0 2 = 1.0 . . . . . . . . . . . . . . . . . . . . .
7.7. Values ofK and corresponding probability increases:
Pr[M-€~Ke~M+€]-Pr[M-€~S~M+€] for fixed a2 case -02 =
10.0
... . . . . . . . . . . . . . . . . . .
~
7.8. Percent increase in
M=lO and a 2 fixed
Pr[M-€~K8~+cl
49
A
over Pr[M-€~e~M+€] for
• • • • • • • • • • • • • • • •
50
..
..
51
52
7.9. Values of K, probability deviations, and BLUE probabilities
for Type A estimator -- a 2 e;[.0001, 10.0] • • • • • • • ••
54
7.10. Values of K, probability deviations, and BLUE probabilities
for Type A estimator -- a 2 €[.001, 1.0] • • • • • • • • ••
55
7.11. Values of K, probability deviations, and BLUE probabilities
for Type A estimator -- a 2 €[.Ol, .1] • • • • • • • • . ••
56
7.12. Values of K, probability deviations, and BLUE probabilities
for Type A estimator -- a 2 €[.0001, .01] • • • • • • • • •
57
7.13. Values of K, probability deviations, and BLUE probabilities
for Type A estimator -- a 2 e;[.l, 10] • • • • • • • • • • •
58
7.14. Values of K, probability deviations, and BLUE probabilities
for Type A estimator -- a2 €[.03, .07] • • • • • • • • • •
59
e
vii
LIST OF TABLES (Continued)
Page
7.15.
7.16.
7.17.
7.18.
7.19.
7.20.
7.21.
7.22.
7.23.
7.24.
7.25.
7.26.
Values of K, probability deviations, and BLUE
probabilities for Type A estimator - - O' 2 e;[1.0, 10.0]
Values of K, probability deviations, and BLUE
probabilities for Type A estimator -- 0'2 e;[ 10, 20]
··
60
.•·
61
Values of K, probability deviations, and BLUE
probabilities for Type B estimator - - O' 2 e;[ .0001, 10.0]
Values of K, probability deviations, and BLUE
probabili ties for Type B estimator - - O' 2€ [ .001, 1.0]
·
62
··
63
Values of K, probability deviations, and BLUE
probabilities for Type B estimator - - O' 2 €[.01, 0.1]
64
Values of K, probability deviations, and BLUE
probabilities for Type B estimator - - O' 2 e;[ .0001, .01]
65
Values of K, probability deviations, and BLUE
probabilities for Type B estimator -- O' 2€ [.1, 10]
66
Values of K, probability deviations, and BLUE
probabilities for Type B estimator - - O' 2e;[. 03, .07]
67
Values of K, probability deviations, and BLUE
probabilities for Type B estimator - - O' 2€ [1, 10]
68
··
..··
Values of K, probability deviations, and BLUE
probabilities for Type B estimator __ O' 2€ [10, 20]
Percent increase in probability over minimum BLUE
probability for Type A estimator -- M = 10 • • •
Percent increase in probability over minimum BLUE
probability for Type B estimator -- M = 10 • • •
69
.. . .
71
71
viii
LIST OF FIGURES
Page
2.1.
Partition of K-H space into the four regions A, B, C, and
D • • •
3.1.
5.1.
. ........
12
The two possible relationships between ~(M,cr6,K) and
~ (M, cry ,K) • • • • • • • • • • • • • • • • • • • •
24
Typical representation of T(K,cr 2) for K > 1 - £ and fixed •
36
M
1.
INTRODUCTION
H6tivation
1~1
The standard approach to estimating the parameter vector
~
in the
general linear model is based on a criterion of minimum variance unbiasedness.
That is, the estimator chosen is the one which gives rise
to the smallest diagonal terms in its variance-covariance matrix,
within the class of all linear unbiased estimators.
While this require-
ment of unbiasedness is. in a general sense, quite reasonable. there
can be certain experimental situations in which it could be less
desirable than a criterion of :lcloseness H of the estimator to the
parameter being estimated.
That is, an experimenter might be less interested in unbiasedness
than in obtaining an estimate within a certain distance of the true
parameter.
In particular, given the simple (one 8) linear model:
with
E
i
tv
2
N(O,cr )
Ei and E independent for j
j
+ i.
The experimenter might feel that if he could get an estimate
of 8 which was within ±
£
of the true value. he would be satisfied.
Any estimate in the interval [8-£, 8+£J is all the same to him, but
he strongly desires one in this interval.
While we can never
guarantee that an estimate be contained in this interval, we can, for
fixed
£,
consider the probability:
2
for different linear estimators B of 8, and use it as our measure of
"closeness" of an estimator to the parameter estimated: the greater the
probability, the greater the "closeness".
Hence if one estimator is
found to be closer to 8 than another estimator, the former will be
referred to as "better" than the latter.
Since we are considering linear estimators only we have:
B
= Cly
where Q is an n by 1 vector of constants, and Y is the n by 1 vector
of observations.
Then the distribution of B will be normal with mean
I
2
th component
QX
8, and variance Q IC
_0 • where X is the vector whose i
is X.•
~
Symbolically:
We will proceed by first considering the single 8 case, for two
different formulations of the problem, and finally extending the results
to the general p 8 case.
1.2
First Procedure
Webster [4] considered this criterion, and referred to it as an
A
"e:-neighbot,lrhood" criterion.
He considers ).11 better than ).12
estimating V if, for fixed e::
Webster compares two estimators, Xl and X where:
2
~or
o>
0
wit h correlation p,
He determines that X is a better "e;-neighbourhood" estimator of
2
j.l
than
Xl when:
In other words, knowledge of the ratio of the variances, as well as the
difference between the means, is necessary (assuming e; as given), before
X2 can be said to be better than Xl'
This requires a good deal of
knowledge about the parameters at the outset,
Webster's work points up the difficulty of working with biased
estimators: their properties tend to depend on the parameters more than
do those of unbiased estimators,
Pr[B - e; 1. B 1. B + e:]
For example defining
2
~(B,o
,B) =
where e: is fixed, we could have two dis tinct
estimators of !h B and B , which have the property that:
l
2
For all estimators B except B
= Bl ,
for all estimators B except B
= BZ'
at the point (Bl,Oi)
while at the same time:
That is, B is the best estimator
l
in the parameter space, while B is best at the
Z
2
point (I3 Z'OZ)'
Dantzig (1) alluded to this problem by proving that there
exist~
no test of Student's hypothesis which has a power function independent
4
2
of o.
This relates to our discussion through the correspondence between
confidence intervals and (two-sided) tests, where our estimator can be
thought of as defining the center of a confidence interval for
width 2g.
e of
Following Dantzig's work, Stein (3) developed a two sample
sequential procedure for establishing fixed width confidence intervals
independent of
0
2
, at prescribed a levels.
Since we wish our closeness measure to be as large as possible,
for
fixed, whatever the "true" values of e and
g
0
2
we might employ
,
a decision rule which picks as estimator the B which maximizes the
infimum of:
over the parameter space.
g ~
infimum of Pr[e - ~ <
S < +
That is, for each candidate B we find the
B~ S +
g)
over the parameter space: 0
< 0
2
<
+
~,
and then we choose the B which produces the largest of
~;
these infimums.
This procedure, however, will not work, because each
of the above mentioned infimums will be zero.
For any biased Band
0
2
fixed, it will be seen from the proof of Lemma 2.1 that the probability
Ie I -+
decreases to zero as
e-
g
~ E [B) ~
while if E[B]
zero as
0
2
-+
<
e+
~..
On the other hand, for S fixed, if
then the probability will go to zero as
g
e -
+
g
or E[B]
>
e +
g
0
2
-+~,
the probability will approach
O.
Hence the parameter space must be restricted to make the criterion
nontrivial.
use are:
M,a
~
Restrictions which suggest themselves, and which we shall
Ie I ~ M and
, and
0i
either
are known.
0
2
fixed and known or
0
2
g
2
2
[a 0' 0 ]
1
where
5
Under these restrictions on the parameter space we can now state
the First Decision Procedure as:
Choose as an estimator of S any B
which maximizes the minimum of the closeness probability over the
restricted parameter space.
That the estimator B so determined does
exist, and is unique, will be seen by actually finding, in each case, a
unique B which satisfies this formulation.
It should be pointed out that from the point of view of the
experimenter, the above restrictions on the parameter space can, on
some occasions, be quite reasonable.
If parameters have occurred
repeatedly in similar contexts, this may occasionally give rise to a
general knowledge, or "feeling" of their magnitude.
In fact, some
experimenters profess that in some of their work, they have complete
2
knowledge of cr.
Aside from this, sometimes mathematical and/or
physical constraints exist which restrict the parameter space.
If,
however, no such information or conditions exist, the estimators
developed here can not be used.
Finally, a completely equivalent procedure would have been to
2
define the loss function L(B, S,cr ) as:
L(B,
2
S,a )
=
{OifS-E~B.$.S+E
1 otherwise
where
~
is understood to be fixed.
Then the risk function, R(B,e,cr 2 ) is:
and the well known minimax decision procedure applied to this risk
function gives exactly our First Procedure.
6
1.3
Second Procedure
One property of the estimator derived from the first formulation
~s
that this estimator is not necessarily uniformly closer to S than the
BLUE (best linear unbiased estimator) is over the restricted parameter
space.
A second decision procedure is presented in this section which
is without this potential drawback.
This procedure is to choose as the estimator of S the B which
maximizes the minimum of:
Pr[S -
€ ~ B~
S+
€]
-
Pr[S -
€ ~
over the restricted parameter space.
the BLUE.
A
S
<
S + €]
Here, and henceforth, S will mean
As with the previous estimator, the existence and uniqueness
of a B which satisfies this criterion will be seen by actually deriving
an appropriate B, and seeing that it is unique.
Here, as before, a formal decision procedure could have been used
with Loss Function:
if
where
a-
A
€ .~
a< a+
€
otherwise
={1
if
a - € ~ B ~ a+
o otherwise
giving risk function R(B,
a,
2
a ):
€
7
E: < a < S+'E:] - Pr[S - E:
~
B ~ a + €]
The minimax solution of this risk function then is exactly our
Second Decision Procedure.
1.4 Full Model
To this point we have considered only the simple linear model with
one
e.
For the general linear model of full rank:
y
where
= X! + E
~
where .....
E
~
2
N(O,Io
)
.....
is a p dimensional vector of parameters, we will
separately each component of t4e
results.
~
estim~te
vector, and then consolidate the
That such an approach is reasonable is seen from the closeness
criterion itself: it is concerned with individual a's.
That is, if we
have p different a's to estimate, we will get from the experimenter p
values of
€,
such that we have the p closeness measures:
Each individual E is obtained from the experimenter, and is based on
i
his attitudes toward a
i
only.
One additional restriction must be made on the estimators in this
general case: that the expected value of Bi not depend on Sj for j
~
i.
This restriction should appear intuitively reasonable.
1.5
Development
The development of the remaining chapters is as follows: the single
2
a, fixed 0 case is considered in Chapter 2. Chapter 3 contains an
8
extension to the
procedure.
a~€!a~,
ail
case for the estimator of the first
In Chapter 4 this result is generalized to the full
p-dimensional model.
The complete development of the second procedure
estimator through the general model is considered in Chapter 5.
6 presents properties of the derived estimators.
Chapter
Chapter 7 presents a
numerical example, and Chapter 8 contains the summary, conclusions, and
suggestions for future research.
9
2.
ESTIMATOR FOR THE SINGLE S, FIXED 0
2.1
2
CASE
General Structure
Consider the linear model:
Y
where
= XS +
!
E
is an n by 1 vector of observations
X is an n by
vector of fixed values
~
! is an n by 1 vector of random variables
such that
E
'V
N (Q,
2
10 )
We are assuming that 0
that
IS I
M.
<
2
= Q'Y =
and where
is known, and that a value M is known such
Define iP as:
iP(S,o ,Q) = Pr[S where B
2
€ ~
B~ S +
€]
n
[~, c_, , •• , C ,]
-1
L
n
Y
1
Y2
=i=l
L
CY
i'i
and X are though of as being constants, and hence not
2
listed as arguments of CPt We have, however, listed 0 as an argument
of
~
€
although in this case it also is fixed, since in future chapters
2
0 will not be fixed.
Our decision procedure is to choose as the estimator of S the B,
or ~quiva1ent1y the
£, which maximizes the minimum of cp(S,02,Q) over
the restricted parameter space.
The restricted parameter space is the
set of ordered pairs: (02,S) where the first quantity assumes only one
10
value, and where the second quantity may assume any value from -M
to Minclusive.
2.2
First Maxi-min Solution
The desired estimator is determined from a lemma and two theorems.
The lemma establishes that for any fixed.£, ~(8~a2,C) will assume a
minimum when
IBI = M.
The first theorem then shows that the.£ which
2
maximizes HM,a ,.£) must be of the form: k(.!'X)-lX, or equivalently:
B
= kS
where S is the BLUE of 8, for some constant k.
Finally the
second theorem gives the value of k which maximizes ~(M,a2,k(X'X)-lX).
Lemma 2.1:
For 0'2 fixed, 181 ~ M, and for any fixed.£, ~(8,a2,C)
is minimized when 8
Set K
= C'X,
and H
=
IMI.
= .£',£
Now:
= Pr [8-e:-K8 < Z < 8+e:-K8]
lHa 2
where Z
~
-
-
~
N(O,l).
Note that H -8,0'2 ,C)
!=
Hs ,i ,C) and that the width of the
Z interval is :
2e:
IHa2w hi ch does not depend on f3.
§~l-K)
v'Ha 2
f3
•
= -, MI.
The midpoint of the interval is
Hence as 181 increases ~ decreases and ~ is minimized when
QED
11
It will be seen that the estimator which maximizes
identical to the one which maximizes
we consider only S
~
for S = -M.
~
for S
M is
~
Hence in the following
= M.
We now must determine what vector:
f
will maximize
~(M,o
2
,f).
The
most straightforward procedure might be to use the Lagrangian method of
maximization to determine C ' i = 1,2, ••• , n. This was, in fact,
i
the way the initial solution was obtained, but a different approach is
presented here.
~
Our method is to first show that the B which maximizes
must be of the form kS, where k e [0,1]; and then to use the Lagrangian
method to determine the exact value of k.
The purpose for this perhaps
unusual approach is that it leads naturally into the approach necessary
2
2
2
for the more general o-e[oO' 01] case.
Theorem 2.1: The f which maximizes ~(M,02,f)
for M, e, and
02
= Pr[M-e
~ B ~ M + e]
fixed, is of the form:
A
or
B
= kS
for some ke[O,l].
Proof:
If M ~ e then the interval [M-e,M+E] contains zero, and
° gives Pr[M-e B M+e] 1 which is clearly a
Since B = ° can be trivially written in the form:
~
the estimator B =
maximum.
for k
= 0,
=
we can limit our consideration to e
Also, since
•
~
~
<
M.
is a function of f only through K
= C'X
we will proceed to seek the point (K,H) which maximizes
~.
and H
= C'C
- -'
It will
be seen that the determined point will correspond to a unique
£.
The
approach will be to consider the space of all possible points: (K,H),
12
referred to henceforth as K-H space, and to eliminate from consideration
any point which gives rise to a smaller value of
point.
~
than does some other
The boundaries of this space are given by the Cauchy-Schwartz
inequality:
Consider the partition of K-H space given in Figure 2.1.
K
Figure 2.1. Partition of K-H space into the four regions A, B, C, andD
13
2
A is the set of points (K,H) such that H = K (X'!)
B
is the set of points (K,H) such that H
<
-1 for KE;[O,l].
(X'X):""l and K
<
/if/XiX
C is the set of points (K,H) such that H > (X'X)-l and K :/: 1.
D
is the set of points (K,H) such that H > (!,!)-l and K = 1.
Now in Lemma 2.1 we saw that ifC :/:.Q. ~(M,a2,C) could be written
as the probability of a standardized normal variable being within an
. 2e;
M(l-K)
interval of width: 1,2 and center: ~
Ha
-1 Ha
Hence for any fixed 1I >: (X'X) , any point (K,H) in region C can
be eliminated in favor of the point (l,H) in region D.
for any fixed H ~ (!'X)
-1
Similarly,
, any point (K,H) in region B can be eliminated
in favor of the point (IH/X'X, H) in region A.
Finally, any point
in region D can clearly be eliminated in favor of the point (l,(X'X)-l)
in A.
We have thus reduced the possible points to points on the boundary
H = K2 Qf'X) -1 for which Ke; [0,1].
However, we know that the Cauchy..,.
Schwartz inequality becomes an equality if and only if one vector is a
multiple of the other.
So C
=
a! for some a.
Hence
with k
= a(X'X)
But
KM
= E[B] = kE[8] = kM
so we must have that k
= K.
Hence the C which maximizes ~(M,a2,c) can be written in the form:
C'
= KQf'X)-l X:' or equivalently, B = KS f or some value of Ke;[O,l].
QED
14
In order to determine the exact value of K which maximizes
~(M,a2,KX(X'X)-1) we have the following theorem:
A
Theorem 2.2:
If
2
-
S '" N(H,da ),0
<
E < :H then
A
<P
·-Fr [M'-E .i K/3 .i M + E] is maximized when:
d.... 2 In [__
M+ E]
where 0 = ~
K = -1 + /1+21:IJ
o
~
~E
and 0 < K < 1.
Proof:
- M+E
_-1:L
KvW
<P
•
= pr[M-E-KM
<
Z
K/dc;! -
<
-
M+E-KM]
K/do 2
M-E
vW
Ii
= ~xp[- "2]dw1
--co
l
_-1:L
K~ ~
/2;exp [- ~ ]dw
-co
So
1
-d<P = -{exp[dK
Ii;
1 M+E
2
M+E
1 M-E
2
M-E
....-(--- --M) ] [)] - exp[- - - ( - -M) ][-( ~]}
2da 2 K
K2~
2da 2 K
K2/da2-
Setting this equal to zero and solving for K we get:
exp[- ~(l -1)] =:H-E
-H+E
Kdi K
which gives:
where 0
K
2
d ....
= _IJ_
:HE
=
-1
+ v'l+26
0
M+
In [......£]
M-E
We choose the plus sign since otherwise K would be negative.
That
this value of K must correspond to a maximum in [0,1] is evident since
for K > 1
<P
is less than
<P
at K= 1, and also
<P
=0
when K
= O.
QED
15
Note that if we had chosen S
= -M
as the value of S which
minimi~es
4'(S,(12,C) for'.£ fixed, we would have gotten exactly the same results
1
-M+e
as above since -M In[_M_e)
1 1 M+e:
=M
n[M_e)
•
Hence the maxi-min solution for the single S, fixed (12 case is:
B
= KS
where
K
= -1 + Ii+"2"8
o
2
o=~
Me:
In[M+e:)
M-e:
16
222
ESTIMATOR FOR THE SINGLE 13, a do 0'
3.
3.1
0
1 ] CASE
General Structure
Suppose we have the linear model:
Y is an n by 1 vector of observations
where:
X is an n by 1 vector of fixed values
E is an n by 1 vector of random variables
stich that.§.
tV
N(Q, 10:2).
Here 13 and
02
are unknown parameters, but are known to lie in the
bounded intervals:
Ysl
< M
As before, our purpose is to estimate 13, and our criterion is to
~
choose as an estimator of
the B which maximizes the minimum of
H 13, a -2 ,C) = Pr [S-e; .::. B .::. f:3+e:]
over the restricted parameter space:
3.2
Since
~
lsi .::. M,
2
2
2
a e: [a 0' all.
Minimization of ~ with Respect to 13 and
depends on Q only through K and H, as
2.1, we henceforth write
~
02
defined in Lemma
as HS,a 2 ,K,H).
In Lemma 2.1 we saw that, for a 2 ,K,and H fixed, ~,assumed a
minimum when
in [0 2 , of]
o
1
S=M.
Hence we are concerned with what value of
02
will minimize ~(M,a2,K,H) for K and H (and of course, M)
fixed.
The following two lemmas prove that for any fixfad Kand H.
will be minimized with respect to 0 2 at either o~.
cri
<I>
or at both
simultaneously.
Lemma 3.1:
For K t[l - ~. 1 +
i)
and K. H. and M fixed;
<I>(M,02,K,H) will assume a minimum with respect to 0 2 , where 02e:..[0~,
at eit1:ier o~,
or at both.
-s
e:
The proof will consist of showing that for Ki [1 -- M' 1 + M]'
Proof:
<I>
is a unimodal function of 0 2 having a maximum at:
. (l-K)M+e:
vHoZ
1
(l-K)M-e:
I H0 1
1
2
I21T exp [- ~.] dz
I21T exp [-
_co,
2
z
T]dz
-co
Taking derivatives with respect to 0 2 and setting them equal tp
zero we get:
or
=
which gives
2
0"
=
of]
2(l-K)Me:
H In[(l-K)M+e:]
(l-K)M-e:
(l-K)M-e:
(l-K)M+e:
18
as the only solution.
But since
~ ~
0t and since
~
approaches zero as
02 approaches either zero or infinitYt our above solution must correspond
22
Hence for a 2 restricted to the interval: [Oot
0l]t
to a maximum.
be smallest for either 0 2
Lemma 3.2:
For Kdl
= 002
-
-Me: t
or 0 2
P
will
or at both.
i] t Kt
1 +
~
QED
and M fixed t HM t0 2 tKtH:) ;is monotone
decreasing with respect to the product: H0 2 •
Proof:
~(Mt02tKtH)
= Pr[M-e:
~ B ~ M + e:]
with B ~ N(KM tH02 ).
But Ke[l - ~tl + ~] if and only if E[B]e:[M-e:tM+e:]. Hence ~ is monotone
4ecreasing with respect to·-the product H02.
QEC
As a result of this lemma t for Ke:[l - £.
Mt 1 + ~] t H fixed t
2 when .0 2 = 0 2 •
be minimized with respect to 0 2 for 02~[02
ot cr 1 ]
1
3.3
~
Pr[M-e:
will
Second Maxi-min Solution
Proceeding as in the fixed 0 2
Theorem 3.1:
~
case t we obtain:
The B which maximizes the minimum of ~(Mt02tK,H)
B ~ M+e:,J for
02e:[02ot 012 ]
=
can be written in the form:
for some ke:[Otl]
A
where S is the BLUE of S.
Proof:
Consider the partition of the K-H space given in Figure 2.1
of Chapter 2.
As in Theorem 2.1 we will proceed to eliminate from
consideration any point (KtH) which produces a minimum of
smaller than the minimum of
~
~
produced by some other point.
From Lemma 3.2 we have that for any point in region Dt
minimized at oi.
<
~
Thus consider the point (KtH) in region C.
at this point is also minimized at crf
~(Mtcrit KtH)
that is
is
If
then t by Theorem 2.1:
~(Mtcrit lt H) and this point can be eliminated from
~
19
consideration.
~(M,o~, K,H)
<
If, on the other hand,
~(M,oi, K,H)
<
~
is minimized at
0
2
0_ then:
~(M,oi, 1,H) so in either case a point
in C can always be eliminated in favor of one in D.
Similarly, if for any point (K,H) in B and the point
in A, i f cP is minimized for the same value of
0
(/:H/x'x, H)
2 at both points, then
by Theorem 2.1 the point in B can be eliminated in favor of the one in
A.
Otherwise we have either
or
depending on whether the point in A is minimized for
respectively.
0
2
Thus the points in B can all be eliminated in favor of
those in A.
Finally, since for any point in D, iP is minimized at
0
2
1,
we have
by Theorem 2.1 that any point in D can be eliminated in favor of the
point (l,(X'X)-l) in A.
We have, therefore, successfully restricted our consideration of
the whole K-H space to the boundary line:
H
= K2 (X'X) -1
for K€[O,l].
As it was pointed out in Theorem 2.1, this implies that the estimator
must be of the form:
B = KS
for K€[O,l]
As a result of this theorem, we can eliminate H as an argument
of CP, giving:
QED
20
A
where
d
2
•
f3 '\"
N(f3 t dO' )
= (X'X)-l
Since for any
Ke:[Ot 1 ]
2
2
~
is minimized at either 0'0
2
~(MtO'O' K)
to examine tJ"le two functions:
or 0'1 ' we proceed
2
and HM'O'l' K) as functions
of K for Kt;[O ,1].
This maximizing value was:
K
Tha~
= -1+ v'I+U
-1+
o
~iS
with 0
= 0'2 d
ME;
in[M+E;]
M-e:
a monotone function of 0 is established in the
following lemma:
Lemma 3.3:
Proof:
-1
+oII+'28
.
is a strictly decreasing
func~ion
of <S.
Taking the derivative of the function with respect to 0
yields
d [-1
+ v'i:+2I]
o ........;.,.- -1 - 0 + ~
do
02/~+20
.....,..--~_
But 0 ~ ~(1 - 11+20)2
=1
+ 0 .,. 11+20
with equality only when 0-0, but 0>0.
One further lemma is necessary.
Lemma 3.4:
Set y
then ~(M,O'2 ,y) is a strictly decreasing function of dO" 2
A
Var [13].
= (!'X) -1 O'~~ =
21
Proof:
Let R
= /1+20 = 1
= (R+1~(R-1)
then 0
do
dR
do
= R~1
R.
=
lh. = Sx:. • dR = _
So
and y
+ YOj
dR
do
2 2
(R+1) .
. -1
R
Let D = dcr 2
= DW,
and 0
Then ~
dD
iY.
=
So dD
say.
= W = i = (R+1)(R-1)
D
2D
.£x. • .M = _ (R-1)
do
dD
DR(R+1)
and
1
1
d ("y' ~D)
d( ~). iY. _ (R+1) (R-1)
1-lJ=
Y",D
dD /2
dD
dy
4D 3 R
Now
M+e: __M_
yD 1 / 2 D1 /2
~
=
1
z2
--- exp {- --} dz
~
2-
M-e;
M
---
YD1/2
D1 / 2
Using the above derivattives we have:
/2;
a! =
dD
[(R+1~ ~R-1) (M+e;) + M ] exp {_ 1 [ Mt~2 _ 1M
/2] 2}
2 yD
4D3/2 R
2D3/2
D
- [(R+1)(R-1)(M-e:) + M ] exp{- ~[ M-;2 - D1M/ ]2}
2
4D3/2 R
2D3/2
YDl
or
22
2
ili ddD~·=
1
ex~{ - _2D1 L(_yl -1) 2!'12 + E2 )} G
. - 4D3/2 R
y
(1
where G = [(R+1) (R-1) (M+E) + 2MR] exp {_
X
-l)ME
} -
yD
1
(Y -l)ME .
[(R+1)(R-1)(M-E) + 2MR] exp{
yD
}•
Now in the proof of Lemma 2.3 we had:
exp {_ 2e;M (1 -1)}
M-E
yD Y
= M+e;
so exp {-
(1y
-1)ME
yD
}
=
(1 -1)Me;
1
]2.
and exp { y YD· } = [M+e;
M-e; .
Noting that the coefficient of G in ~ is positive, we have:
~~ =
C {[ (R+1)
where C
>
(R-1) (M+€:) +
1
.1
2MR][:~:] 2_ [(R+1)(R-1)(M-E) + 2MR)[:~€f2}
0
or
which is clearly negative.
Hence ~. is a strictly decreasing function of D = d0 2 •
QED
Denoting the value of K which maximizes
i = 0,1 by Yi' we have
~(M,
2
2
ai' K) for 0i fixed
23
i
= 0,1
Using this notation in the following theorem we establish the
maxi-min solution for this Chapter.
Theorem 3.2:
The value of K that maximizes the minimum of
~(M, a2 , K) for a 2€[a6, ail is:
where k is the unique value in the open interval (Yl,Y O) for which
Proof:
From Lemma 3.3 we have that Yl < Yo and from Lemma 3.4
2
2
that ~(M,al'Yl) < ~(M,ao'YO).
then K
must be the desired maxi-min solution since for this value
~ is minimized with respect to a2 at a2 = ai, while for 0 2 = ai,
of K,
~
= Yl
On the other hand i f HM,ai,y l ) > ~(M,a~'Yl)
then for any value of K < Yl , the minimum of ~(M,a2,K) with respect to
is maximized at K == Y •
l
a2 must be less than
2
~(M,al,K)
2
(
2due
)
M,crO'Y
to.
the fact that both
l
decrease as K decreases from K
For Yl .:: K
~(M,crl,K)
~
=
~
(
2 ) and
M,aO,K
Y •
l
~ YO' HM,(J~,K) will be strictly increasing, while
will be strictly decreasing.
This is due to the fact that
~(M,a2,K) assumes a maximum with respect toKfor a2 fixed, at only one
point in the interval [0,1].
Hence as K approaches this value from
24
the left ~(M,02,K) must be increasing, while as K continues to increase
-
beyond the maximizing value,
Since we know that
~(M,a
2-
,K) will decrease.
2
2
~(!l,al'YO) < ~(M.aO'YO)
there must be a unique
value of k in the open interval (Yl,Y O) such that:
These two possible situations are depicted in Figure 3.1 as Case A
and Case B, respectively.
QED
1
1
/
o
o
-l--I-_--1
~;""'----
Yl YO
1
0
K
Case A
Figure 3.1. The two possible relationships between
HM,af,K)
---I--I_ _-l-_-4
l.ooIIr::::;,...
Y1 k
0
K
Case B
2
~(M,aO,K)
and
1
25
4.
GENERALIZATION TO THE CASE OF
4.1
p
SIS
General Structure
Consider the general linear model of full rank:
! is an n by
where:
1 vector of observations
X is an n by p matrix of known values
l is a p by
1 vector of unknown parameters
E is an n by 1 vector of random variables such that:
2
where cr
.
is fixed and known, and where Si is known to satisfy:
i=1,2, ... ,p
i
with Mi,
= l, ••• ,p
known constants.
We are considering only estimators which are linear in Y, so any
estimator B
c~n
be written:
B = C'y
where C is an n by p matrix.
We can write:
C'
-1
C'
=
..c.'2
C'
-p
wpere f'i is the ith row of
ct.
26
Then
e'y
-1....
B1
B
B2
=
B
P
= C'y
=
C'Y
.~
and we will proceed to determine C' by determining each
C~
~
separately,
and then forming C'.
For this case of p S's we also require that E[Bi] not depend on
Sj for j
~
i.
Now,
13 1
=
13 2
•
••
•
13p
where X. is the jth column vector in
""""'J
But for E[Bi] not to involve Sj for j
.£l.~
=0
X.
~
i, we must have;
for j ~ i.
The closeness criterion for the estimator B of the parameter 8i
i
is as in the single 13 case:
27
Our decision rule is then to consider only those vectors
fi
which
are orthogonal to X for j # i, and within that class choose the one
j
which maximizes the minimum of
~.
~
over the restricted parameter space.
4.2 Extension:
Fixed
In determining the estimator Bi of
ai
(J
2
Case
we will proceed exactly as
in Chapter 2, except that, where in Chapter 2 we considered K-H
space, we now consider Ki-H i space defined as follows:
Definition 4.1:
K.-H.
space is the set of all ordered pairs
~
~
(Ki,H i ) for which there exists a vector
~Ci
= Hi'
and Ci~
=0
~
such that
£i~
= Ki ,
for j ; i.
We can now proceed exactly as in Chapter 2, but we must first
determine the boundaries of Ki-H i space.
Consider an arbitrary £i:
c.
-~
=
where Z !Xj
-~-
=0
i,j'" 1,2, ... , p.
we have,
C. ... XA.,
-..I.
-~
Now -=-t
C'X
+
Zi'
-
(A!X'+Z')X
... -i
A'X'X + -i
Z'X ... ~
A'.X'X
=i
"':.=:L
where e! is a row vector of zeros except for a one in the ith position.
-~
So we have,
28
or
Now consider
so
where di is the ith diagonal term of (X'X)-l.
Now!iZi is the square of the length of
~,
and hence is non-negative.
Therefore the boundary of Ki-H i space is Hi
Ki-H i space must satisfy:
= Kidi
and points in
Hi·~ Kidi.
This result is summarized in the following lemma:
Lemma 4.1:
X ,X2 ' ••• ,
1
.£i~, H
~;
= .£i~'
Given the p linearly independent n component vectors:
the set of all ordered pairs (Ki,H i ) where Ki
and such that .£i~
=0
=
for j :f i for some ~, must:
satisfy:
2
K.d.
<
J. J.
where d
i
HJ.'
is the ith diagonal term of (X'X)-l.
QED
29
On the boundary Hi
2
= Kid
i
we must have
~
= 0,
so,
or
But
B
i
= -J.C~y
~
Ki-=1
e'(X'X)
.
-1
X'Y
-
~
A
Ki-1.1:::.
e'~
= KiB i ,
In Theorem 2.1 if we replace B with B , S with B , M with M ,
i
i
i
e with e i , B with 8 , and d with d , we have immediately that the
i
i
maxi-min solution is Bi
= YiB i
where
Hence the general estimator B is
Y2
! =
0
f3
4.3
Extension:
Here we have exactly the same model as in the fixed 0
now we have
222
0 e[oO'o~].
2
case, only
Referring to Theorems 3.1 and 3.2, and re-
placing B with B , B with B , M with M , e with e , B with B , and
i
i
i
i
i
d with d , we have immediately the result:
i
30
B
=
o
K
P
where K is determined by the method of Theorem 3.2.
i
31
5.
AN ALTERNATE ESTIMATOR
5.1
General Structure
A second estimator will be derived based on the second procedure
discussed in Chapter 1.
Defining T(B,02,K,H) as:
our decision rule is to choose the B which maximizes the minimum
ot T
over the restricted parameter space.
completely specified, and the above pro,::edure is no different from the
first procedure, giving the same solution. Hence we consider only the
222
e:[oO,ol] case. Consequently, the general structure is that of
°
Chapter 3, but with a new measure of goodness of the estimator.
That
i.s.:,....J¥e..,_are considering the single a linear model:
-E
tV
5.2
N(O,Io 2)
-
Form of the Estimator:
Ka
Proceeding as in Chapter 3 we have:
Theorem 5.1:
2
The B which maximizes the minimum of T(I3,o ,K,H)
over the restricted parameter space:
written in the form:
B
= K£
2 2
a 2e:[oO,ol],
IeI ~
M; can be
for some Ke:[O,l].
A
Proof:
Since
Pr["a-e:~J3~J3+e:]
does not depend on 13, T will be
minimized with respect to 13 when a
= M.
32
Consider again the partition of K-H space given in Figure 2.1 of
-1
2
For H > (X' X)·· let 02 be the value of
Chapter 2.
°2.J.n [° 02 ,°12 )
which
2
minimizes T at the point (l,H), and let 03 be the corresponding
minimizing value for the point (K,H) K # 1 in region C.
Then by
Theorem 2.1 we have:
2
T(M,03,K,H)
Hence any point in C can be eliminated in favor of some point in D.
For H
~
(!'X)
-1
2
let 02 now be the value of
°2
in the interval
[o~,oi) which minimizes T at the point (IH I!'X,H), and let 0; be
the minimizing value at the point (K,H) for K
<
IH Ix'x.
Hence by
Theorem 2.1 we have:
T(M,O;,K,H)
oS
T(M,O~,K,H)
<
T(M,O~,1if IX'!,H)
so we can eliminate any point in region B in favor of one in region A.
Lemma 3.2 implies that T(M,02,I,H) < 0 for any
i
and for H >
(X'X)-l, so all points in region D can be eliminated in favor of the
point [1, (X'X)
points
o~
-1
2
) at which T = 0 for all 0.
the curve:
H
= K2 (X'X)-1
Thus there remain only
for Ke[O,l).
We have seen before that this is sufficient to imply that the
estimator is of the form:
B
= KS
for some Ke[O,l).
QED
As a consequence of this theorem we can redefine T to be a function
of K and
°2
only:
33
5.3
2
Properties of T(K.cr ) for K Fixed
The problem which presents itself here is that for any fixed K
2
value, we cannot immediately conclude that T(K,cr ) will assume a minimu~
2
2
at either cr O' or cr1 " It will therefore be necessary to investigate
2
properties of T(K,cr ) as a function of cr
2
for K fixed.
Important
properties are:
5.3.1
By virtue of the one to one correspondence between cr
value of K which maximizes
~
5.3.3
2
lim T(K,cr )
cr 2-+co
= 1 - £M
5.3.4
If K
5.3.5
If K > 1
2
2
for which T(K,cr ) >
= o.
2
If K < 1 - £ then lim T(K,cr )
M
cr 2+0
then lim T(K,cr Z)
cr 2+0
= -1.
1
= - 2".
- £M then lim T(K,cr 2) = o.
cr 2+0
written as:
2
dT
(K.cr )
i
dcr 2
2
where F(cr )
= -12..F("'Z)
cr 2
V
.,
Da
= A exp{- ~} +
cr
and the
in Chapter 2, we know that for
any fixed K there will exist values of cr
5.3.2
2
. i ve constant
pos~t
B exp {cr
~} + C exp {-
-I}
cr
o.
34
and where A
=K -
1 - W; B
=1
- K- W
C=2KW
W =£·
M'
Now, since a > band c > b, the sign of F(02) as 0 2
+
0 will be
i·s
determ i ne d by t h e s i gn 0 f B,whi ch i s negat i vee Hence ()T(K.02)
2
ao
2
negative as 0 + 0, and since for K > 1-£ lim T(K,02) = 0, T(K,02)
M. 2
o +0
must approach zero through negative values.
QED
Properties 5.3.1, 5.3.2, 5.3.3, and 5.3.4 imply that for K S 1 -
~, T(K,02) will have at least one local maximum, while properties 5.3.1,
5.3.2, 5.3.5, and Lemma 5.1 show that for K
>
2
e:
1 - M' T(K,o ) will have
at least one local maximum, and at least one local minimum.
2
We now proceed to investigate T(K,o ) by evaluating its first and
second derivatives.
Recall from Lemma 5.1 that:
The second derivative of T with respect to 0
2
is then:
= - ~F(02) + ~F' ( i )
o
2
where F'(o )
0
Aa
a
Bb
b
Cc
c
=""4
exp {- 2"} +""4 exp {- 2"} +""4 exp {- 2"}'
o
a
0
0
a
0
We now consider the sign of the second derivative of T, at values
2
of 0 which give rise to local maxima or minima.
find that:
At such values we
35
=~[B(b ....a)exp{.;. ~} + c(e-a)exp{-~}]
a
where b-a
a
= 2M2 (K-l)W
2
a
< 0
K
2
c-a
= ~[W2(K2_l)
2
- (K_l)2 + 2(K-l)W] < 0
2K d
and
C
= 2WK
B
= l-W-K
> 0
Hence for K s 1 -
~
0 if and only if K
E
M
and
a2T(K
~
0 2)
fixed,!
a(a 2 )2
1 - W,
< 0,
This, together with the
earlier result that there exists at least one local maximum implies
2
that for K s 1 - ~, T(K,a ) is unimodel,
•
E
2
So for K ~ 1 - M' T(K,a ) will assume a minimum with respect to
222
a at either aO' or aI'
2
2
) = 0 at one and only one value
For K > 1 - ~ we will h ave a T(K.a
2
2
M
a(a )
a2T(K.02)
Call rlhis value a*. Then for a 2 < a * ,
> 0, and for
2)2
a(a
2
* a2T(K.a 2) < 0 This implies that T can have at most one local
a > a ,..::.....:..ll?~L.
a(a 2 )2
•
maximum, and at most one local minimum. Since we determined earlier
that T will have at least one of each, it has exactly one of each.
Figure 5.1 depicts a representation of this function.
is seen from Figure 5.1 that for K
E
1 - M and fixed, that
T(K,a 2) could assume a minimum at a value of o2 not equal to 02 or
0
It
>
However, that we are still justified in restricting our search to
finding the K which maximizes the minimum of {T(K,O~), T(K,oi)} is
seen from the fact that for K
•
= 1,
the above minimum is zero, and we
would always, therefore, take this solution before one which would give
a negative minimum.
36
2
T(K,a )
0'
2
2
Figure 5.1. Typical representation of T(K,a ) for K > 1 - ~ and fixed
5.4
•
Solution for K
We now seek to find the value of K which maximizes the minimum of
2
2
T(K,a o)' and T(K,a l ). At K = 1 both functions are
As K decreases from 1, both functions increase (Theorem 2.2,
the two functions:
zero.
Lemmas 3.3 and 3.4) until K assumes the value which maximizes ~(K,cr~).
2
2
Call this value KO' If T(KO,a O) S T(KO,a l ), KO is the maxi-min solu2
Otherwise as we continue to decrease K T(K,a O) decreases while
tion.
T(K,a l2) increases.
2
If T(Kl,a l )
~
2
Call Kl the value of K which maximizes ~(K,al)'
2
T(Kl,a O)' then Kl is the maxi-min solution. Otherwise
there will exist a value K2 :
Kl < K2 < KO for which T(K2,a~)
=
2
T(K 2 ,a l ) and K is the maxi-min solution.
2
That K cannot be the maxil
min solution is proved in the following theorem:
Theorem 5.2:
Proof:
In the notation given above,
2
It was shown earlier tha't T(Kl,a ) > 0, hence i t is
sufficient to show that:
l
37
>
o.
Recall that
Evaluating at the value of
(i
2
which gives K as the maximl,1m of HK,cr )::
so
•
b
exp {- - }
2
=
cr
(l+W) exp {l-W
;}.
cr
Also
a
exp {- - }
2
=
2
adK
[1+W] 2M2W(1-K)
l-W
=
2
cdK
[l+W] 2M2W(1"';K)
l-W
•
cr
and
exp {_ -S.}
2
cr
So
2
aTCKl.cr )
2
acr
[1-W] 2M2W(1-K)]
l+W
=
38
2
2
2
cdK
adK -cdK -1
[
2DKW [ l-W,] 2M2W(1-K) ( -1) [l-W] 2M2W(1-K)
+
l+W'
~W l+W
2
cr
)
11
l
J .
1 _ W(l+K) + (l-K)
1
4
4W - 2
which is minimized when K is as large as possible:
K=
l~
so:
Set
W-l
T(W)
1 [l-W]-2= l+W l+W
then
W-l
dT(W)_
1
l-W 2
l-W
dW - 2(W+l) [l+W]
In [l+W]
for W in the open interval
Hence T(W) for
We:(O~l)
<
0
(O~l).
will always be less than T(O)
= 1; .!.~.~
W-l
1
l+W
[l-W] 2
l+W
1
<.
So
W-l
-1 [l-W] 2
l+W
l+W
+ 1
>
0
>
o.
which implies that
aT(Kl~cr
acr 2
2
)
The results of this Chapter are summarized in the following
,theorem.
QED
39
Theorem 5.3:
The estimator B
= fly
which maximizes the minimum
')
of T(I3,o-,K,H)
over the restricted parameter space:
I13 I
2 2
°21::[00'01]
.::; M,
is
B
= KI3
2
ZO i f T(ZO'OO)
where K
=
[
Z2 otherwise
where Zo is the value of K which maximizes
~(K
2
,(0)'
and where Z2 is the
largest value less than 1 such that:
The generalization of this result to the p 13 case follows exactly
as in Chapter 4, where each estimator is obtained separately, and the
composite result can be written as:
B
= D].
where D is the diagonal matrix whose ith diagonal term is determined
as in this Chapter, with 13 replaced by Bi, M replaced by M ,
i
by
I::
i , and d replaced by die
I::
replaced
40
6.
SOME PROPERTIES OF THE ESTIMATORS
6.1
Dependence on Ki
Each of the estimators can be written as:
B
= DB,
with D a
diagonal matrix having ith diagonal term equal to K , where K is
i
i
determined as in Chapters 2, 3, 4,
0
r 5, depending on the parameter
space restrictions, and on the chosen decision procedure.
Hence
properties of this estimator are dependent exclusively on the K
i
values, together with the known properties of the best linear unbiased
Ji.
estimator:
6.2
Bounds on Ki
If we define y .. as:
~J
-1+ 11+20 ..
Yij
for i
~J
=
0 ..
with 0 ..
~J
~J
= 1,2, ••• ,p;
=
2
Mi+E i
O'jd i
]
In [
M.E.
Mi-E i
~ ~
= 0,1
and j
then we have seen that:
o
<
Yi ,1
< K. < y,
~
for all cases.
6.2.1
Hence we present properties of Y :
ij
= ° for
j
= 0,1
lim Yij
= ° for
j
= 0,1
for j
= 0,1
Mi~e:i
6.2.3
1
~J
lim y ..
E('Mi
6.2.2
<
~o
lim Yij
Mi~
=1
41
6.2.4
6.3
D~
Since B =
= K.S.,
E[Bi]
~
IE [B.]
~
~
Bias
has expected value
E[~] = D~,
or for any individual
the absolute value of the bias is:
s. I = IKS. -S . I =
~
~
~
(l-K.)
~
Is. I
~
which, under our restrictions on the parameter space, is bounded by:
IE[B.]
~
s. I
~
~ (l-K.)M
~
-4
6.4
•
Variances and Covariances
The variance-covariance matrix of ! is Var[B] = D(X'X)-lDcr 2
2
2
2
which gives Var[B i ] = K~Var[S~] = K~dicr and Cov[B.,B.] = K.K.Cov[S"Sj]'
....
........
~
J
~ J
~
A
6.5
A
On Limiting Properties
Considering the n by p matrix X to be fixed, m replications give:
Yl
Y
2
X
~1
X
~2
=
Y
-m
..§.
.
-X-
(mnx1) (mnxp)
+
=
(px1)
E
-m
(mnx1)
Then the overall XIX matrix is:
X§.
+~
42
X'X
= mX'x
Hence the effect of m replications is to decrease the diagonal
term of the (X'X)
-1
1
matrix by a factor of;.
for Y , d is reduced by a factor of
i
ij
Thus in the expression
~, which causes Yij to increase.
This limiting properties of KS can be established through knowledge
of limiting properties of S, and through the fact that K + 1.
43
7.
A NUMERICAL EXAMPLE
7.1
General Problem
In order to obtain a feeling for the values of K which may occur
for the derived estimators under different restrictions on the parameter
space and different c; values, as well as for the probabilities under
these conditions of the estimator being contained in the interval
[M-c;, M+c;], we consider an example of a simple linear regression of one
independent variable on one dependent variable.
Y.J.
= ~ + SX.J. + E.
E.
~
J.
tV
The model is then:
2
N(O,cr )
where E and E are independent for j # i.
i
j
For simplicity we will consider estimation of S only; we could
consider the model as:
Yi - Y
= S(X.-X)
+
J.
E.J.
For each estimator of S, either for cr
2
fixed, for the estimator
of the first procedure, which we will refer to as Type A, or for
the estimator of the second procedure, which we will refer to as
Type B, we will be interested only in determining the appropriate
value of K for which B
Since S
tV
= KS.
N(S,dcr 2 ) where for our above case d
since K decreases as dcr
2
=
l~~ and
E(Xi -X) 2'
increases, we would like, for illustrative
n
- 2
purposes, an independent variable for which E (X.-X) is small.
i=l
J.
Considering the 10 variables on page 352 of Draper and Smith (2), X
3
is found to have the smallest variance, and hence is chosen as the
44
independent variable.
X2 is found to have a significant regression
on X3t and is therefore used as the dependent variable.
2
is: 6.l248 t and the estimate of 0 is: .0467.
7.2
Fixed
0
2
Case
We first consider values of K for the fixed (known)
2
The BLUE of S
0
2
The
following seven values of
0
.Olt .05 t .It lt and 10.
For the limits of the absolute value of S we
M = 7 t lOt 20 t and 50.
consider separately:
we consider e
will be separately considered:
case.
.0001t .001t
Also t for the value of e
= .05 t .10t .50 t It 5 t and 10 where appropriate. Note that
since we must have:
e
<
Mt certain combinations of the above stated
input values are impossible and are denoted N.A. (for "not applicable")
in the following tables.
In the following seven tables for each applicable combination of M
and e two values are given.
The upper value t given to four decimal
place accuracYt is the calculated value of Kt while the lower value is
the increase in the probability of being within the interval [M-e t M+e]
which this value of K gives rise to.
This latter probability is the
improvement over the BLUE probabilitYt which is given for each value of
e (the BLUE probability does not depend on S).
The superscripted plus
and minus signs after a 1 or a 0 mean respectively that the actual
number is a little larger t or smaller than that given t but the given
number is accurate to 4 decimal places if it is a K value t and to 5
decimal places if it is a probability.
It should be pointed out that in the following tables t the probabilities quoted for the derived estimator are the smallest possible for
the given range of M values t and that the true probability could be
considerably greater.
45
Table 7.1.
~
0*
Values
K and corresEonding probability increases:
Pr [M-€~Ka·SM+€]-Pr [M-e:~a.s;M+€] for fixed 0 2 case -0 2 = .0001
A
7
10
20
50
1.0000-
1.0000-
1.0000-
1.0000 -
.05
pr[M-e:~a$.M+8
.99795
.00000+
1.0000 -
.00000+
1.0000-
.00000+
1.0000-
.00000+
1.0000 -
.10
1.00000
.00000
.00000
.00000
.00000
1.0000-
1.0000-
1.0000 -
1.0000 -
.50
1.00000
.00000
.00000
.00000
.00000
1.0000 -
1.0000 -
1.0000 -
1.0000 -
1
1.00000
.00000
.00000
.00000
.00000
1.0000 -
1.0000 -
1.0000 -
1.0000 -
5
1.00000
.00000
.00000
N.A.
N.A.
.00000
1.0000 -
.00000
1.00000 -
10
1.00000
.00000
.00000
]
46
Table 7.2.
~:t~~:~~~~~+:]~P~[~~:~~~~~~]gf~~O~~:~~i~~~:~~e::es:
02
~
= .001
7
10
.9999
1.0000
-
20
50
1.0000 -
1.0000 -
Pr [H-e;~S~H+e;]
.67042
.05
.00001
.9999
.00001
1.0000 -
.00000+
1.0000 -
.0000+
1.0000 .94882
.10
.00001
.9999
.00000+
1.0000 -
.00000+
1.0000 -
.00000+
1.0000 1.00000
.50
.00000
.9999
.00000
.00000
.00000
1.0000 -
1.0000-
1.0000 1.00000
1
.00000
.00000
.00000
.00000
.9999
1.00000 -
1.00000 -
1.00000 -
5
1.00000
.00000
.00000
N.A.
N .A.
.00000
.00000
1.0000 -
1.0000 -
10
1.00000
.00000
.00000
47
Table 7.3.
Values
0;
K and
corre~ponding
probability increases:
Pr[M-E~KB~M~]-Pr[M$E$B-M+E] for fixed 0 2 case -02
=
.01
~
~
7
.9995
10
20
.9997
.9999
50
Pr[M-ESsSM+E]
1.0000 -
.05
.24215
.00006
.00003
.00001
.9995
.9997
.9999
.00000+
1.0000-
.10
.46252
.00011
.00005
.00001
.9995
.9997
.9999
.00000+
1.0000-
.50
.99795
.00001
.00000+
.00000+
.9995
.9997
.9999
.00000+
1.0000-
1
1.00000
.00000
.00000
.00000
.9993
.9997
.9999
.00000
1.0000 -
5
1.00000
.00000
.00000
.00000
N.A.
N.A.
.9999
.00000
1.0000 -
10
1.00000
.00000
.00000
48
Table 7.4.
~
Values of K and
corres~onding
probability increases:
Pr[M-e~KS~M+el-Pr[M-e:~8~M+e;] for fixed 0'2 case -(12 = 0.1
A
7
.9947
10
20
50
.9974
.9993
.9999
.07767
.05
J
Pr [M-e::;;;8:;;;M+e:]
•00021
.00010
.00003
.00000+
.9947
.9974
.9993
.9999
.15460
.10
.00041
.00020
.00005
.00001
.9947
.9974
.9993
.9999
.67042
.50
.00129
.00063
.00016
.00003
.9947
.9974
.9993
.9999
.94882
1
.00062
.00030
.00008
.00001
.9934
.9971
.9993
.9999
1.00000
5
.00000
.00000
.00000
.00000
N.A.
N.A.
.9993
.9999
1.00000
10
.00000
.00000
49
Table 7.5.
Values of K and
corres~onding
probabi1it
fixed 0
Pr(M-gSKSSM+g]-pr(M-E~e~M+E] for
02
.05
=
~
increases:
2 case--
A
7
.9973
10
20
50
.9987
.9997
.9999
Pr[M-E~S~M+E]
.10967
.05
.00015
.00007
.00002
.00000
.9973
.9987
.9997
.9999
.21727
.10
.00028
.00014
.00003
.00001
.9973
.9987
.9997
.9999
.83205
.50
.00057
.00028
.00007
.00001
.9973
.9987
.9997
.9999
1
.99420
.00007
.00003
.00001
.00000+
.9967
.9986
.9997
.9999
1.00000
5
.00000
.00000
.00000
.00000
N.A.
N.A.
.9996
.9999
10
1.00000
.00000
.00000
50
Table 7.6.
~
Values of K and
corres~onding
probability increases:
Pr[M-E~Ka~M+E]-Pr[M-E~B~M+E] for fixed 0 2 case -0 2 = 1.0
A
7
.9514
10
20
50
.9750
.9935
.9990
Pr[M-E$.B$.M+E]
.02460
.05
.00064
.00032
.00008
.00001
.9514
.9750
.9935
.9990
.10
.04917
.00127
.00063
.00016
.00003
.9513
.9750
.9935
.9990
.24215
.50
.00606
.00303
.00077
.00012
.9511
.9749
.9935
.9990
.46252
1
.01049
.00524
.00133
.00021
.9405
.9727
.9934
.9989
.99795
5
.00053
.00027
.00007
.00001
N.A.
N.A.
.9929
.9989
10
1.00000
.00000
.00000
51
Table 7.7.
Values of K and
corres~onding
probabi1it
fixed 0
Pr[M-e~KS~M+e]-Pr[M-E~S~M+E] for
02
10.0
=
~
increases:
2 case--
A
7
.7210
10
20
50
.8222
.9417
.9897
Pr[M-E~S~M+E]
.00778
.05
.00161
.00088
.00024
.00004
.7210
.8222
.9417
.9897
.01556
.10
.00321
.00175
.00049
.00008
.7207
.8221
.9417
.9897
.07767
.50
.01597
.00873
.00243
.00040
.7199
.8218
.9416
.9897
.15460
1
.03145
.01720
.00479
.00080
.6845
.8103
.9406
.9897
.67042
5
.09632
.05347
.01511
.00252
N.A.
N.A.
.9366
.9896
.94882
10
.00716
.00121
52
As well as the absolute increase in probability for the derived
estimator versus that for the BLUE estimator, it is also interesting
to have a feeling for the percentage increase in the probability.
Table 7.8 gives such percentages for the M = 10 case.
A
A
Percent increase in Pr[M-E~KS~M+E] over
for M=10 and a2 fixed
Table 7.8.
~
.05
.10
.50
Pr[M-E~S~M+E]
5
1
.0001
0.00%
0.00%
0.00%
0.00%
0.00%
.001
0.00%
0.00%
0.00%
0.00%
0.00%
.01
0.01%
0.01%
0.00%
0.00%
0.00%
.05
0.06%
0.06%
0.03%
0.00%
0.00%
.1
0.13%
0.13%
0.09%
0.03%
0.00%
1
1.30%
1.28%
1.25%
1.13%
0.03%
10
11. 31%
11.25%
11.24%
11.13%
7.98%
7.3
222
~l]
Case
We now turn to the Type A and Type B estimators.
the same values of M and E are used as in the fixed a
E value, BLUE probabilities:
2
at a •
l
For consistency,
2
case.
For each
Pr[M-E~S~M+E] are calculated at both a~ and
In the following tables, in the BLUE probability column, for
each value of E the upper probability was calculated for
lower for
0
2
1
,
0
2
0 , and the
As before, fer each pair ofM,E values, a value
of K is presented, but now there are two probability deviations for
each M,E pair.
For each derived K value the functions
T(M,d~,K).and'T(M,ai,K) are evaluated and are presented respectively
in the'tables.
2
Recall that the function T(M,a ,K) gives the increase
53
in closeness of the derived estimatdr over the BLUE estimator
when cr
2
is the true value.
That some of these deviations are
negative reflects the fact that the Type A estimator is not uniformly
better than the BLUE estimator over the restricted parameter space.
Recall also that the probability deviations given are the smallest
possible over the restricted parameter space.
A point in the interior
of the restricted parameter space will give rise to even larger deviations for our estimator over the BLUE probability.
The strategy for selecting the cr
2
our estimate of cr:
2
intervals was that we considered
.0467 as being the "true" value, and then we
proceeded to consider intervals which did, or did not, contain this
value.
We started with an interval which did contain the true value,
but which was quite wide, and hence reflected a vagueness as to where
the true value of cr
2
was.
We then considered two more intervals which
were progressively less vague, and finally an interval which was
"wrong" on either side.
These intervals were:
1,0], [.01, .1], [.0001, .01] and [.1,10].
[.0001, 10.0], [.001,
After these cases were
evaluated, it was decided that three additional intervals might be
worth investigating, the first reflecting very little vagueness and
the following two with relatively large cr~'s.
intervals were:
follow.
These additional
[.03, .01], [1,10], and [10, 20].
The tables
54
Table 7.9.
Values of K, probability deviations, and BLUE probabilities
for Type A estimator -- 02 g [.0001, 10.0]
~
7
~
10
20
50
.9873
.9911
.9955
.9982
-.99007
.00010
-.99011
.00007
-.99014
.00003
-.99016
.00001
.9808
.9866
.9933
.9973
-.98414
.00030
-.98423
.00021
-.98434
.00010
-.98411
.00004
.9256
.9479
.9739
.9897
-.91662
.00572
-.91854
.00379
-.92069
.00164
-.82880
.00040
.8553
.8986
.94923
.9897
-.82432
.02107
-.83227
.01314
-.84071
.00470
.00000
.00080
.6845
.8103
.9406
.9897
.00000
.09632
.00000
.05347
.00000
.01511
.00000
.00252
N.A.
N.A.
.9366
.9896
.00000
.00716
.00000
.00121
.05
.10
.50
1
5
10
Pr[H-g':;;S':;;M+g]
.99795
.00778
1.00000
.01556
1.00000
.07767
1.00000
.15460
1.00000
.67042
1.00000
.94882
55 .
Table 7.10.
~
Values of K, probability deviations, and BLUE probabilities
for Type A estimator -- 0'20:[.001, LO)
A
10
20
50
.9788
.9851
.9935
.9990
-.64540
•00042
-.64556
.00026
-.61198
.00008
.9740
.9817
.9935
.9990
-.89867
.00098
-.89906
.00059
-.66951
.00016
-.12714
.00003
.9513
.9750
.9935
.9990
-.00054
.00606
-.00000
.00303
-.00000
.00077
.00000
.00012
.9511
.9749
.9935
.9990
.00000
.01049
.00000
.00524
.00000
.00133
.00000
.00021
.9405
.9727
.9934
.9989
.00000
.00053
.00000
.00027
.00000
.00007
.00000
.00001
N.A.
N.A.
.9929
.9989
.00000
.00000
.00000
.00000
7
.05
. -.21253
.00001
.10
.50
1
5
10
Pr[M-e:SSSM+e:]
.67042
.02460
.94882
.04917
1.00000
.24215
1.00000
.46252
1.00000
.99795
1.00000
1.00000
.?6
Table 7.11.
~
Values of K, probability deviations, and BLUE probabilities
for-Type A estimator -- a 2E[.01, .1]
A
10
20
50
.9947
.9974
.9993
.9999
-.00493
.00021
-.00244
.00010
-.00062
.00003
-.00010+
.00000
.9947
.9974
.9993
.9999
-.00854
.00041
-.00423
.00020
-.00107
.00005
-.00017
.00001
.9947
.9974
.9993
.9999
-.00045
.00129
-.00022
.00063
-.00006
.00016
-.00001
.00003
• 9947
.9974
.9993
.9999
-.00000
.00062
.00000
.00030
.00000
.00008
.00000
.00001
.9934
•9971
.9993
.9999
.00000
.00000
.00000
.00000
.00000
.00000
.00000
.00000
N.A.
N.A.
.9993
.9999
.00000
.00000
.00000
-000000
7
.05
.10
.50
1
5
10
Pr[M-E:~S~M""'E:]
.24215
•07767
.46252
.15460
.99795
.67042
1.00000
.94882
1.00000
1.00000
1.00000
1.00000
57
Values of K, probability deviations, and BLUE probabilities
for Type A estimator -- cr 2 d.0001, .01]
Table 7.12.
~
A
50
10
20
.9995
.9997
.9999
1.0000-
-.00057
.00006
-.00028
.00003
-.00007
.00001
-.00001
.00000
.9995
.9997
.9999
1.0000 -
-.00000
.00011
-.00000
.00005
.00000
.00001
.9995
.9997
.9999
.00000
.00001
.00000+
.00000
.00000+
.00000
.9995
.9997
.9999
.00000
.00000
.00000
.00000
.00000
.00000
.9993
.9997
.9999
.00000
.00000
.00000
.00000
.00000
.00000
N.A.
N.A.
.9999
1.0000 -
.00000
.00000
.00000
.00000
7
.05
.10
Pr[M-c:SeSM+e::]
.99795
.24215
1.00000
.46252
.00000
.00000+
1.0000 -
.50
1.00000
.99795
.00000+
.00000
1.0000 -
1
1.00000
1.00000
.00000
.00000
1.0000 -
5
.00000
.00000
10
1.00000
1.00000
1.00000
1.00000
58
Table 7.13.
~
Values of Kt pro~2bi1ity deviations t and BLUE probabilities
for Type A estimc~or -- a 2€[.l t 10]
A
10
20
50
.8634
.9003
.9475
.9896
-.06887
.00102
-.06923
.00066
-.06965
.00024
-.03071
.00004
.86276
.8999
.9473
.9897
-.13699
.00205
-.13772
.00103
-.13856
.00048
-.06068
.00008
.8452
.8876
.9417
.9897
-.58140
.01135
-.58557
.00718
-.58688
.00243
-.20514
.00040
.8036
.8581
.9416
.9897
-.76727
.02694
-.77801
.01620
-.58408
.00479
-.12002
.00080
.6845
.8103
.9406
.9897
-.00000
.09632
-.00000
.05347
-.00000
.01511
.00000
.00252
.9366
.9896
.00000
.00716
.00000
.00121
7
.05
.10
.50
1
5
N.A.
N.A.
10
Pr[M-€'ss'sM+€]
.07767
.00778
.15460
.01556
.67042
.07767
.94882
.15460
1.00000
.67042
1.00000
.94882
59
Table 7.14.
~
Values of K, probability deviations, and BLUE probabilities
for Type A estimator -- (j2e;[.03, .07J
A
7
.9963
10
20
50
.9982
.9995
.9999
.05
-.00009
.00017
-.00004
.00008
-.00001
.00002
+
-.00000+
.00000
.9963
.9982
.9995
.9999
-.00016
.00034
-.00008
.00017
-.00002
.00004
-.00000+
.00001
.9963
.9982
.9995
.9999
-.00018
.00088
-.00009
.00043
-.00002
.00011
-.00000+
.00002
.9962
.9982
.9995
.9999
-.00000+
.00023
-.00000+
.00011
-.00000+
.00003
+
-.00000+
.00000
.9953
.9980
.9995
.9999
.00000
.00000
.00000
.00000
.00000
.00000
.00000
.00000
N.A.
N.A.
.9995
.9999
.00000
.00000
.00000
.00000
.10
.50
1
5
10
Pr[M-e;SSSM+e]
.14128
.09277
.27817
.18429
.92493
.75611
.99963
.98023
1.00000
1.00000
1.00000
1.00000
60
Table 7.15. Values of K, probability deviations, and BLUE probabilities
            for Type A estimator -- σ² ∈ [1.0, 10.0]

(layout as in Table 7.11)

  ε   |  M=7                   |  M=10                  |  M=20                   |  M=50                   |  BLUE: σ₀², σ₁²
 .05  | .7296 -.01521 .00160   | .8222 -.01229 .00088   | .9417 -.00508  .00024   | .9897 -.00099  .00004   | .02460  .00778
 .10  | .7295 -.03041 .00320   | .8222 -.02455 .00175   | .9417 -.01015  .00049   | .9897 -.00198  .00008   | .04917  .01556
 .50  | .7254 -.14852 .01596   | .8221 -.11715 .00873   | .9417 -.04846  .00243   | .9897 -.00946  .00040   | .24215  .07767
  1   | .7199 -.26282 .03145   | .8218 -.20228 .01720   | .9416 -.08400  .00479   | .9897 -.01641  .00080   | .46252  .15460
  5   | .6845 -.00391 .09632   | .8103 -.00706 .05347   | .9406 -.00421  .01511   | .9897 -.00085  .00252   | .99795  .67042
 10   |  N.A.                  |  N.A.                  | .9366 -.00000+ .00716   | .9896 -.00000+ .00121   | 1.00000 .94882
Table 7.16. Values of K, probability deviations, and BLUE probabilities
            for Type A estimator -- σ² ∈ [10, 20]

(layout as in Table 7.11)

  ε   |  M=7                    |  M=10                 |  M=20                 |  M=50                   |  BLUE: σ₀², σ₁²
 .05  | .6059  .00088 .00195    | .7242 .00037 .00112   | .8947 .00005 .00033   | .9798 .00000  .00006    | .00778  .00550
 .10  | .6059  .00176 .00391    | .7241 .00075 .00223   | .8947 .00009 .00066   | .9798 .00000+ .00011    | .01556  .01100
 .50  | .6056  .00878 .01949    | .7240 .00373 .01114   | .8947 .00046 .00331   | .9798 .00002  .00057    | .07767  .05496
  1   | .6047  .01757 .03865    | .7236 .00745 .02210   | .8947 .00093 .00658   | .9798 .00003  .00113    | .15460  .10967
  5   | .56706 .07757 .14912    | .7093 .03219 .08637   | .8929 .00389 .02604   | .9797 .00013  .00449    | .67042  .50943
 10   |  N.A.                   |  N.A.                 | .8865 .00316 .02516   | .9795 .00011  .00439    | .94882  .83205
Table 7.17. Values of K, probability deviations, and BLUE probabilities
            for Type B estimator -- σ² ∈ [.0001, 10.0]

For every tabled combination of ε (.05, .10, .50, 1, 5, 10) and M (7, 10,
20, 50; N.A. where M ≤ ε), the computed K rounds to 1.0000- and both
probability deviations are .00000 to five decimals (entries marked + in the
original are positive but below .000005). The BLUE probabilities
Pr[M-ε ≤ β̂ ≤ M+ε] are:

  ε:     .05      .10      .50       1        5       10
  σ₀²:  .99795  1.00000  1.00000  1.00000  1.00000  1.00000
  σ₁²:  .00778   .01556   .07767   .15460   .67042   .94882
Table 7.18. Values of K, probability deviations, and BLUE probabilities
            for Type B estimator -- σ² ∈ [.001, 1.0]

Every tabled K rounds to 1.0000 except at M = 7, where K = .9999 (ε = 10 is
N.A. for M = 7 and M = 10). The probability deviations are .00000 to five
decimals except for ε = .50 and ε = 1 at M = 7 and M = 10, where Δ₁ reaches
.00001-.00002. The BLUE probabilities Pr[M-ε ≤ β̂ ≤ M+ε] are:

  ε:     .05      .10      .50       1        5       10
  σ₀²:  .67042   .94882  1.00000  1.00000  1.00000  1.00000
  σ₁²:  .02460   .04917   .24215   .46252   .99795  1.00000
Table 7.19. Values of K, probability deviations, and BLUE probabilities
            for Type B estimator -- σ² ∈ [.01, 0.1]

(layout as in Table 7.11)

  ε   |  M=7                   |  M=10                   |  M=20                   |  M=50                    |  BLUE: σ₀², σ₁²
 .05  | .9993 .00005 .00005    | .9996 .00003  .00003    | .9999 .00001  .00001    | 1.0000+ .00000+ .00000   | .24215  .07767
 .10  | .9993 .00001 .00001    | .9997 .00005  .00005    | .9999 .00001  .00001    | 1.0000+ .00000+ .00000   | .46252  .15460
 .50  | .9995 .00001 .00025    | .9997 .00000+ .00012    | .9999 .00000+ .00003    | 1.0000+ .00000+ .00000   | .99795  .67042
  1   | .9995 .00000 .00012    | .9997 .00000  .00006    | .9999 .00000  .00001    | 1.0000- .00000+ .00000   | 1.00000 .94882
  5   | .9993 .00000 .00000    | .9997 .00000  .00000    | .9999 .00000  .00000    | 1.0000- .00000  .00000   | 1.00000 1.00000
 10   |  N.A.                  |  N.A.                   | .9999 .00000  .00000    | 1.0000- .00000  .00000   | 1.00000 1.00000
Table 7.20. Values of K, probability deviations, and BLUE probabilities
            for Type B estimator -- σ² ∈ [.0001, .01]

For every tabled combination of ε and M (N.A. where M ≤ ε), the computed K
rounds to 1.0000 and both probability deviations are .00000 to five
decimals (entries marked + in the original are positive but below .000005).
The BLUE probabilities Pr[M-ε ≤ β̂ ≤ M+ε] are:

  ε:     .05      .10      .50       1        5       10
  σ₀²:  .99795  1.00000  1.00000  1.00000  1.00000  1.00000
  σ₁²:  .24215   .46252   .99795  1.00000  1.00000  1.00000
Table 7.21. Values of K, probability deviations, and BLUE probabilities
            for Type B estimator -- σ² ∈ [.1, 10]

(layout as in Table 7.11)

  ε   |  M=7                  |  M=10                 |  M=20                 |  M=50                   |  BLUE: σ₀², σ₁²
 .05  | .9905 .00007 .00007   | .9953 .00004 .00004   | .9988 .00001 .00001   | .9998 .00000+ .00000    | .07767  .00778
 .10  | .9905 .00015 .00015   | .9953 .00007 .00007   | .9988 .00002 .00002   | .9998 .00000+ .00000    | .15460  .01556
 .50  | .9911 .00069 .00069   | .9956 .00034 .00034   | .9989 .00008 .00008   | .9998 .00001  .00001    | .67042  .07767
  1   | .9947 .00062 .00081   | .9974 .00030 .00040   | .9993 .00008 .00010   | .9999 .00001  .00002    | .94882  .15460
  5   | .9934 .00000 .00320   | .9971 .00000 .00139   | .9993 .00000 .00032   | .9999 .00000  .00005    | 1.00000 .67042
 10   |  N.A.                 |  N.A.                 | .9993 .00000 .00017   | .9999 .00000  .00002    | 1.00000 .94882
Table 7.22. Values of K, probability deviations, and BLUE probabilities
            for Type B estimator -- σ² ∈ [.03, .07]

(layout as in Table 7.11)

  ε   |  M=7                    |  M=10                   |  M=20                   |  M=50                    |  BLUE: σ₀², σ₁²
 .05  | .9984 .00011  .00011    | .9992 .00006  .00006    | .9998 .00001  .00001    | 1.0000+ .00000+ .00000   | .14128  .09277
 .10  | .9984 .00021  .00023    | .9992 .00011  .00011    | .9998 .00003  .00003    | 1.0000+ .00000+ .00000   | .27817  .18429
 .50  | .9984 .00023  .00060    | .9992 .00011  .00029    | .9998 .00003  .00007    | 1.0000- .00000+ .00001   | .92493  .75611
  1   | .9984 .00000+ .00016    | .9992 .00000+ .00008    | .9998 .00000+ .00002    | 1.0000- .00000  .00000   | .99963  .98023
  5   | .9980 .00000  .00000    | .9991 .00000  .00000    | .9998 .00000  .00000    | 1.0000- .00000  .00000   | 1.00000 1.00000
 10   |  N.A.                   |  N.A.                   | .9998 .00000  .00000    | 1.0000- .00000  .00000   | 1.00000 1.00000
Table 7.23. Values of K, probability deviations, and BLUE probabilities
            for Type B estimator -- σ² ∈ [1, 10]

(layout as in Table 7.11)

  ε   |  M=7                  |  M=10                 |  M=20                 |  M=50                 |  BLUE: σ₀², σ₁²
 .05  | .9324 .00052 .00052   | .9650 .00026 .00026   | .9909 .00007 .00007   | .9985 .00001 .00001   | .02460  .00778
 .10  | .9325 .00105 .00105   | .9650 .00052 .00052   | .9909 .00013 .00013   | .9985 .00002 .00002   | .04917  .01556
 .50  | .9335 .00512 .00512   | .9656 .00257 .00257   | .9910 .00065 .00065   | .9985 .00010 .00010   | .24215  .07767
  1   | .9372 .00954 .00954   | .9676 .00477 .00477   | .9916 .00121 .00121   | .9986 .00019 .00019   | .46252  .15460
  5   | .9405 .00053 .02790   | .9727 .00027 .01268   | .9934 .00007 .00305   | .9989 .00001 .00048   | .99795  .67042
 10   |  N.A.                 |  N.A.                 | .9929 .00000 .00156   | .9989 .00000 .00023   | 1.00000 .94882
Table 7.24. Values of K, probability deviations, and BLUE probabilities
            for Type B estimator -- σ² ∈ [10, 20]

(layout as in Table 7.11)

  ε   |  M=7                  |  M=10                 |  M=20                 |  M=50                 |  BLUE: σ₀², σ₁²
 .05  | .7210 .00161 .00161   | .8222 .00088 .00090   | .9417 .00024 .00026   | .9897 .00004 .00004   | .00778  .00550
 .10  | .7210 .00321 .00323   | .8222 .00175 .00180   | .9417 .00049 .00051   | .9897 .00008 .00009   | .01556  .01100
 .50  | .7207 .01597 .01611   | .8221 .00873 .00897   | .9417 .00243 .00255   | .9897 .00040 .00043   | .07767  .05496
  1   | .7199 .03145 .03200   | .8218 .01720 .01781   | .9416 .00480 .00507   | .9897 .00080 .00085   | .15460  .10967
  5   | .6845 .09632 .13035   | .8103 .05347 .07155   | .9406 .01511 .02024   | .9897 .00252 .00339   | .67042  .50943
 10   |  N.A.                 |  N.A.                 | .9366 .00716 .02006   | .9896 .00121 .00333   | .94882  .83205
As in the fixed σ² case, it is interesting to consider also the percent
increases in probability which our estimates give. For Type A estimates
the appropriate criterion appears to be the percentage increase in
closeness over the minimum BLUE probability; both of these probabilities
should then be evaluated at σ₁². For the Type B estimator a relevant
criterion appears to be the minimum percentage increase in closeness over
the BLUE probability; these probabilities, and hence deviations, are
evaluated at σ₀². As before, these calculations are made for the M = 10
case. These results are found in Tables 7.25 and 7.26.
Table 7.25. Percent increase in probability over minimum BLUE probability
            for Type A estimator -- M = 10

 σ² interval      ε=.05    ε=.10    ε=.50    ε=1      ε=5
 [.0001, .01]     0.01%    0.01%    0.00%    0.00%    0.00%
 [.03, .07]       0.09%    0.09%    0.06%    0.01%    0.00%
 [.01, .1]        0.13%    0.13%    0.09%    0.03%    0.00%
 [.001, 1]        1.06%    1.20%    1.25%    1.13%    0.03%
 [.0001, 10]      0.90%    1.35%    4.88%    8.50%    7.98%
 [.1, 10]         8.48%    8.55%    9.24%    10.48%   7.98%
 [1, 10]          11.31%   11.25%   11.24%   11.13%   7.98%
 [10, 20]         20.36%   20.27%   20.27%   20.15%   16.95%
Table 7.26. Percent increase in probability over minimum BLUE probability
            for Type B estimator -- M = 10

 σ² interval      ε=.05    ε=.10    ε=.50    ε=1      ε=5
 [.0001, .01]     0.00%    0.00%    0.00%    0.00%    0.00%
 [.03, .07]       0.04%    0.04%    0.01%    0.00%    0.00%
 [.01, .1]        0.01%    0.01%    0.00%    0.00%    0.00%
 [.001, 1.0]      0.00%    0.00%    0.00%    0.00%    0.00%
 [.0001, 10]      0.00%    0.00%    0.00%    0.00%    0.00%
 [.1, 10]         0.05%    0.05%    0.05%    0.03%    0.00%
 [1, 10]          1.06%    1.06%    1.06%    1.03%    0.03%
 [10, 20]         11.31%   11.25%   11.24%   11.13%   7.98%
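These percentage figures can be reproduced directly from the closeness
probabilities. The following is a minimal sketch in modern terms (Python
rather than the thesis's Fortran), assuming β̂ ~ N(β, dσ²) with d = 2.5 as
in the numerical example and evaluating probabilities at β = M; the K used
here is the fixed-σ² closed form, standing in for the Type A maxi-min K.

    # Sketch: percent increase in closeness probability over the minimum
    # BLUE probability (the Type A criterion above), all probabilities
    # evaluated at sigma1^2.  Assumes beta-hat ~ N(beta, d*sigma2), d = 2.5.
    from math import sqrt, erf, log

    def phi(x):                        # standard normal CDF
        return 0.5 * (1.0 + erf(x / sqrt(2.0)))

    def closeness(k, m, eps, sigma2, d=2.5):
        # Pr[m - eps <= k*betahat <= m + eps] evaluated at beta = m
        s = k * sqrt(d * sigma2)
        return phi((m + eps - k * m) / s) - phi((m - eps - k * m) / s)

    def k_fixed(m, eps, sigma2, d=2.5):
        # closed-form K of Section 8.1 (stand-in for the maxi-min K)
        delta = d * sigma2 * log((m + eps) / (m - eps)) / (m * eps)
        return (-1.0 + sqrt(1.0 + 2.0 * delta)) / delta

    m, eps, s1 = 10.0, 1.0, 10.0       # illustrative inputs only
    k = k_fixed(m, eps, s1)
    blue = closeness(1.0, m, eps, s1)  # K = 1 gives the BLUE itself
    print(100.0 * (closeness(k, m, eps, s1) - blue) / blue)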
0"-
72
. 8.
SUMMARY, CONCLUSIONS, AND SUGGESTIONS
FOR FUTURE RESEARCH
8.1
Summary
Suppose an investigator is more interested in the "closeness in
probability" of an estimator B for β, as measured by Pr[β-ε ≤ B ≤ β+ε] for
some predetermined value of ε, than he is in unbiasedness of his estimator
for β. Then the methods presented here may be useful. It is, however,
necessary that some knowledge of the parameter space (σ², β) be available,
either from the investigator's experience with the parameters or from
physical and/or mathematical constraints. Two cases for σ² are considered
separately: Case I, σ² fixed and known; Case II, σ² ∈ [σ₀², σ₁²], where σ₀²
and σ₁² are known. For β we must have a value M such that |β| ≤ M; in the
multiple regression situation with p β's, |β_i| ≤ M_i, i = 1,2,...,p.
Two decision rules based on the closeness probability are considered. The
first is to choose as an estimator of β that linear estimator B which
maximizes the minimum of the closeness probability over the restricted
parameter space. This was done for both σ² cases. The second decision rule
is to choose as our estimator of β the B which maximizes the minimum of the
closeness probability minus the corresponding probability for the BLUE
estimator β̂; in other words, we choose the B which maximizes the minimum of

    Pr[β-ε ≤ B ≤ β+ε] - Pr[β-ε ≤ β̂ ≤ β+ε]

over the restricted parameter space.
Note that when σ² is known, the two estimators are identical. Hence three
essentially different estimators are developed: one for the case of σ²
fixed, one for σ² ∈ [σ₀², σ₁²] with the first decision rule, and one for
σ² ∈ [σ₀², σ₁²] with the second decision rule. The second of these
estimators is referred to as the Type A estimator, while the third (and
last) is referred to as the Type B estimator.
The extension from the single β case to the multiple β case is
accomplished by requiring that the expected value of the estimator for
β_i, say, not depend on β_j for j ≠ i, i,j = 1,2,...,p. It is shown that,
with this restriction, the derived estimator of β_i can be written as
K_i β̂_i, where K_i ∈ (0,1). Or, in matrix form:

    B = D β̂,

where D is the diagonal p x p matrix of the K values, D = diag(K₁, ..., K_p).
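In concrete terms, the matrix form is simply coordinatewise shrinkage of
the BLUE. A toy sketch (the design matrix and the K_i values below are
made up for illustration, not computed by the rules that follow):

    import numpy as np

    X = np.array([[1.0, 0.0], [1.0, 1.0], [1.0, 2.0]])  # toy full-rank design
    y = np.array([1.0, 2.1, 2.9])
    beta_hat = np.linalg.solve(X.T @ X, X.T @ y)         # BLUE (least squares)
    D = np.diag([0.99, 0.95])                            # illustrative K_i's
    B = D @ beta_hat                                     # derived estimator
    print(beta_hat, B)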
The value of K_i for the fixed σ² case is:

    K_i = (-1 + √(1 + 2δ_i)) / δ_i ,

where

    δ_i = (d_i σ² / (M_i ε_i)) ln[(M_i + ε_i) / (M_i - ε_i)]

and d_i is the ith diagonal term of (X'X)⁻¹.
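A quick numerical check of this closed form (a sketch; d_i = 2.5 is the
value used in the numerical example, the remaining inputs are arbitrary):
it agrees with the truncated series that the Appendix A program uses when
δ_i is small.

    from math import log, sqrt

    def k_closed_form(d, sigma2, m, eps):
        delta = d * sigma2 * log((m + eps) / (m - eps)) / (m * eps)
        return (-1.0 + sqrt(1.0 + 2.0 * delta)) / delta

    def k_series(d, sigma2, m, eps):
        # leading terms 1 - a/2 + a^2/2 - (5/8)a^3 + (7/8)a^4 - (21/16)a^5
        a = d * sigma2 * log((m + eps) / (m - eps)) / (m * eps)
        coef = [1.0, -0.5, 0.5, -5.0 / 8.0, 7.0 / 8.0, -21.0 / 16.0]
        return sum(c * a ** k for k, c in enumerate(coef))

    print(k_closed_form(2.5, 0.01, 10.0, 0.05))   # about .99975
    print(k_series(2.5, 0.01, 10.0, 0.05))        # agrees closely, small delta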
For the Type A estimator σ² ∈ [σ₀², σ₁²], and we define γ_ij, j = 0,1, as
the above K_i evaluated for σ² = σ₀² and for σ² = σ₁², respectively. Then if

    Pr[M_i - ε_i ≤ γ_i1 β̂_i ≤ M_i + ε_i]

evaluated at σ² = σ₁² is less than or equal to this probability evaluated
at σ² = σ₀², the appropriate K_i is γ_i1. Otherwise there will be a unique
value of K in the interval (γ_i1, γ_i0) for which this probability is the
same for σ² = σ₁² as for σ² = σ₀²; this K is then the desired value of K_i.
For the Type B estimator the appropriate value of K_i is γ_i0 if

    Pr[M_i - ε_i ≤ γ_i0 β̂_i ≤ M_i + ε_i] - Pr[M_i - ε_i ≤ β̂_i ≤ M_i + ε_i]

evaluated at σ² = σ₀² is less than or equal to this probability difference
evaluated at σ² = σ₁². Otherwise K_i will be the unique value in the
interval (γ_i1, γ_i0) for which this probability difference is the same for
σ² = σ₀² as for σ² = σ₁².
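A sketch of how the Type A value can be found numerically, mirroring the
bisection of the Appendix B program (Python for brevity; β̂ ~ N(β, dσ²) is
assumed and probabilities are evaluated at β = M, as in the numerical
example). The Type B search is the same except that it equalizes the
probability differences from the BLUE rather than the probabilities
themselves.

    from math import sqrt, erf, log

    def phi(x):
        return 0.5 * (1.0 + erf(x / sqrt(2.0)))

    def closeness(k, m, eps, sigma2, d):
        s = k * sqrt(d * sigma2)
        return phi((m + eps - k * m) / s) - phi((m - eps - k * m) / s)

    def gamma(d, sigma2, m, eps):
        delta = d * sigma2 * log((m + eps) / (m - eps)) / (m * eps)
        return (-1.0 + sqrt(1.0 + 2.0 * delta)) / delta

    def type_a_k(d, s0, s1, m, eps, tol=1e-7):
        g1, g0 = gamma(d, s1, m, eps), gamma(d, s0, m, eps)   # g1 < g0
        if closeness(g1, m, eps, s1) <= closeness(g1, m, eps, s0):
            return g1               # the minimum already sits at sigma1^2
        lo, hi = g1, g0             # otherwise equalize the two endpoints
        while hi - lo > tol:
            mid = 0.5 * (lo + hi)
            if closeness(mid, m, eps, s0) < closeness(mid, m, eps, s1):
                lo = mid
            else:
                hi = mid
        return 0.5 * (lo + hi)

    # d = 2.5 as in the numerical example; compare the corresponding
    # Type A entry of Table 7.13 (eps = 1, M = 10)
    print(type_a_k(2.5, 0.1, 10.0, m=10.0, eps=1.0))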
8.2 Conclusions
In order for the derived estimator to produce any improvement in
closeness, or even any change at all, it is obvious that K, the multiple
of β̂, must be appreciably less than 1. Otherwise the estimator will be
essentially the same as β̂, and have essentially the same properties. It
therefore appears that the larger is dσ², the variance of β̂, the greater
is the potential improvement which can be obtained. For the fixed σ² case
this is seen to hold in the numerical example: as the value of σ²
increases, the relative improvement in the closeness probability is seen
to increase.

For the Type A and Type B estimators we can say that as σ₀² and σ₁²
increase, the relative improvement tends to increase, since K_i must be in
or on the boundary of the interval [γ_i1, γ_i0], and as σ₀² and σ₁²
increase these boundary values decrease.

Also, as M decreases the calculated K values decrease as well, which
agrees with our intuitive feeling that the more information available on a
parameter, the better we should be able to estimate it. It is also seen
that as we permit our 2ε interval to become wider, the potential relative
improvement increases.

In a practical setting σ₀², σ₁², and M will be fixed, but there may be
some flexibility in the choice of ε. If a specific ε is not required by
the investigator, but rather a range of possible ε values, a table could
be compiled to compare percentage gains with the different ε values. If,
after making all the calculations for a fixed set of σ₀², σ₁², M, and ε,
the value of K is calculated to be effectively 1, this does not indicate
failure; it simply says that the BLUE estimator does a good job and, in
the sense of our criterion, it is best.
8.3 Suggestions for Future Research

8.3.1 X'X Matrix of Less than Full Rank

The analysis performed was dependent on the X matrix being of full rank.
If, as is often the case with design matrices, it is not of full rank, we
still might wish to obtain an estimator based on the closeness
probability. This problem is intimately related to the linear constraints
usually imposed on the parameters of experimental design models.
8.3.2 Transformation of Parameters

The bounding of the β's by absolute values might be improved upon in the
situation where one knows that β ∈ [β₀, β₁], where 0 < β₀ < β₁. A suitable
linear transformation on β could then reduce the value of M considerably.
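As an illustration of the kind of transformation meant here (the
particular choice below is ours): if β ∈ [β₀, β₁] with 0 < β₀ < β₁, the
centered parameter

    γ = β - (β₀ + β₁)/2

satisfies |γ| ≤ (β₁ - β₀)/2, so the procedure could be applied to γ with
M' = (β₁ - β₀)/2, which may be much smaller than the bound M = β₁ required
for β itself. For instance, β ∈ [9, 11] requires M = 11 under the
absolute-value bound, but only M' = 1 after centering.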
8.3.3 Alternate Decision Procedure

A different decision procedure might be used, while still being based on
the closeness probability. One such procedure might be Bayesian, with
specified prior distributions over the restricted parameter space.
9. LIST OF REFERENCES

1. Dantzig, G. B. 1940. On the non-existence of tests of "Student's"
   hypothesis having power functions independent of σ. Ann. Math.
   Statist. 11:186-192.

2. Draper, N. R., and H. Smith. 1966. Applied Regression Analysis.
   John Wiley and Sons, Inc., New York.

3. Stein, C. 1945. A two sample test for a linear hypothesis whose power
   is independent of the variance. Ann. Math. Statist. 16:243-258.

4. Webster, J. T. 1965. On the use of a biased estimator in linear
   regression. J. Indian Statist. Assoc. 3:82-90.
10. APPENDICES
10.1 Appendix A - Computer Program to Calculate K for σ² Fixed

This program, written in G.E. Time-Sharing Fortran as used by
Call-a-Computer, Inc., calculates:

1) Pr[M-ε ≤ β̂ ≤ M+ε], denoted as the BLUE probability;
2) K, the maximizing multiple of β̂;
3) the improvement in probability produced by K;

for all combinations of n different ε values and m different M values such
that M > ε. The number of different ε values is designated on line 2 as
II; in this case II = 6. On lines 6 through 11 the 6 values of ε are
defined. Note that in the program the ith ε is written as ALPHA(I). Any
number of ε's up to 20 can be used.

The number of different M values is designated on line 3 as JJ. In this
case JJ = 4, and the 4 M values are given on lines 14 to 17. Note that the
ith M is denoted as B1(I). The ith diagonal term of the (X'X)⁻¹ matrix is
given on line 5, and is denoted as DD. Finally, the number of different σ²
values being evaluated is designated on line 4, and is denoted by NI. The
σ² values are not written into the program, as the ε and M values are, but
are input upon running the program. They could, however, just as well have
been written into the program. The program follows.
1 DIMENSION ALPHA(20),SIGO(20),B1(20)
2 4 II=6
3 JJ=4
4 NI=2
5 DD=2.5
6 ALPHA(1)=0.05
7 ALPHA(2)=0.10
8 ALPHA(3)=0.50
9 ALPHA(4)=1.0
10 ALPHA(5)=5.0
11 ALPHA(6)=10.0
12 INPUT,(SIGO(K),K=1,NI)
13 PRINT ////
14 B1(1)=7
15 B1(2)=10
16 B1(3)=20
17 B1(4)=50
18 DO 40 K=1,NI
19 PRINT ///
20 PRINT,"FOR SIGMA SQUARED=",SIGO(K)
21 PRINT ///
22 DO 30 J=1,II
23 DO 27 IK=1,JJ
24 IF (B1(IK)-ALPHA(J)) 26,26,67
25 67 SIG=SIGO(K)
26 16 ZAP=DD*SIG*(LOG(B1(IK)+ALPHA(J))-LOG(B1(IK)-ALPHA(J)))
27 ZFAC=B1(IK)*ALPHA(J)
28 AAO=ZAP/ZFAC
29 IF (AAO-.00001) 117,117,121
30 117 DDO=1-(AAO/2)+((AAO**2)/2)-(((AAO**3)*5)/8)
31 ++(((AAO**4)*7)/8)
32 GO TO 327
33 121 CONTINUE
34 DDO=1-(AAO/2)+((AAO**2)/2)-(((AAO**3)*5)/8)
35 ++(((AAO**4)*7)/8)-(((AAO**5)*21)/16)
36 ++(((AAO**6)*33)/16)-(((AAO**7)*429)/128)
37 ++(((AAO**8)*715)/128)-(((AAO**9)*2431)/256)
38 ++(((AAO**10)*4199)/256)-(((AAO**11)*88179)/3072)
39 ++(((AAO**12)*52003)/1024)-(((AAO**13)*1300075)/14336)
40 ++(((AAO**14)*334305)/2048)-(((AAO**15)*9694845)/32768)
41 IF (AAO-.01) 327,327,118
42 118 CONTINUE
43 IF((((AAO**16)*(17678835))/(32768))-.0000000001) 327,327,331
44 331 IEXP=16
45 COEFF=((9694845)*(2*IEXP-1))/((32768)*(IEXP+1))
46 333 DAR=COEFF*(AAO**IEXP)*((-1)**IEXP)
47 335 DDO=DDO+DAR
48 IEXP=IEXP+1
49 DAR=DAR*AAO*(2*IEXP-1)*(-1)/(IEXP+1)
50 DARA=ABS(DAR)
51 IF(DARA-10) 213,214,214
52 213 IF(DARA-.0000001) 327,327,335
53 214 DDO=(-ZFAC+SQRT(ZFAC*(ZFAC+2*ZAP)))/ZAP
54 327 D=1
55 IGG=0
56 39 SO=SQRT(DD*SIGO(K))
57 UO=(B1(IK)+ALPHA(J)-D*B1(IK))/(D*SO)
58 ALO=(B1(IK)-ALPHA(J)-D*B1(IK))/(D*SO)
59 IF(UO-10) 816,817,817
60 817 IF(ALO+10) 814,814,815
61 814 PROBOT=0
62 GO TO 720
63 815 PROBOT=GP(ALO)
64 GO TO 720
65 816 IF(ALO+10) 824,824,825
66 824 PROBOT=1-GP(UO)
67 GO TO 720
68 825 PROBOT=1-GP(UO)+GP(ALO)
69 720 IF(IGG-1) 862,863,863
70 862 PRINT,"FOR EPSILON =",ALPHA(J),"AND M =",B1(IK)
71 PRINT,"BLUE PROBABILITY IS :"
72 PBO=1-PROBOT
73 PRINT 6,PBO
74 IGG=1
75 D=DDO
76 GO TO 39
77 863 PRINT,"MAXI-MIN SOLUTION K="
78 PRINT 6,DDO
79 PEO=1-PROBOT-PBO
80 PRINT,"IMPROVEMENT IN PROBABILITY FOR DERIVED ESTIMATOR IS :"
81 PRINT 6,PEO
82 6 FORMAT(10X,F11.9)
83 26 CONTINUE
84 27 PRINT ///
85 30 CONTINUE
86 40 CONTINUE
87 PRINT ////
88 GO TO 4
89 END
90 FUNCTION GP(X)
91 GP=0
92 Y=X*X
93 IF(324-Y)2
94 Z=.3989422804*EXP(-Y/2)
95 IF(Y-9)3
96 Y=11/X
97 DO 1 C=1,10
98 1 Y=(11-C)/(Y+X)
99 GP=-Z/(Y+X)
100 2 IF(X)5
101 GP=1+GP
102 GO TO 5
103 3 GP=Z
104 C=1
105 4 C=C+2
106 Z=Z*Y/C
107 GP=GP+Z
108 IF(GP-Z*1E8)4
109 GP=.5+X*GP
110 5 CONTINUE
111 END
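For readers without access to that dialect, the following is a compact
modern re-expression of what the program above computes (a sketch, not a
transcription). The function GP(X) in the listing evaluates the standard
normal distribution function by a series or a continued fraction; math.erf
replaces it here.

    from math import sqrt, erf, log

    def gp(x):                     # standard normal CDF, as GP(X) returns
        return 0.5 * (1.0 + erf(x / sqrt(2.0)))

    def report(dd, sig2, eps_values, m_values):
        # dd ~ DD (the diagonal term of (X'X)^-1); sig2 ~ the fixed sigma^2
        for eps in eps_values:
            for m in m_values:
                if m <= eps:
                    continue       # the listing skips M <= epsilon
                zap = dd * sig2 * (log(m + eps) - log(m - eps))
                zfac = m * eps
                k = (-zfac + sqrt(zfac * (zfac + 2.0 * zap))) / zap
                so = sqrt(dd * sig2)
                blue = gp(eps / so) - gp(-eps / so)
                up = (m + eps - k * m) / (k * so)
                lo = (m - eps - k * m) / (k * so)
                print(eps, m, k, blue, (gp(up) - gp(lo)) - blue)

    report(2.5, 0.01, [0.05, 0.10, 0.50, 1.0, 5.0, 10.0], [7, 10, 20, 50])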
10.2 Appendix B - Computer Program to Calculate K for Type A Estimator

For this program, values of ε and M are introduced exactly as in the fixed
σ² case of Appendix A. Any number of σ₀² and σ₁² values can be evaluated
at one time. The symbol IN designates the number of σ₀² values, while JN
designates the number of σ₁² values. This program calculates:

1) Pr[M-ε ≤ β̂ ≤ M+ε], first for σ² = σ₀² and then for σ² = σ₁²;
2) the values of K that would maximize Pr[M-ε ≤ Kβ̂ ≤ M+ε] for σ² = σ₀²
   and σ² = σ₁², respectively (these values are the upper and lower bounds
   on K);
3) the value of K which produces the maxi-min solution, obtained directly
   where possible and otherwise iteratively.
1 DIMENSION ALPHA(15),SIGO(15),SIG1(15),B1(15)
2 4 II=6
3 JJ=4
4 IN=1
5 JN=1
6 DD=2.5
7 ALPHA(1)=0.05
8 ALPHA(2)=0.10
9 ALPHA(3)=0.50
10 ALPHA(4)=1.0
11 ALPHA(5)=5.0
12 ALPHA(6)=10.0
13 INPUT,(SIGO(K),K=1,IN)
14 INPUT,(SIG1(IJ),IJ=1,JN)
15 PRINT ////
16 B1(1)=7
17 B1(2)=10
18 B1(3)=20
19 B1(4)=50
20 MID=0
21 DO 40 K=1,IN
22 DO 35 IJ=1,JN
23 PRINT ///
24 PRINT,"FOR SIGO =",SIGO(K),"AND SIG1 =",SIG1(IJ)
25 PRINT //
26 DO 30 J=1,II
27 DO 27 IK=1,JJ
28 IF (B1(IK)-ALPHA(J)) 26,26,67
29 67 IF(SIGO(K)-SIG1(IJ)) 15,35,35
30 15 SIG=SIGO(K)
31 ITT=0
32 16 ZAP=DD*SIG*(LOG(B1(IK)+ALPHA(J))-LOG(B1(IK)-ALPHA(J)))
33 ZFAC=B1(IK)*ALPHA(J)
34 AAO=ZAP/ZFAC
35 IF(AAO-.00001) 117,117,121
36 117 DDO=1-(AAO/2)+((AAO**2)/2)-(((AAO**3)*5)/8)
37 ++(((AAO**4)*7)/8)
38 GO TO 327
39 121 CONTINUE
40 DDO=1-(AAO/2)+((AAO**2)/2)-(((AAO**3)*5)/8)
41 ++(((AAO**4)*7)/8)-(((AAO**5)*21)/16)
42 ++(((AAO**6)*33)/16)-(((AAO**7)*429)/128)
43 ++(((AAO**8)*715)/128)-(((AAO**9)*2431)/256)
44 ++(((AAO**10)*4199)/256)-(((AAO**11)*88179)/3072)
45 ++(((AAO**12)*52003)/1024)-(((AAO**13)*1300075)/14336)
46 ++(((AAO**14)*334305)/2048)-(((AAO**15)*9694845)/32768)
47 IF(AAO-.01) 327,327,118
48 118 CONTINUE
49 IF((((AAO**16)*(17678835))/(32768))-.0000000001) 327,327,331
50 331 IEXP=16
51 COEFF=((9694845)*(2*IEXP-1))/((32768)*(IEXP+1))
52 333 DAR=COEFF*(AAO**IEXP)*((-1)**IEXP)
53 335 DDO=DDO+DAR
54 IEXP=IEXP+1
55 DAR=DAR*AAO*(2*IEXP-1)*(-1)/(IEXP+1)
56 DARA=ABS(DAR)
57 IF(DARA-10) 213,214,214
58 213 CONTINUE
59 IF(DARA-.0000001) 367,367,335
60 214 DDO=(-ZFAC+SQRT(ZFAC*(ZFAC+2*ZAP)))/ZAP
61 367 CONTINUE
62 327 IF(ITT) 81,81,82
63 81 ITT=1
64 SIG=SIG1(IJ)
65 PP=DDO
66 GO TO 16
67 82 DD1=DDO
68 DDO=PP
69 609 IF(DDO-DD1-0.0000001) 616,616,610
70 616 PRINT,"EQUAL"
71 PRINT 6,DDO,DD1
72 GO TO 26
73 610 D=1
74 611 S1=SQRT(DD*SIG1(IJ))
75 SO=SQRT(DD*SIGO(K))
76 IGG=0
77 39 U1=(B1(IK)+ALPHA(J)-D*B1(IK))/(D*S1)
78 AL1=(B1(IK)-ALPHA(J)-D*B1(IK))/(D*S1)
79 UO=(B1(IK)+ALPHA(J)-D*B1(IK))/(D*SO)
80 ALO=(B1(IK)-ALPHA(J)-D*B1(IK))/(D*SO)
81 IF(U1-10) 716,716,717
82 717 IF(AL1+10) 714,714,715
83 714 PROBIT=0
84 GO TO 719
85 715 PROBIT=GP(AL1)
86 GO TO 719
87 716 IF(AL1+10) 724,724,725
88 724 PROBIT=1-GP(U1)
89 GO TO 719
90 725 PROBIT=1-GP(U1)+GP(AL1)
91 719 CONTINUE
92 IF(UO-10) 816,817,817
93 817 IF(ALO+10) 814,814,815
94 814 PROBOT=0
95 GO TO 720
96 815 PROBOT=GP(ALO)
97 GO TO 720
98 816 IF(ALO+10) 824,824,825
99 824 PROBOT=1-GP(UO)
100 GO TO 720
101 825 PROBOT=1-GP(UO)+GP(ALO)
102 720 CONTINUE
103 721 IF(IGG-1) 862,863,864
104 862 PRINT,"FOR EPSILON=",ALPHA(J),"AND M=",B1(IK)
105 PRINT,"BLUE PROBABILITIES ARE :"
106 PBO=1-PROBOT
107 PB1=1-PROBIT
108 PRINT 6,PBO,PB1
109 IGG=1
110 D=DD1
111 PRINT,"THE UPPER AND LOWER BOUNDS FOR K ARE:"
112 PRINT 6,DDO,DD1
113 GO TO 39
114 863 CONTINUE
115 IF(PROBOT-PROBIT) 427,427,428
116 427 PRINT,"MAXI-MIN TYPE A SOLUTION K ="
117 PRINT 7,DD1
118 7 FORMAT(10X,F11.9)
119 PEO=1-PROBOT-PBO
120 PE1=1-PROBIT-PB1
121 PRINT,"DIFFERENCES IN PROBABILITIES FOR DERIVED ESTIMATOR ARE :"
122 PRINT 6,PEO,PE1
123 GO TO 26
124 428 DDU=DDO
125 DDL=DD1
126 868 DDT=(DDU+DDL)/2
127 MID=MID+1
128 IF(DDU-DDL-0.0000001) 898,898,899
129 898 PRINT,"MAXI-MIN ITERATIVE TYPE A SOLUTION K ="
130 PRINT 6,DDL,DDU
131 6 FORMAT(2(10X,F11.9))
132 PRINT,"NUMBER OF ITERATIONS = ",MID
133 998 PEO=1-PROBOT-PBO
134 PE1=1-PROBIT-PB1
135 PRINT,"DIFFERENCES IN PROBABILITIES FOR DERIVED ESTIMATOR ARE :"
136 PRINT 6,PEO,PE1
137 MID=0
138 GO TO 26
139 899 CONTINUE
140 IGG=2
141 D=DDT
142 GO TO 39
143 864 IF(PROBOT-PROBIT) 873,874,875
144 873 DDU=DDT
145 GO TO 868
146 875 DDL=DDT
147 GO TO 868
148 874 PRINT,"PROBABILITY EQUALITY K ="
149 PRINT 7,DDT
150 PRINT,MID
151 MID=0
152 GO TO 998
153 26 CONTINUE
154 27 PRINT ///
155 30 CONTINUE
156 35 CONTINUE
157 40 CONTINUE
158 PRINT ////
159 GO TO 4
160 END
161 FUNCTION GP(X)
162 GP=0
163 Y=X*X
164 IF(324-Y)2
165 Z=.3989422804*EXP(-Y/2)
166 IF(Y-9)3
167 Y=11/X
168 DO 1 C=1,10
169 1 Y=(11-C)/(Y+X)
170 GP=-Z/(Y+X)
171 2 IF(X)5
172 GP=1+GP
173 GO TO 5
174 3 GP=Z
175 C=1
176 4 C=C+2
177 Z=Z*Y/C
178 GP=GP+Z
179 IF(GP-Z*1E8)4
180 GP=.5+X*GP
181 5 CONTINUE
182 END
10.3 Appendix C - Computer Program to Calculate K for Type B Estimator

The introduction of ε, M, d, and σ² values to this program, as well as the
output of the program, is exactly as in the program of Appendix B. The
program follows.
1 DIMENSION ALPHA(15),SIGO(15),SIG1(15),B1(15)
2 4 II=6
3 JJ=4
4 IN=1
5 JN=1
6 DD=2.5
7 ALPHA(1)=0.05
8 ALPHA(2)=0.10
9 ALPHA(3)=0.50
10 ALPHA(4)=1.0
11 ALPHA(5)=5.0
12 ALPHA(6)=10.0
13 INPUT,(SIGO(K),K=1,IN)
14 INPUT,(SIG1(IJ),IJ=1,JN)
15 PRINT ////
16 B1(1)=7
17 B1(2)=10
18 B1(3)=20
19 B1(4)=50
20 MID=0
21 NEX1=0
22 NEX2=0
23 IEZ1=0
24 IEZ2=0
25 DO 40 K=1,IN
26 DO 35 IJ=1,JN
27 PRINT ///
28 PRINT,"FOR SIGO =",SIGO(K),"AND SIG1 =",SIG1(IJ)
29 PRINT //
30 DO 30 J=1,II
31 DO 27 IK=1,JJ
32 IF (B1(IK)-ALPHA(J)) 26,26,67
33 67 IF(SIGO(K)-SIG1(IJ)) 15,35,35
34 15 SIG=SIGO(K)
35 ITT=0
36 16 ZAP=DD*SIG*(LOG(B1(IK)+ALPHA(J))-LOG(B1(IK)-ALPHA(J)))
37 ZFAC=B1(IK)*ALPHA(J)
38 AAO=ZAP/ZFAC
39 IF(AAO-.00001) 117,117,121
40 117 DDO=1-(AAO/2)+((AAO**2)/2)-(((AAO**3)*5)/8)
41 ++(((AAO**4)*7)/8)
42 GO TO 327
43 121 CONTINUE
44 DDO=1-(AAO/2)+((AAO**2)/2)-(((AAO**3)*5)/8)
45 ++(((AAO**4)*7)/8)-(((AAO**5)*21)/16)
46 ++(((AAO**6)*33)/16)-(((AAO**7)*429)/128)
47 ++(((AAO**8)*715)/128)-(((AAO**9)*2431)/256)
48 ++(((AAO**10)*4199)/256)-(((AAO**11)*88179)/3072)
49 ++(((AAO**12)*52003)/1024)-(((AAO**13)*1300075)/14336)
50 ++(((AAO**14)*334305)/2048)-(((AAO**15)*9694845)/32768)
51 IF(AAO-.01) 327,327,118
52 118 CONTINUE
53 IF((((AAO**16)*(17678835))/(32768))-.0000000001) 327,327,331
54 331 IEXP=16
55 COEFF=((9694845)*(2*IEXP-1))/((32768)*(IEXP+1))
56 333 DAR=COEFF*(AAO**IEXP)*((-1)**IEXP)
57 335 DDO=DDO+DAR
58 IEXP=IEXP+1
59 DAR=DAR*AAO*(2*IEXP-1)*(-1)/(IEXP+1)
60 DARA=ABS(DAR)
61 IF (DARA-10) 213,214,214
62 213 CONTINUE
63 IF(DARA-.0000001) 367,367,335
64 214 DDO=(-ZFAC+SQRT(ZFAC*(ZFAC+2*ZAP)))/ZAP
65 367 CONTINUE
66 327 IF(ITT) 81,81,82
67 81 ITT=1
68 SIG=SIG1(IJ)
69 PP=DDO
70 GO TO 16
71 82 DD1=DDO
72 DDO=PP
73 609 IF(DDO-DD1-0.0000001) 616,616,610
74 616 PRINT,"EQUAL"
75 PRINT 6,DDO,DD1
76 GO TO 26
77 610 D=1
78 611 S1=SQRT(DD*SIG1(IJ))
79 SO=SQRT(DD*SIGO(K))
80 IGG=0
81 39 U1=(B1(IK)+ALPHA(J)-D*B1(IK))/(D*S1)
82 AL1=(B1(IK)-ALPHA(J)-D*B1(IK))/(D*S1)
83 UO=(B1(IK)+ALPHA(J)-D*B1(IK))/(D*SO)
84 ALO=(B1(IK)-ALPHA(J)-D*B1(IK))/(D*SO)
85 IF(U1-10) 716,717,717
86 717 IF(AL1+10) 714,714,715
87 714 PROBIT=0
88 GO TO 719
89 715 PROBIT=GP(AL1)
90 GO TO 719
91 716 IF(AL1+10) 724,724,725
92 724 PROBIT=1-GP(U1)
93 GO TO 719
94 725 PROBIT=1-GP(U1)+GP(AL1)
95 719 CONTINUE
96 IF(UO-10) 816,817,817
97 817 IF(ALO+10) 814,814,815
98 814 PROBOT=0
99 GO TO 720
100 815 PROBOT=GP(ALO)
101 GO TO 720
102 816 IF(ALO+10) 824,824,825
103 824 PROBOT=1-GP(UO)
104 GO TO 720
105 825 PROBOT=1-GP(UO)+GP(ALO)
106 720 IF(IGG-1) 41,46,48
107 41 PRINT,"FOR EPSILON =",ALPHA(J),"AND M =",B1(IK)
108 PRINT,"BLUE PROBABILITIES ARE :"
109 PBO=1-PROBOT
110 PB1=1-PROBIT
111 PRINT 6,PBO,PB1
112 IGG=1
113 D=DDO
114 PRINT,"THE UPPER AND LOWER BOUNDS FOR K ARE:"
115 PRINT 6,DDO,DD1
116 GO TO 39
117 46 CONTINUE
118 PZO=1-PROBOT-PBO
119 PZ1=1-PROBIT-PB1
120 IF(PZO-PZ1) 238,238,239
121 238 CONTINUE
122 PRINT,"MAXI-MIN TYPE B SOLUTION: K="
123 PRINT 7,DDO
124 PRINT,"FOR THE DERIVED ESTIMATOR, THE PROBABILITY DIFFERENCES ARE:"
125 PRINT 6,PZO,PZ1
126 GO TO 26
127 239 IGG=2
128 DDU=DDO
129 DDL=DD1
130 888 DDT=(DDU+DDL)/2
131 MID=MID+1
132 IF(DDU-DDL-0.00000001) 874,874,875
133 874 CONTINUE
134 PRINT,"THE MAXI-MIN ITERATIVE TYPE B SOLUTION IS BETWEEN :"
135 PRINT 6,DDU,DDL
136 PRINT,"THE PROBABILITY DIFFERENCES ARE :"
137 PRINT 6,PZOO,PZ11
138 PRINT,"NUMBER OF ITERATIONS ="
139 PRINT,MID
140 MID=0
141 GO TO 26
142 875 D=DDT
143 GO TO 39
144 48 PZOO=1-PROBOT-PBO
145 PZ11=1-PROBIT-PB1
146 IF(PZOO-PZ11) 208,209,210
147 208 DDL=DDT
148 GO TO 888
149 210 DDU=DDT
150 GO TO 888
151 209 CONTINUE
152 PRINT,"PROBABILITY EQUALITY, K="
153 PRINT 7,DDT
154 PRINT,"NUMBER OF ITERATIONS ="
155 PRINT,MID
156 MID=0
157 PRINT,"THE PROBABILITY DIFFERENCES ARE:"
158 PRINT 6,PZOO,PZ11
159 7 FORMAT(10X,F11.9)
160 6 FORMAT(2(10X,F11.9))
161 26 CONTINUE
162 27 PRINT ///
163 30 CONTINUE
164 35 CONTINUE
165 40 CONTINUE
166 PRINT ////
167 GO TO 4
168 END
169 FUNCTION GP(X)
170 GP=0
171 Y=X*X
172 IF(324-Y)2
173 Z=.3989422804*EXP(-Y/2)
174 IF(Y-9)3
175 Y=11/X
176 DO 1 C=1,10
177 1 Y=(11-C)/(Y+X)
178 GP=-Z/(Y+X)
179 2 IF(X)5
180 GP=1+GP
181 GO TO 5
182 3 GP=Z
183 C=1
184 4 C=C+2
185 Z=Z*Y/C
186 GP=GP+Z
187 IF(GP-Z*1E8)4
188 GP=.5+X*GP
189 5 CONTINUE
190 END