CHAPTER 4. MAXIMUM LIKELIHOOD ESTIMATION
Definition: Let X be a random sample with joint p.m./d.f. f_X(x|θ). The generalised likelihood ratio test (g.l.r.t.) of the NH: θ ∈ H_0 against the alternative AH: θ ∈ H_1, given the observed value x, rejects the NH at α-level of significance whenever
\[
\Lambda(x) = \frac{\sup_{\theta \in H_0} f_X(x|\theta)}{\sup_{\theta \in \Theta} f_X(x|\theta)} = \frac{L(\ddot{\theta})}{L(\dot{\theta})} < c
\]
where c is such that
\[
\sup_{\theta \in H_0} \Pr(\Lambda(X) < c) = \alpha .
\]
Thus the g.l.r.t. has rejection region
\[
R = \left\{ x : \Lambda(x) = \frac{\sup_{\theta \in H_0} f_X(x|\theta)}{\sup_{\theta \in \Theta} f_X(x|\theta)} = \frac{L(\ddot{\theta})}{L(\dot{\theta})} < c \right\} .
\]
Example: Consider the last example again, in which we wish to test whether production in the two plants is uniform, i.e. test
\[
NH: \theta_1 = \theta_2 \quad\text{against}\quad AH: \theta_1 \neq \theta_2
\]
where θ_i is the parameter of the exponential distribution of the failure times of components produced in plant i, i = 1, 2. Find the g.l.r.t. procedure in this case.
Solution: We have seen that the likelihood is
\[
L(\theta) = f_X(x|\theta) = \theta_1^n e^{-\theta_1 T}\, \theta_2^m e^{-\theta_2 S}
\]
where T = \sum_{i=1}^n z_i and S = \sum_{i=1}^m y_i, and the log-likelihood is
\[
\ell(\theta) = n\log\theta_1 - \theta_1 T + m\log\theta_2 - \theta_2 S .
\]
Hence
\[
\frac{\partial \ell(\theta)}{\partial \theta_1} = 0 \;\Leftrightarrow\; \frac{n}{\theta_1} - T = 0
\qquad\text{and}\qquad
\frac{\partial \ell(\theta)}{\partial \theta_2} = 0 \;\Leftrightarrow\; \frac{m}{\theta_2} - S = 0
\;\Rightarrow\;
\dot{\theta}_1 = \frac{n}{T}, \quad \dot{\theta}_2 = \frac{m}{S},
\]
i.e. \dot{\theta} = (n/T,\, m/S)^T, and
\[
L(\dot{\theta}) = \left(\frac{n}{T}\right)^n e^{-\frac{n}{T}T} \left(\frac{m}{S}\right)^m e^{-\frac{m}{S}S}
= \frac{n^n m^m}{T^n S^m}\, e^{-(n+m)} .
\]
We have already seen that when the NH is true the restricted m.l.e.'s are
\[
\ddot{\theta}_1 = \ddot{\theta}_2 = \frac{n+m}{T+S}
\]
and hence
\[
L(\ddot{\theta}) = \left(\frac{n+m}{T+S}\right)^n e^{-\frac{n+m}{T+S}T} \left(\frac{n+m}{T+S}\right)^m e^{-\frac{n+m}{T+S}S}
= \left(\frac{n+m}{T+S}\right)^{n+m} e^{-(n+m)} .
\]
The g.l.r.t. statistic is therefore
\[
\Lambda(x) = \frac{L(\ddot{\theta})}{L(\dot{\theta})}
= \frac{\left(\frac{n+m}{T+S}\right)^{n+m} e^{-(n+m)}}{\frac{n^n m^m}{T^n S^m}\, e^{-(n+m)}}
= \text{constant} \times \frac{T^n S^m}{(T+S)^{n+m}}
\]
and the rejection region of the g.l.r.t. is
\begin{align*}
R &= \{x : \Lambda(x) < c\} \\
&= \left\{x : \text{constant} \times \frac{T^n S^m}{(T+S)^{n+m}} < c\right\} \\
&= \left\{x : \frac{T^n S^m}{(T+S)^{n+m}} < c'\right\} \\
&= \left\{x : \frac{(T/S)^n}{(1+T/S)^{n+m}} < c'\right\} \\
&= \left\{x : \frac{U^n}{(1+U)^{n+m}} < c'\right\}
\end{align*}
where
\[
U = \frac{T}{S} = \frac{\sum_{i=1}^n z_i}{\sum_{i=1}^m y_i} .
\]
Figure 4.8: Plot of the statistic U^n/(1+U)^{n+m} against U. The condition U^n/(1+U)^{n+m} < c' is equivalent to U < u_1 or U > u_2.
From the figure above we see that the rejection region can also be written as
\begin{align*}
R &= \{x : U < u_1 \text{ or } U > u_2\} \\
&= \left\{x : \tfrac{m}{n}U < v_1 \text{ or } \tfrac{m}{n}U > v_2\right\} \\
&= \{x : V < v_1 \text{ or } V > v_2\}
\end{align*}
where
\[
V = U\,\frac{m}{n} = \frac{T/n}{S/m} = \frac{\bar z}{\bar y} = \frac{\text{average of the z-values}}{\text{average of the y-values}} .
\]
To find v_1 and v_2 we must know the distribution of V when the NH is true. Since the Z_i's are exponentially distributed with parameter θ_1,
\[
T = \sum_{i=1}^n Z_i \sim \text{Gamma}(n, \theta_1),
\]
i.e. 2θ_1 T ∼ χ²_{2n}. Similarly 2θ_2 S ∼ χ²_{2m}. Since the Z_i's and Y_i's are independent,
\[
\frac{2\theta_1 T/2n}{2\theta_2 S/2m} \sim \frac{\chi^2_{2n}/2n}{\chi^2_{2m}/2m} \sim F_{2n,2m} .
\]
When the NH is true θ_1 = θ_2 and hence
\[
V = \frac{T/n}{S/m} = \frac{2\theta_1 T/2n}{2\theta_2 S/2m} \sim F_{2n,2m} .
\]
The values v_1 and v_2 determining the rejection region can therefore be obtained from the F_{2n,2m} tables such that
\[
\Pr(v_1 \le F_{2n,2m} \le v_2) = 1 - \alpha .
\]
Example: Let X be a random sample of n independent observations from the Bin(m, θ) distribution. Find the g.l.r.t. of NH: θ ≤ θ' against the alternative AH: θ > θ' for some fixed and known value θ'.
Solution: The likelihood is
\[
L(\theta; x) = \prod_{i=1}^n \binom{m}{x_i} \theta^{x_i} (1-\theta)^{m-x_i} \propto \theta^t (1-\theta)^{nm-t}
\]
where t = \sum_{i=1}^n x_i. Consequently
\[
\sup_{\theta \in \Theta} L(\theta; x) \propto \sup_{\theta \in (0,1)} \theta^t (1-\theta)^{nm-t}
= \left(\frac{t}{nm}\right)^t \left(1 - \frac{t}{nm}\right)^{nm-t}
\]
and
\[
\sup_{\theta \in H_0} L(\theta; x) \propto \sup_{\theta \le \theta'} \theta^t (1-\theta)^{nm-t}
= \begin{cases}
\left(\frac{t}{nm}\right)^t \left(1 - \frac{t}{nm}\right)^{nm-t} & \text{if } \frac{t}{nm} \le \theta' \\[4pt]
(\theta')^t (1-\theta')^{nm-t} & \text{if } \frac{t}{nm} > \theta'
\end{cases}
\]
Thus
\[
\Lambda(x) = \Lambda(t) = \frac{\sup_{\theta \in H_0} L(\theta; x)}{\sup_{\theta \in \Theta} L(\theta; x)}
= \begin{cases}
1 & \text{if } t \le nm\theta' \\[4pt]
\dfrac{(\theta')^t (1-\theta')^{nm-t}}{\left(\frac{t}{nm}\right)^t \left(1-\frac{t}{nm}\right)^{nm-t}} & \text{if } t > nm\theta'
\end{cases}
= \begin{cases}
1 & \text{if } t \le nm\theta' \\[4pt]
\left(\dfrac{nm\theta'}{t}\right)^t \left(\dfrac{nm - nm\theta'}{nm - t}\right)^{nm-t} & \text{if } t > nm\theta'
\end{cases}
\]
Figure 4.9 gives the plot of Λ(t), from which it can be seen that it is a non-increasing function of t, and consequently the rejection region is given by
\[
R = \{x : \Lambda(t) < c_\alpha\} = \{x : t > k_\alpha\}
\]
Figure 4.9: The likelihood L(θ) of the binomial data (maximised at θ = t/nm) and the g.l.r.t. statistic Λ(t), which equals 1 for t ≤ nmθ' and falls below c at t = k_α.
where k_α is such that
\[
\sup_{\theta \le \theta'} \Pr(X \in R) = \sup_{\theta \le \theta'} \Pr\left(\sum_{i=1}^n X_i > k_\alpha\right) = \alpha = \text{level of significance} .
\]
But \sum_{i=1}^n X_i \sim \text{Bin}(nm, \theta). Further, since \Pr(Y > k|\theta) is increasing in θ when Y ∼ Bin(r, θ), it follows that
\[
\sup_{\theta \le \theta'} \Pr\left(\sum_{i=1}^n X_i > k_\alpha\right) = \Pr\left(\sum_{i=1}^n X_i > k_\alpha \,\Big|\, \theta = \theta'\right)
\]
and k_α satisfies
\[
\sum_{j=k_\alpha+1}^{nm} \binom{nm}{j} (\theta')^j (1-\theta')^{nm-j} = \alpha .
\]
Note that there may not exist an integer k_α which satisfies the above equation. If it does not, we then choose k_α such that
\[
\sum_{j=k_\alpha+1}^{nm} \binom{nm}{j} (\theta')^j (1-\theta')^{nm-j} < \alpha
\quad\text{and}\quad
\sum_{j=k_\alpha}^{nm} \binom{nm}{j} (\theta')^j (1-\theta')^{nm-j} > \alpha,
\]
i.e. k_α is the smallest integer for which the probability that a Bin(nm, θ') random variable will exceed it is less than α.
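As a small numerical sketch (with made-up values of n, m and θ'), this exact cutoff can be found by scanning the binomial upper tail with the standard library:

```python
# Hypothetical search for the exact cutoff k_alpha: the smallest integer
# whose binomial upper-tail probability under theta' falls below alpha.
from math import comb

def upper_tail(k, N, p):
    """Pr(Y > k) for Y ~ Bin(N, p)."""
    return sum(comb(N, j) * p ** j * (1 - p) ** (N - j)
               for j in range(k + 1, N + 1))

n, m, theta0, alpha = 10, 5, 0.3, 0.05   # illustrative values; theta0 is theta'
N = n * m
k_alpha = next(k for k in range(N + 1) if upper_tail(k, N, theta0) < alpha)
print(k_alpha)
```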
If n is large, we can use the Normal approximation to the Bin(nm, θ') distribution and hence obtain k_α as the solution to
\[
\Pr\big(N(nm\theta',\, nm\theta'(1-\theta')) > k_\alpha\big) = \alpha
\]
i.e.
\[
\Pr\left(N(0,1) > \frac{k_\alpha - nm\theta'}{\sqrt{nm\theta'(1-\theta')}}\right) = \alpha
\]
i.e.
\[
\frac{k_\alpha - nm\theta'}{\sqrt{nm\theta'(1-\theta')}} = z_\alpha
\quad\text{or}\quad
k_\alpha = nm\theta' + z_\alpha \sqrt{nm\theta'(1-\theta')}
\]
where z_α is such that Φ(z_α) = 1 − α, Φ(·) being the Standard Normal distribution function. Consequently the rejection region is
\[
R = \left\{x : \frac{\sum_{i=1}^n x_i - nm\theta'}{\sqrt{nm\theta'(1-\theta')}} > z_\alpha\right\} .
\]
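The normal-approximation formula can likewise be sketched without tables; the quantile routine below is a hypothetical helper (bisection on Φ via math.erf), and the sample sizes are again made up.

```python
# Hypothetical normal-approximation cutoff k_alpha = nm*theta' + z_alpha*sqrt(...).
import math

def z_quantile(p):
    """Solve Phi(z) = p by bisection, using Phi(z) = (1 + erf(z/sqrt(2)))/2."""
    phi = lambda x: 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))
    lo, hi = -10.0, 10.0
    for _ in range(80):
        mid = 0.5 * (lo + hi)
        if phi(mid) < p:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

n, m, theta0, alpha = 10, 5, 0.3, 0.05      # illustrative values
z = z_quantile(1 - alpha)                   # z_alpha, about 1.645
k = n * m * theta0 + z * math.sqrt(n * m * theta0 * (1 - theta0))
print(round(z, 3), round(k, 1))
```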
4.5 Asymptotic form of the g.l.r.t.
Result: In testing the NH: θ ∈ H_0 against the alternative AH: θ ∈ Θ − H_0, where
\[
H_0 = \{\theta : h_1(\theta) = 0,\ h_2(\theta) = 0,\ \ldots,\ h_r(\theta) = 0\},
\]
and provided the sample size on which the test is based is large, then under mild regularity conditions
\[
-2\log\Lambda(X) = -2\log\frac{L(\ddot{\theta})}{L(\dot{\theta})} = 2[\ell(\dot{\theta}) - \ell(\ddot{\theta})]
\]
is approximately chi-squared distributed with r degrees of freedom when the null hypothesis NH is true. The critical region of the g.l.r.t. can therefore be taken as
\[
R = \left\{x : -2\log\Lambda(x) \ge \chi^2_{r,\alpha}\right\}
\]
where χ²_{r,α} is taken from the chi-squared tables and is such that if W ∼ χ²_r then Pr(W ≥ χ²_{r,α}) = α, with α the level of significance of the test. The degrees of freedom are equal to the number of independent side conditions used to specify the null hypothesis. (Note that −2 log Λ(x) ∼ χ²_r independently of θ as long as θ ∈ H_0.) In the above, Λ(x) is the g.l.r.t. statistic, L(θ) is the likelihood function, ℓ(θ) is the log-likelihood function, θ̇ the m.l.e. of θ and θ̈ the restricted m.l.e. in H_0.
Sketch of proof: Since
\[
\Lambda(x) = \frac{L(\ddot{\theta})}{L(\dot{\theta})} = \frac{\sup_{\theta \in H_0} L(\theta)}{\sup_{\theta \in \Theta} L(\theta)}
\]
we have that
\[
-2\log\Lambda(x) = 2\left\{\log L(\dot{\theta}) - \log L(\ddot{\theta})\right\} = 2\left\{\ell(\dot{\theta}) - \ell(\ddot{\theta})\right\}
\]
where θ̇ is the m.l.e. of θ and θ̈ is the restricted m.l.e. However, we have seen that both θ̇ and θ̈ emerge as turning points of the likelihood L(θ) (and hence of the log-likelihood ℓ(θ)), and when the null hypothesis is true, θ̇ and θ̈ are close to each other with high probability. Hence, expanding ℓ(θ̈) about θ̇, we get to a second-order approximation
\begin{align}
-2\log\Lambda(x) &= 2\left\{\ell(\dot{\theta}) - \ell(\dot{\theta}) - \sum_{i=1}^k \left(\ddot{\theta}_i - \dot{\theta}_i\right) \frac{\partial \ell(\dot{\theta})}{\partial \theta_i} - \frac{1}{2}\sum_{i=1}^k \sum_{j=1}^k \left(\ddot{\theta}_i - \dot{\theta}_i\right)\left(\ddot{\theta}_j - \dot{\theta}_j\right) \frac{\partial^2 \ell(\dot{\theta})}{\partial \theta_i \partial \theta_j}\right\} \nonumber \\
&= -\sum_{i=1}^k \sum_{j=1}^k \left(\ddot{\theta}_i - \dot{\theta}_i\right)\left(\ddot{\theta}_j - \dot{\theta}_j\right) \frac{\partial^2 \ell(\dot{\theta})}{\partial \theta_i \partial \theta_j} \tag{4.19}
\end{align}
since the first-derivative terms vanish: ∂ℓ(θ̇)/∂θ_i = 0 because θ̇ is a turning point of ℓ.
By arguments similar to those that produced (4.8) we have, by the law of large numbers,
\[
\frac{\partial^2 \ell(\theta)}{\partial \theta_i \partial \theta_j} \to E\left(\frac{\partial^2 \ell(\theta)}{\partial \theta_i \partial \theta_j}\right) = -I_{ij}(\theta) \qquad \text{as } n \to \infty,
\]
with I_{ij}(θ) the (i, j)th element of the Fisher Information matrix I(θ). Hence for large n (4.19) is further approximated by
\begin{align}
-2\log\Lambda(x) &\approx \sum_{i=1}^k \sum_{j=1}^k \left(\ddot{\theta}_i - \dot{\theta}_i\right)\left(\ddot{\theta}_j - \dot{\theta}_j\right) I_{ij}(\dot{\theta}) \nonumber \\
&= \left(\ddot{\theta} - \dot{\theta}\right)^T I(\dot{\theta}) \left(\ddot{\theta} - \dot{\theta}\right) \tag{4.20}
\end{align}
Finally, since θ̇ and θ_0 are close to each other with high probability, we can introduce the further approximation
\[
-2\log\Lambda(x) \approx \left(\ddot{\theta} - \dot{\theta}\right)^T I(\theta_0) \left(\ddot{\theta} - \dot{\theta}\right) .
\]
But since, as we have seen, when the null hypothesis is true both θ̇ and θ̈ are multivariate Normally distributed, so is θ̈ − θ̇, and it follows that −2 log Λ(x) is approximately a quadratic form in a Normally distributed random vector with zero mean vector. It can be shown that such a quadratic form is chi-squared distributed. (This is an extension of the result that if W ∼ N(0, σ²) then W²/σ² ∼ χ²_1.)
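The result can also be checked by simulation. The sketch below (with illustrative sample sizes and replication count) applies it to the earlier two-sample exponential test, where the NH imposes r = 1 side condition:

```python
# Monte Carlo check that -2 log Lambda behaves like chi-squared_1 under
# the NH of the two-sample exponential test; parameters are illustrative.
import math
import random

random.seed(0)
n, m, theta = 40, 50, 2.0
reps, exceed = 2000, 0
crit = 3.841                      # chi-squared_{1, 0.05} critical value

for _ in range(reps):
    T = sum(random.expovariate(theta) for _ in range(n))
    S = sum(random.expovariate(theta) for _ in range(m))
    # -2 log Lambda = 2[l(theta-dot) - l(theta-ddot)] for this model
    stat = 2 * (n * math.log(n / T) + m * math.log(m / S)
                - (n + m) * math.log((n + m) / (T + S)))
    if stat > crit:
        exceed += 1

print(exceed / reps)              # should be close to the nominal 0.05
```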
Example: Let X_{ij}, j = 1, 2, …, n_i, be the failure times of a random sample of n_i electronic components selected from the production line of the i-th manufacturer, i = 1, 2, 3, and assume that they are independently and exponentially distributed with means which may differ from manufacturer to manufacturer. Assuming that n_1, n_2 and n_3 are large, construct an approximate test of the null hypothesis that the mean times to failure are the same for the three companies. If n_1 = n_2 = n_3 = 20 and
\[
\sum_j X_{1j} = 3106, \qquad \sum_j X_{2j} = 5620, \qquad \sum_j X_{3j} = 3912,
\]
carry out the test and report your conclusions.
Solution: \{X_{ij}\}_{j=1}^{n_i} is a random sample from the exponential distribution with parameter θ_i (mean 1/θ_i), i = 1, 2, 3, and we wish to test the hypothesis
\[
NH: \theta_1 = \theta_2 = \theta_3 \quad\text{against}\quad AH: \text{the } \theta_i \text{ are not all equal} .
\]
Here θ = (θ_1, θ_2, θ_3) and the likelihood of the observed values is
\[
L(\theta) = \prod_{j=1}^{n_1} \theta_1 e^{-\theta_1 x_{1j}} \prod_{j=1}^{n_2} \theta_2 e^{-\theta_2 x_{2j}} \prod_{j=1}^{n_3} \theta_3 e^{-\theta_3 x_{3j}}
= \theta_1^{n_1} e^{-\theta_1 t_1}\, \theta_2^{n_2} e^{-\theta_2 t_2}\, \theta_3^{n_3} e^{-\theta_3 t_3}
\]
where t_1 = \sum_{j=1}^{n_1} x_{1j}, t_2 = \sum_{j=1}^{n_2} x_{2j} and t_3 = \sum_{j=1}^{n_3} x_{3j}. The log-likelihood is
\[
\ell(\theta) = n_1 \log\theta_1 + n_2 \log\theta_2 + n_3 \log\theta_3 - \theta_1 t_1 - \theta_2 t_2 - \theta_3 t_3 .
\]
Hence
\[
\frac{\partial \ell(\theta)}{\partial \theta_i} = 0 \;\Leftrightarrow\; \frac{n_i}{\theta_i} - t_i = 0 \;\Leftrightarrow\; \dot{\theta}_i = \frac{n_i}{t_i}, \qquad i = 1, 2, 3 .
\]
When NH is true, i.e. when θ_1 = θ_2 = θ_3 (= θ, say), the likelihood function reduces to
\[
L(\theta) = \theta^{n_1+n_2+n_3} e^{-\theta(t_1+t_2+t_3)}
\]
and
\[
\ell(\theta) = \log L(\theta) = (n_1+n_2+n_3)\log\theta - \theta(t_1+t_2+t_3) .
\]
Hence
\[
\frac{\partial \ell(\theta)}{\partial \theta} = 0 \;\Leftrightarrow\; \frac{n_1+n_2+n_3}{\theta} - (t_1+t_2+t_3) = 0,
\]
i.e.
\[
\ddot{\theta} = \frac{n_1+n_2+n_3}{t_1+t_2+t_3} = \ddot{\theta}_1 = \ddot{\theta}_2 = \ddot{\theta}_3 .
\]
Consequently
\[
L(\ddot{\theta}) = \left(\frac{n_1+n_2+n_3}{t_1+t_2+t_3}\right)^{n_1+n_2+n_3} e^{-(n_1+n_2+n_3)}
\]
and
\[
L(\dot{\theta}) = \left(\frac{n_1}{t_1}\right)^{n_1} e^{-n_1} \left(\frac{n_2}{t_2}\right)^{n_2} e^{-n_2} \left(\frac{n_3}{t_3}\right)^{n_3} e^{-n_3}
= \frac{n_1^{n_1} n_2^{n_2} n_3^{n_3}\, e^{-(n_1+n_2+n_3)}}{t_1^{n_1} t_2^{n_2} t_3^{n_3}} .
\]
Hence the g.l.r.t. statistic is
\[
\Lambda(x) = \frac{L(\ddot{\theta})}{L(\dot{\theta})}
= \frac{(n_1+n_2+n_3)^{n_1+n_2+n_3}}{n_1^{n_1} n_2^{n_2} n_3^{n_3}} \cdot \frac{t_1^{n_1} t_2^{n_2} t_3^{n_3}}{(t_1+t_2+t_3)^{n_1+n_2+n_3}}
= \left(\frac{n}{t}\right)^n \left(\frac{t_1}{n_1}\right)^{n_1} \left(\frac{t_2}{n_2}\right)^{n_2} \left(\frac{t_3}{n_3}\right)^{n_3}
\]
where n = n_1 + n_2 + n_3 and t = t_1 + t_2 + t_3. The asymptotic form of the g.l.r.t. statistic is
\[
-2\log\Lambda(x) = 2\left[\ell(\dot{\theta}) - \ell(\ddot{\theta})\right]
= 2\left[n\log\left(\frac{t}{n}\right) - n_1\log\left(\frac{t_1}{n_1}\right) - n_2\log\left(\frac{t_2}{n_2}\right) - n_3\log\left(\frac{t_3}{n_3}\right)\right]
\sim \chi^2_2 \quad\text{when the null hypothesis is true.}
\]
(Note that the NH is specified in terms of two conditions, since
\[
\theta_1 = \theta_2 = \theta_3 \;\Leftrightarrow\;
\begin{cases} \theta_1 - \theta_2 = 0 \\ \theta_1 - \theta_3 = 0 \end{cases}
\]
hence the χ²_2 distribution.)
For the data given
\[
-2\log\Lambda = 2\left\{60\log\left(\frac{3106 + 5620 + 3912}{60}\right) - 20\log\left(\frac{3106}{20}\right) - 20\log\left(\frac{5620}{20}\right) - 20\log\left(\frac{3912}{20}\right)\right\} = 3.62 .
\]
Since 3.62 < χ²_{2,0.05} = 5.99 there is no evidence, at the 5% level, to reject the null hypothesis.
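For reference, the arithmetic above can be reproduced in a few lines (a sketch of the computation, not part of the original text):

```python
# Computing the asymptotic g.l.r.t. statistic for the three exponential
# samples; the data are those given in the example.
import math

ns = [20, 20, 20]          # sample sizes n1, n2, n3
ts = [3106, 5620, 3912]    # totals t1, t2, t3
n, t = sum(ns), sum(ts)

stat = 2 * (n * math.log(t / n)
            - sum(ni * math.log(ti / ni) for ni, ti in zip(ns, ts)))
print(round(stat, 2))      # about 3.62, below chi-squared_{2,0.05} = 5.99
```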
An important example: Observations fall independently in one of four categories C_1, C_2, C_3 and C_4 with respective probabilities θ_1, θ_2, θ_3, θ_4, where θ_1 + θ_2 + θ_3 + θ_4 = 1. The following hypothesis is put forward:
\[
NH: \theta_1 = \beta^2, \quad \theta_2 = \tfrac{1}{2}\beta(1-\beta), \quad \theta_3 = \tfrac{3}{2}\beta(1-\beta), \quad \theta_4 = (1-\beta)^2 \quad\text{with } \beta = 0.6 .
\]
In a random sample of 100 such observations the numbers falling in the four categories were
\[
x_1 = 10, \quad x_2 = 13, \quad x_3 = 37 \quad\text{and}\quad x_4 = 40 .
\]
Perform a test of approximate 5% level of significance to test this hypothesis against the alternative that at least one of the equalities in NH does not hold; show that there is evidence to reject NH.
Solution: Note that NH really says
\[
\theta_1 = 0.36, \qquad \theta_2 = 0.12, \qquad \theta_3 = 0.36, \qquad \theta_4 = 0.16 .
\]
Note also that since the θ_i's add up to 1, the last one is determined as soon as we know the first three θ_i's; consequently NH involves only 3 independent equations/conditions.
The likelihood of θ = (θ_1, θ_2, θ_3, θ_4)^T for the result x_i observations in the i-th category, i = 1, 2, 3, 4, when n are sampled is given by the multinomial probability
\[
L(\theta) = \frac{n!}{x_1!\, x_2!\, x_3!\, x_4!}\, \theta_1^{x_1} \theta_2^{x_2} \theta_3^{x_3} \theta_4^{x_4}
\propto \theta_1^{x_1} \theta_2^{x_2} \theta_3^{x_3} (1 - \theta_1 - \theta_2 - \theta_3)^{x_4}
\]
where n = x_1 + x_2 + x_3 + x_4 and
\[
\ell(\theta) = x_1\log\theta_1 + x_2\log\theta_2 + x_3\log\theta_3 + x_4\log(1 - \theta_1 - \theta_2 - \theta_3) + \text{const} .
\]
Hence, differentiating w.r.t. θ_i, i = 1, 2, 3, and equating to zero to obtain the m.l.e., we get
\[
\frac{\partial \ell(\theta)}{\partial \theta_i} = 0 \;\Leftrightarrow\; \frac{x_i}{\dot{\theta}_i} - \frac{x_4}{1 - \dot{\theta}_1 - \dot{\theta}_2 - \dot{\theta}_3} = 0, \qquad i = 1, 2, 3,
\]
i.e.
\[
\dot{\theta}_i = \frac{x_i}{x_4}\,\dot{\theta}_4, \qquad i = 1, 2, 3, 4. \tag{4.21}
\]
Since \sum_{i=1}^4 \dot{\theta}_i = 1, adding the four equations above gives
\[
1 = \frac{x_1 + x_2 + x_3 + x_4}{x_4}\,\dot{\theta}_4 = \frac{n}{x_4}\,\dot{\theta}_4 \;\Rightarrow\; \dot{\theta}_4 = \frac{x_4}{n} .
\]
Replacing this in (4.21) we get the m.l.e.
\[
\dot{\theta}_i = \frac{x_i}{n}, \qquad i = 1, 2, 3, 4,
\]
and hence the maximised log-likelihood function
\[
\ell(\dot{\theta}) = \sum_{i=1}^4 x_i \log\dot{\theta}_i + \text{const} = \sum_{i=1}^4 x_i \log\left(\frac{x_i}{n}\right) + \text{const} .
\]
When NH is true, H_0 is a one-point set, namely H_0 = {θ = (0.36, 0.12, 0.36, 0.16)}. Hence
\[
\ddot{\theta} = (0.36, 0.12, 0.36, 0.16) = (\pi_1, \pi_2, \pi_3, \pi_4)
\]
and the maximised log-likelihood under the NH is
\[
\ell(\ddot{\theta}) = \sum_{i=1}^4 x_i \log\pi_i + \text{const} .
\]
Therefore
\[
-2\log\Lambda(x) = 2\left[\ell(\dot{\theta}) - \ell(\ddot{\theta})\right]
= 2\left[\sum_{i=1}^4 x_i \log\left(\frac{x_i}{n}\right) - \sum_{i=1}^4 x_i \log\pi_i\right]
= 2\sum_{i=1}^4 x_i \log\left(\frac{x_i}{n\pi_i}\right) \sim \chi^2_3 \quad\text{when NH is true.}
\]
Note that
\[
n\pi_i = \text{expected number of observations out of } n \text{ to fall in the } i\text{-th category when NH is true} = e_i
\]
and
\[
x_i = \text{observed number of observations out of } n \text{ that fall in the } i\text{-th category} = o_i .
\]
Thus the test statistic has the form
\[
-2\log\Lambda(x) = 2\sum_{i=1}^4 o_i \log\left(\frac{o_i}{e_i}\right) .
\]
For the given results
\[
-2\log\Lambda(x) = 2\left(10\log\frac{10}{36} + 13\log\frac{13}{12} + 37\log\frac{37}{36} + 40\log\frac{40}{16}\right) = 51.79 > \chi^2_{3,0.05} = 7.815 .
\]
There is, therefore, evidence at the 5% level of significance against the null hypothesis, which is therefore rejected.
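The statistic can be verified directly (a sketch of the computation with the data of this example):

```python
# Multinomial g.l.r.t. (the G-statistic) for the four-category example.
import math

o = [10, 13, 37, 40]              # observed counts
pi = [0.36, 0.12, 0.36, 0.16]     # category probabilities under NH
n = sum(o)
e = [n * p for p in pi]           # expected counts under NH

stat = 2 * sum(oi * math.log(oi / ei) for oi, ei in zip(o, e))
print(round(stat, 2), stat > 7.815)   # prints: 51.79 True
```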
(b) We now change the null hypothesis to
\[
NH: \theta_1 = \beta^2, \quad \theta_2 = \tfrac{1}{2}\beta(1-\beta), \quad \theta_3 = \tfrac{3}{2}\beta(1-\beta), \quad \theta_4 = (1-\beta)^2
\]
with β unspecified in (0, 1).
Notice that this null hypothesis is specified in terms of only two independent equations/conditions. As before the equation for θ_4 is redundant; further, since β = \sqrt{\theta_1}, the only independent equations are
\[
\theta_2 = \tfrac{1}{2}\sqrt{\theta_1}\left(1 - \sqrt{\theta_1}\right), \qquad \theta_3 = \tfrac{3}{2}\sqrt{\theta_1}\left(1 - \sqrt{\theta_1}\right) .
\]
Thus when the null hypothesis NH is true the g.l.r.t. statistic −2 log Λ(x) ∼ χ²_2 approximately.
Now, as in part (a), the unrestricted maximised log-likelihood is
\[
\ell(\dot{\theta}) = \sum_{i=1}^4 x_i \log\left(\frac{x_i}{n}\right) + \text{const} .
\]
When NH is true the likelihood is
\[
L(\theta) = L(\beta) = \frac{n!}{x_1!\, x_2!\, x_3!\, x_4!}\, \beta^{2x_1} \left(\tfrac{1}{2}\beta(1-\beta)\right)^{x_2} \left(\tfrac{3}{2}\beta(1-\beta)\right)^{x_3} (1-\beta)^{2x_4}
\propto \beta^{2x_1 + x_2 + x_3} (1-\beta)^{x_2 + x_3 + 2x_4} = \beta^N (1-\beta)^M
\]
where N = 2x_1 + x_2 + x_3 and M = x_2 + x_3 + 2x_4. Thus when NH is true the log-likelihood is
\[
\ell(\beta) = N\log\beta + M\log(1-\beta) + \text{constant}
\]
and
\[
\frac{\partial \ell(\beta)}{\partial \beta} = 0 \;\Leftrightarrow\; \frac{N}{\beta} - \frac{M}{1-\beta} = 0,
\]
i.e. the m.l.e. β̈ of β satisfies
\[
N(1 - \ddot{\beta}) - M\ddot{\beta} = 0 \;\Rightarrow\; \ddot{\beta} = \frac{N}{N+M} = \frac{N}{2n} .
\]
For the given data β̈ = N/(2n) = 70/200 = 0.35.
The restricted m.l.e.'s of the θ_i's when NH is true are
\[
\ddot{\theta}_1 = \ddot{\beta}^2 = \pi_1(\ddot{\beta}), \quad
\ddot{\theta}_2 = \tfrac{1}{2}\ddot{\beta}(1-\ddot{\beta}) = \pi_2(\ddot{\beta}), \quad
\ddot{\theta}_3 = \tfrac{3}{2}\ddot{\beta}(1-\ddot{\beta}) = \pi_3(\ddot{\beta}), \quad
\ddot{\theta}_4 = (1-\ddot{\beta})^2 = \pi_4(\ddot{\beta}) .
\]
Thus
\[
\ell(\ddot{\theta}) = \sum_{i=1}^4 x_i \log\ddot{\theta}_i + \text{const} = \sum_{i=1}^4 x_i \log\pi_i(\ddot{\beta}) + \text{const},
\]
so that
\[
-2\log\Lambda(x) = 2\left[\ell(\dot{\theta}) - \ell(\ddot{\theta})\right]
= 2\left[\sum_{i=1}^4 x_i \log\left(\frac{x_i}{n}\right) - \sum_{i=1}^4 x_i \log\pi_i(\ddot{\beta})\right]
= 2\sum_{i=1}^4 x_i \log\left(\frac{x_i}{n\pi_i(\ddot{\beta})}\right) .
\]
Note again that
\[
n\pi_i(\ddot{\beta}) = \text{estimated expected number falling in the } i\text{-th category} = e_i .
\]
Thus the g.l.r.t. statistic still has the form
\[
-2\log\Lambda(x) = 2\sum_{i=1}^4 o_i \log\left(\frac{o_i}{e_i}\right) .
\]
For the given data
\[
e_1 = 100 \times 0.35^2 = 12.25, \qquad
e_2 = 100 \times \tfrac{1}{2} \times 0.35 \times 0.65 = 11.375,
\]
\[
e_3 = 100 \times \tfrac{3}{2} \times 0.35 \times 0.65 = 34.125, \qquad
e_4 = 100 \times 0.65^2 = 42.25
\]
and
\[
-2\log\Lambda(x) = 2\left[10\log\frac{10}{12.25} + 13\log\frac{13}{11.375} + 37\log\frac{37}{34.125} + 40\log\frac{40}{42.25}\right]
= 1.02 < \chi^2_{2,0.05} = 5.991 .
\]
Thus there is no evidence at the 5% level of significance to reject the null hypothesis.
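Part (b) can be reproduced the same way, now with β estimated first (a sketch using the data above):

```python
# Part (b): estimate beta under the NH, form the fitted expected counts,
# and compute the g.l.r.t. statistic for the four-category data.
import math

o = [10, 13, 37, 40]
n = sum(o)

N = 2 * o[0] + o[1] + o[2]            # N = 2*x1 + x2 + x3
beta = N / (2 * n)                    # restricted m.l.e. of beta
pi = [beta ** 2, 0.5 * beta * (1 - beta),
      1.5 * beta * (1 - beta), (1 - beta) ** 2]
e = [n * p for p in pi]               # fitted expected counts

stat = 2 * sum(oi * math.log(oi / ei) for oi, ei in zip(o, e))
print(round(beta, 2), round(stat, 2))   # prints: 0.35 1.02
```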
This example can be generalized.
Result: The multinomial test
Suppose each observation can fall in one of k categories C_1, C_2, …, C_k with Pr(C_i) = θ_i, i = 1, 2, …, k, \sum_{i=1}^k θ_i = 1, and that n observations are taken independently with x_i falling in C_i, i = 1, 2, …, k, \sum_{i=1}^k x_i = n. Thus the sample joint p.m.f. is
\[
f_X(x|\theta) = \frac{n!}{x_1!\, x_2! \cdots x_k!}\, \theta_1^{x_1} \theta_2^{x_2} \cdots \theta_k^{x_k} .
\]
We formulate the null hypothesis that states that the θ_i's follow the model
\[
NH: \theta_i = \pi_i(\beta), \qquad i = 1, 2, \ldots, k,
\]
with the π_i's given functions involving an unknown s-dimensional parameter β (if no β parameter is involved in the model then s = 0 and the π_i's in NH are given values). For large n, the asymptotic form of the g.l.r.t. statistic is
\[
-2\log\Lambda(x) = 2\sum_{i=1}^k x_i \log\left(\frac{x_i}{n\pi_i(\ddot{\beta})}\right) = 2\sum_{i=1}^k o_i \log\left(\frac{o_i}{e_i}\right) \sim \chi^2_{k-1-s}
\]
(if s = 0 then π_i(β̈) = π_i, the given numerical value in NH), where β̈ is the m.l.e. of β under the null hypothesis, i.e. β̈ maximises \frac{n!}{\prod_{i=1}^k x_i!} \prod_{i=1}^k [\pi_i(\beta)]^{x_i}.
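A generic version of the statistic can be written once and reused (a sketch; the function and argument names are illustrative, not from the text):

```python
# Hypothetical helper implementing the multinomial g.l.r.t. statistic
# described above; pi_hat holds the (possibly fitted) NH probabilities
# and s is the number of parameters estimated under the NH.
import math

def multinomial_glrt(x, pi_hat, s):
    """Return (-2 log Lambda, degrees of freedom k - 1 - s)."""
    n, k = sum(x), len(x)
    stat = 2 * sum(xi * math.log(xi / (n * p)) for xi, p in zip(x, pi_hat))
    return stat, k - 1 - s

# Part (a) of the earlier example: fully specified NH, so s = 0.
stat, df = multinomial_glrt([10, 13, 37, 40], [0.36, 0.12, 0.36, 0.16], 0)
print(round(stat, 2), df)   # prints: 51.79 3
```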