
Homework #4
Prob. 2.5.1
H 0 : p r R  
H 1 : p r R  
 R2
exp  
2
2
 2 0



 R2
exp  
2
2
 2 1



1
0
1
1
(1)
where  0 is known and  1   0 . Assume that we require PF  10 2
1. Construct the upper bound on the power function by assuming a perfect
measurement scheme coupled with a likelihood ratio test.
The LRT is given by

$$\Lambda(R) = \frac{p_{r|H_1}(R \mid H_1)}{p_{r|H_0}(R \mid H_0)} = \frac{\dfrac{1}{\sqrt{2\pi}\,\sigma_1}\exp\left(-\dfrac{R^2}{2\sigma_1^2}\right)}{\dfrac{1}{\sqrt{2\pi}\,\sigma_0}\exp\left(-\dfrac{R^2}{2\sigma_0^2}\right)} = \frac{\sigma_0}{\sigma_1}\exp\left[\frac{R^2}{2}\left(\frac{1}{\sigma_0^2}-\frac{1}{\sigma_1^2}\right)\right] \underset{H_0}{\overset{H_1}{\gtrless}} \eta. \tag{2}$$
Taking the logarithm of both sides of (2) yields

$$\frac{R^2}{2}\left(\frac{1}{\sigma_0^2}-\frac{1}{\sigma_1^2}\right) \underset{H_0}{\overset{H_1}{\gtrless}} \ln\eta + \ln\frac{\sigma_1}{\sigma_0}, \tag{3}$$

$$R^2 \underset{H_0}{\overset{H_1}{\gtrless}} \frac{2\sigma_0^2\sigma_1^2}{\sigma_1^2-\sigma_0^2}\,\ln\!\left(\frac{\eta\,\sigma_1}{\sigma_0}\right) \triangleq \gamma^2, \tag{4}$$
since $\sigma_1 > \sigma_0$ (dividing by $\sigma_1^2 - \sigma_0^2 > 0$ preserves the direction of the inequality). Eq. (4) can be rewritten as

$$Z \triangleq |R| \underset{H_0}{\overset{H_1}{\gtrless}} \gamma. \tag{5}$$
Hence, the probability of false alarm is given by

$$P_F = \Pr(Z \ge \gamma \mid H_0) = \Pr(R \ge \gamma \mid H_0) + \Pr(R \le -\gamma \mid H_0).$$
Since $r$ is symmetric around zero, we have

$$P_F = 2\Pr(R \ge \gamma \mid H_0) = 2\int_{\gamma}^{\infty}\frac{1}{\sqrt{2\pi}\,\sigma_0}\exp\left(-\frac{R^2}{2\sigma_0^2}\right)dR, \tag{6}$$

$$= 2\int_{\gamma/\sigma_0}^{\infty}\frac{1}{\sqrt{2\pi}}\exp\left(-\frac{S^2}{2}\right)dS = 2\,\mathrm{erfc}_*\!\left(\frac{\gamma}{\sigma_0}\right), \tag{7}$$

where the substitution $S = R/\sigma_0$ has been used.
Hence, the threshold is written as

$$\gamma = \sigma_0\,\mathrm{erfc}_*^{-1}\!\left(\frac{P_F}{2}\right), \tag{8}$$

where $\mathrm{erfc}_*^{-1}$ is the inverse function of $\mathrm{erfc}_*$.
By a similar procedure, we obtain

$$P_D = 2\,\mathrm{erfc}_*\!\left(\frac{\gamma}{\sigma_1}\right). \tag{9}$$
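As a quick numerical check (a sketch, not part of the original solution): $\mathrm{erfc}_*$ here is the Gaussian tail integral, which corresponds to `scipy.stats.norm.sf`, and its inverse to `norm.isf`. The values of `sigma0` and `sigma1` below are illustrative choices.

```python
# Sketch: evaluate Eqs. (8)-(9) and check P_D by simulation.
# Assumes erfc*(x) = integral_x^inf (1/sqrt(2 pi)) e^{-t^2/2} dt = norm.sf(x).
import numpy as np
from scipy.stats import norm

sigma0, sigma1 = 1.0, 2.0            # illustrative; sigma1 > sigma0
P_F = 1e-2                           # required false-alarm probability

gamma = sigma0 * norm.isf(P_F / 2)   # Eq. (8): threshold on |R|
P_D = 2 * norm.sf(gamma / sigma1)    # Eq. (9)

rng = np.random.default_rng(0)
R = rng.normal(0.0, sigma1, 100_000)           # samples under H1
print(gamma, P_D, np.mean(np.abs(R) > gamma))  # analytic vs. empirical P_D
```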
2. Is it UMP?
Since (3) can be reduced to the form of (4) without knowing the exact value of $\sigma_1$ (only the fact that $\sigma_1 > \sigma_0$ is needed to fix the direction of the inequality), the test given in (4) is the UMP test.
3. Assuming the result in part 2 is negative, construct the generalized LRT.
Since the only unknown parameter is $\sigma_1$, we only need to estimate $\sigma_1^2$. From the previous homework, the ML estimate of $\sigma_1^2$ is given by

$$\hat{\sigma}^2_{1,\mathrm{ML}} = R^2.$$
Hence, the generalized LRT is written as

$$\Lambda_g(R) = \frac{\max_{\sigma_1^2}\, p_{r|H_1}(R \mid H_1, \sigma_1^2)}{p_{r|H_0}(R \mid H_0)} = \frac{\dfrac{1}{\sqrt{2\pi R^2}}\exp\left(-\dfrac{R^2}{2R^2}\right)}{\dfrac{1}{\sqrt{2\pi}\,\sigma_0}\exp\left(-\dfrac{R^2}{2\sigma_0^2}\right)} = \frac{\sigma_0}{|R|}\exp\left(\frac{R^2}{2\sigma_0^2} - \frac{1}{2}\right) \underset{H_0}{\overset{H_1}{\gtrless}} \eta. \tag{10}$$
Taking the logarithm of both sides of (10), we obtain

$$\ln\Lambda_g(R) = \ln\sigma_0 - \ln|R| + \frac{R^2}{2\sigma_0^2} - \frac{1}{2} \underset{H_0}{\overset{H_1}{\gtrless}} \ln\eta,$$

$$\frac{R^2}{2\sigma_0^2} - \ln|R| \underset{H_0}{\overset{H_1}{\gtrless}} \ln\!\left(\frac{\eta}{\sigma_0}\right) + \frac{1}{2}.$$
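A minimal sketch of the resulting GLRT statistic, assuming scalar observations and known $\sigma_0$; the value of `eta` is an illustrative choice, not from the original problem.

```python
# Sketch: the GLRT decision statistic R^2/(2 sigma0^2) - ln|R|,
# compared against ln(eta/sigma0) + 0.5 from the last display.
import numpy as np

def glrt_statistic(R, sigma0):
    return R**2 / (2 * sigma0**2) - np.log(np.abs(R))

sigma0, eta = 1.0, 2.0                 # illustrative values
threshold = np.log(eta / sigma0) + 0.5
for R in (0.5, 1.0, 2.0, 3.0):
    stat = glrt_statistic(R, sigma0)
    print(R, stat, "H1" if stat > threshold else "H0")
```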
(ANS)
Prob. 2.6.1
The M-hypothesis problem. The conditional densities are

$$p_{r|H_i}(R \mid H_i) = (2\pi)^{-N/2}\,|K_i|^{-1/2}\exp\left[-\frac{1}{2}(R-m_i)^T Q_i (R-m_i)\right],$$

where $Q_i \triangleq K_i^{-1}$.
First we need to find $p_r(R)$, which is given by

$$p_r(R) = \sum_{i=0}^{M-1} P_i\, p_{r|H_i}(R \mid H_i). \tag{1}$$
From Problem 2.3.2, we will say $H_i$ is true if

$$\beta_i = \sum_{j=0}^{M-1} C_{ij}\Pr(H_j \mid R) \tag{2}$$
is smallest. Using Bayes' rule, we have

$$\beta_i = \sum_{j=0}^{M-1} C_{ij}\,\frac{p_{r|H_j}(R \mid H_j)\,P_j}{p_r(R)}. \tag{3}$$
We observe that pr R  is independent of a choice of Hi. As a result, it can be
neglected. Hence, the test becomes:


Say Hi is true if
M 1


 i   Cij pr H R H j Pj ,
j 0
M 1
i

  Cij 2 
j 0
N /2
K 
j

1 / 2 1
 
 1

T
exp  R  m j  Q j R  m j  Pj (4)
 2

is the smallest.
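A sketch of the minimum-risk rule in (4), assuming Gaussian densities via `scipy.stats.multivariate_normal`; the cost matrix, priors, means, and covariances below are illustrative placeholders.

```python
# Sketch of the minimum-risk test in Eq. (4) for M Gaussian hypotheses.
import numpy as np
from scipy.stats import multivariate_normal

def bayes_decision(R, C, priors, means, covs):
    """Return argmin_i of beta_i' = sum_j C[i,j] * P_j * p(R | H_j)."""
    likelihoods = np.array([multivariate_normal.pdf(R, mean=m, cov=K)
                            for m, K in zip(means, covs)])
    risks = C @ (priors * likelihoods)   # beta_i' for each i
    return int(np.argmin(risks))

M, N = 3, 2
C = np.ones((M, M)) - np.eye(M)          # illustrative 0-1 costs
priors = np.full(M, 1 / M)
means = [np.zeros(N), np.array([2.0, 0.0]), np.array([0.0, 2.0])]
covs = [np.eye(N)] * M
print(bayes_decision(np.array([1.8, 0.1]), C, priors, means, covs))  # -> 1
```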
2. If the costs of choosing the correct hypothesis are zero while the costs of choosing an incorrect hypothesis are equal, construct a new test.
We know that the test in (4) is then equivalent to the following: compute

$$L_i(R) = \Pr(H_i \mid R), \tag{5}$$

and choose the largest. Using Bayes' rule, we have

$$L_i(R) = \frac{p_{r|H_i}(R \mid H_i)\,P_i}{p_r(R)}. \tag{6}$$
Taking the logarithm of the above equation, we have

$$\ln L_i(R) = \ln p_{r|H_i}(R \mid H_i) + \ln P_i - \ln p_r(R),$$

$$= \ln\left\{(2\pi)^{-N/2}|K_i|^{-1/2}\exp\left[-\frac{1}{2}(R-m_i)^T Q_i (R-m_i)\right]\right\} + \ln P_i - \ln p_r(R),$$

$$= -\frac{1}{2}(R-m_i)^T Q_i (R-m_i) - \frac{1}{2}\ln|K_i| - \frac{N}{2}\ln 2\pi + \ln P_i - \ln p_r(R). \tag{7}$$
We observe that the third and last terms in (7) are independent of the choice of hypothesis. As a result, the equivalent test is given by:
Compute

$$l_i(R) = -\frac{1}{2}(R-m_i)^T Q_i (R-m_i) - \frac{1}{2}\ln|K_i| + \ln P_i \tag{8}$$

and choose the largest.
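The statistic in (8) is straightforward to compute directly; a sketch under illustrative inputs (equal priors, identity covariance):

```python
# Sketch of l_i(R) from Eq. (8): -(1/2)(R-m_i)^T Q_i (R-m_i)
# - (1/2) ln|K_i| + ln P_i; choose the largest.
import numpy as np

def l_i(R, m, K, P):
    d = R - m
    return (-0.5 * d @ np.linalg.solve(K, d)      # Q_i d via a linear solve
            - 0.5 * np.log(np.linalg.det(K)) + np.log(P))

R = np.array([1.8, 0.1])                          # illustrative observation
means = [np.zeros(2), np.array([2.0, 0.0]), np.array([0.0, 2.0])]
scores = [l_i(R, m, np.eye(2), 1 / 3) for m in means]
print(int(np.argmax(scores)))                     # decides H1 here
```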
(QED)
Prob. 2.6.2
Let K i    n2 I  and hypothesis are equally likely
The test becomes:
Compute

$$l_i(R) = -\frac{1}{2\sigma_n^2}(R-m_i)^T(R-m_i) - \frac{N}{2}\ln\sigma_n^2, \tag{1}$$

and choose the largest; or, equivalently, compute

$$l_i'(R) = (R-m_i)^T(R-m_i) = \sum_{j=1}^{N}(R_j - m_{i,j})^2 \tag{2}$$

and choose the smallest.
The dimensionality of the decision space is equal to the number of hypotheses. For example, consider a binary hypothesis test with mean vectors $m_0$ and $m_1$, respectively.
We have the decision space as

$$l_0'(R) = (R-m_0)^T(R-m_0) = \sum_{j=1}^{N}(R_j - m_{0,j})^2, \tag{3}$$

and

$$l_1'(R) = (R-m_1)^T(R-m_1) = \sum_{j=1}^{N}(R_j - m_{1,j})^2, \tag{4}$$

with the decision rule

$$l_1'(R) \underset{H_1}{\overset{H_0}{\gtrless}} l_0'(R).$$
From (3) and (4), we observe that we say $H_i$ is true if the distance from the observed vector $R$ to the mean vector corresponding to hypothesis $H_i$ is minimum. In other words, it is a minimum-distance decision rule.
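A sketch of the minimum-distance rule of Eq. (2); the mean vectors below are illustrative, and equal priors with covariance $\sigma_n^2 I$ are assumed as in the problem.

```python
# Sketch: minimum-distance decision rule from Eq. (2).
import numpy as np

def min_distance_decision(R, means):
    """Choose the hypothesis whose mean is closest to R in Euclidean norm."""
    dists = [np.sum((R - m)**2) for m in means]
    return int(np.argmin(dists))

means = [np.array([0.0, 0.0]), np.array([1.0, 1.0])]
print(min_distance_decision(np.array([0.9, 0.7]), means))   # -> 1
```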
(ANS)
Prob. 2.6.4
qB  xT Bx
(1)
1.
From linear algebra and the lecture notes, we know that the matrix $B$ can be written as

$$B = M^T \Lambda_B M, \tag{2}$$

where the rows of $M$ contain the eigenvectors of $B$ and the diagonal elements of $\Lambda_B$ are the corresponding eigenvalues of $B$. Hence, we can write (1) again as

$$q_B = x^T M^T \Lambda_B M x = (Mx)^T \Lambda_B (Mx). \tag{3}$$
Next, we define

$$z = Mx. \tag{4}$$
Since $x \sim N(0, I)$ is a multivariate Gaussian random vector, it follows that $z$ is multivariate Gaussian with mean vector and covariance matrix given by

$$E[z] = M\,E[x] = 0,$$

and

$$\mathrm{cov}(z) = E[zz^T] = E\left[(Mx)(Mx)^T\right] = E\left[Mxx^T M^T\right] = M\,E[xx^T]\,M^T = M I M^T = MM^T = I, \tag{5}$$

since $M^{-1} = M^T$. Hence $z \sim N(0, I)$.
We know that

$$M_{q_B}(jv) = E\left[e^{jv q_B}\right] = E\left[\exp\left(jv\, x^T B x\right)\right].$$
Using the result in (5), we have

$$M_{q_B}(jv) = E\left[\exp\left(jv\, x^T B x\right)\right] = E\left[\exp\left(jv\, z^T \Lambda_B z\right)\right] = E\left[\exp\left(jv \sum_{i=1}^{N} z_i^2 \lambda_{B,i}\right)\right] = E\left[\prod_{i=1}^{N} \exp\left(jv\, z_i^2 \lambda_{B,i}\right)\right]. \tag{6}$$
Since $z \sim N(0, I)$ implies that $z_i$ and $z_j$ are statistically independent for $i \ne j$, Eq. (6) becomes

$$M_{q_B}(jv) = \prod_{i=1}^{N} E\left[\exp\left(jv\, z_i^2 \lambda_{B,i}\right)\right]. \tag{7}$$
Let $y_i = z_i\sqrt{\lambda_{B,i}}$; then $y \sim N(0, \Lambda_B)$. Hence, Eq. (7) reduces to

$$M_{q_B}(jv) = \prod_{i=1}^{N} E\left[\exp\left(jv\, y_i^2\right)\right] = \prod_{i=1}^{N}\left(1 - 2jv\,\lambda_{B,i}\right)^{-1/2}. \tag{8}$$
(QED)
2. If all the eigenvalues are equal ($\lambda_{B,i} = \lambda_B$ for all $i$), Eq. (8) becomes

$$M_{q_B}(jv) = \left(1 - 2jv\,\lambda_B\right)^{-N/2}. \tag{9}$$

From the lecture notes, we know that

$$\left(1 - 2jv\,\lambda_B\right)^{-N/2} \;\longleftrightarrow\; p_q(q) = \frac{1}{\Gamma(N/2)}\left(\frac{1}{2\lambda_B}\right)^{N/2} q^{N/2-1}\, e^{-q/2\lambda_B}\, I(q \ge 0), \tag{10}$$

i.e., $q_B$ (scaled by $1/\lambda_B$) is chi-square distributed with $N$ degrees of freedom.
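A sketch checking Eq. (10): with $B = \lambda_B I$ (all eigenvalues equal; the values of `N` and `lam_B` are illustrative), $q_B/\lambda_B$ should match a chi-square CDF with $N$ degrees of freedom.

```python
# Sketch: verify the scaled chi-square claim of Eq. (10) by simulation.
import numpy as np
from scipy.stats import chi2

rng = np.random.default_rng(1)
N, lam_B = 4, 2.0
x = rng.normal(size=(200_000, N))
q = lam_B * np.sum(x**2, axis=1)         # x^T B x with B = lam_B * I
print(np.mean(q <= 5.0), chi2.cdf(5.0 / lam_B, df=N))   # empirical vs. analytic
```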
3. Here the eigenvalues occur in equal pairs ($\lambda_{B,2i-1} = \lambda_{B,2i}$), so we have

$$M_{q_B}(jv) = \prod_{i=1}^{N/2}\left(1 - 2jv\,\lambda_{B,2i}\right)^{-1} = \sum_{i=1}^{N/2}\frac{A_i}{1 - 2jv\,\lambda_{B,2i}}, \tag{11}$$
where

$$A_i = \prod_{j=1,\; j \ne i}^{N/2} \frac{1}{1 - \lambda_{B,2j}/\lambda_{B,2i}}.$$
Hence, the marginal PDF of $q$ is written as

$$p_q(Q) = \sum_{i=1}^{N/2}\frac{A_i}{2\lambda_{B,2i}}\, e^{-Q/2\lambda_{B,2i}}\, I(Q \ge 0). \tag{12}$$
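A sketch evaluating Eq. (12) against simulation, assuming pairwise-equal eigenvalues so that $\lambda_{B,2i-1} = \lambda_{B,2i}$; the two distinct eigenvalues below (hence $N = 4$) are illustrative.

```python
# Sketch: Eq. (12) as a weighted mixture of exponentials, checked
# against simulated q_B with N = 4 and paired eigenvalues.
import numpy as np

lam = np.array([1.0, 3.0])                    # lambda_{B,2i}, i = 1, 2
A = np.array([1 / (1 - lam[1] / lam[0]),      # residues A_i from the
              1 / (1 - lam[0] / lam[1])])     # partial-fraction expansion

def p_q(Q):
    """Eq. (12): sum_i A_i/(2 lam_i) exp(-Q/(2 lam_i)) for Q >= 0."""
    return np.sum(A / (2 * lam) * np.exp(-Q / (2 * lam)))

rng = np.random.default_rng(2)
z = rng.normal(size=(200_000, 4))
q = lam[0] * (z[:, 0]**2 + z[:, 1]**2) + lam[1] * (z[:, 2]**2 + z[:, 3]**2)
Q0, h = 4.0, 0.1                              # kernel density estimate near Q0
print(np.mean(np.abs(q - Q0) < h) / (2 * h), p_q(Q0))
```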