On the optimality of likelihood ratio test for local sensor decision
rules in the presence of non-ideal channels
Biao Chen†
and
Peter K. Willett
June 30, 2003
Abstract
Distributed detection has been intensively studied in the past. In this correspondence, we
consider the design of local decision rules in the presence of non-ideal transmission channels
between the sensors and the fusion center. Under the conditional independence assumption
among multiple sensor observations, we show that the optimal local decisions that minimize
the error probability at the fusion center amount to a likelihood ratio test given a particular
constraint on the fusion rule. This constraint turns out to be quite general and is easily satisfied
for most sensible fusion rules. A design example using a parallel sensor fusion structure with
binary symmetric channels between local sensors and the fusion center is given to illustrate
the usefulness of the result in obtaining optimal thresholds for local sensor observations. Incorporating the transmission channel into sensor system design in this way may find application in the emerging field of wireless sensor networks.
† Corresponding author. Biao Chen is with Syracuse University, Department of EECS, 121 Link Hall, Syracuse,
NY 13244. Phone/Fax: (315)443-3332/2583. Email: [email protected]. Peter K. Willett is with the University of
Connecticut, Department of Electrical and Computer Engineering, 317 Fairfield Rd. U-1157, Storrs, CT 06269-1157.
Phone/Fax: (860)486-2195/2447. Email: [email protected].
1 Introduction
While the study of distributed and decentralized decision making can be traced back to the early 1960s in the context of team decision problems (see, e.g., [1]), the effort intensified significantly after the publication of [2]. In [2], Tenney and Sandell formulated the distributed detection problem using
a Bayesian setting and showed that, for a two-sensor case and under the conditional independence
assumption, the optimal local sensor decisions are likelihood ratio tests (LRT). This work was later
generalized to multiple sensors by Reibman and Nolte [3] and by Hoballah and Varshney [4]. It was
shown in [4] that under the Bayesian criterion and in the presence of a fusion center, the optimal
local sensor decision rule for a binary hypothesis testing problem is still an LRT. Similarly, under the
Neyman-Pearson (NP) criterion, the optimality of local LRT has been established in [5].
Notice that all of the above results require a conditional independence assumption among local
sensor observations. Without this assumption, obtaining optimal local decision rules is much more
complicated even in the simplest case [6]. Another implicit assumption of the above work is that
local decisions are accessible at the fusion center. While this assumption may be taken for granted for applications with low-rate transmission and relaxed cost/energy constraints, it is clear that such idealization would lead to a suboptimal, independent (divide-and-conquer) design for severely resource-constrained sensor networks with stringent delay requirements. In [7, 8], a coherent
approach to fusion rule design that integrates transmission into processing was adopted in the
context of wireless sensor networks operating in a fading environment. The problem we treat in this correspondence can be considered its dual: in the presence of non-ideal transmission channels between the local sensor outputs and the fusion center, how should one design the local sensor decision rules?
The design of local decision rules in the presence of possible channel errors has been addressed
under the NP criterion [9, 10]. There, optimality of the LRT was established under a simple binary
symmetric channel (BSC) model between each sensor and the fusion center. Separately in [11], a
multiple access channel between local sensors and the fusion center with discrete channel outputs
was assumed, and optimal quantization points (in the person-by-person sense) were obtained for the original observations through a numerical procedure. In this correspondence, we assume a general
vector channel model from the local sensors to the fusion center and investigate the optimality of the
LRT for local sensor decisions. We restrict ourselves to binary local sensor outputs, denoted by Uk ,
and assume conditional independence among sensor observations. The Bayesian criterion is adopted
that minimizes the error probability at the fusion center. The main result of the correspondence is
summarized in the following theorem.
Theorem 1 Assume that the local observations, the $X_k$'s, are conditionally independent and that the channels between the sensors and the fusion center are characterized by
$$P(Y_1,\cdots,Y_K|U_1,\cdots,U_K) = \prod_{k=1}^{K} p(Y_k|U_k). \qquad (1)$$
Assume further that the fusion rule and the $k$th local decision rule satisfy
$$P(u_0=1|\mathbf{y}^k, u_k=1) - P(u_0=1|\mathbf{y}^k, u_k=0) \ge 0, \qquad (2)$$
$$P(u_0=0|\mathbf{y}^k, u_k=0) - P(u_0=0|\mathbf{y}^k, u_k=1) \ge 0, \qquad (3)$$
where
$$\mathbf{y}^k = [y_1,\cdots,y_{k-1},y_{k+1},\cdots,y_K].$$
Then the optimal local decision rule for the $k$th sensor amounts to the following LRT:
$$P(u_k=1|x_k) = \begin{cases}
1 & \text{if } \dfrac{p(x_k|H_1)}{p(x_k|H_0)} > \dfrac{\pi_0 \displaystyle\int_{\mathbf{y}^k}\big[P(u_0=1|\mathbf{y}^k,u_k=1) - P(u_0=1|\mathbf{y}^k,u_k=0)\big]\,p(\mathbf{y}^k|H_0)\,d\mathbf{y}^k}{\pi_1 \displaystyle\int_{\mathbf{y}^k}\big[P(u_0=0|\mathbf{y}^k,u_k=0) - P(u_0=0|\mathbf{y}^k,u_k=1)\big]\,p(\mathbf{y}^k|H_1)\,d\mathbf{y}^k}, \\[3ex]
0 & \text{if } \dfrac{p(x_k|H_1)}{p(x_k|H_0)} < \dfrac{\pi_0 \displaystyle\int_{\mathbf{y}^k}\big[P(u_0=1|\mathbf{y}^k,u_k=1) - P(u_0=1|\mathbf{y}^k,u_k=0)\big]\,p(\mathbf{y}^k|H_0)\,d\mathbf{y}^k}{\pi_1 \displaystyle\int_{\mathbf{y}^k}\big[P(u_0=0|\mathbf{y}^k,u_k=0) - P(u_0=0|\mathbf{y}^k,u_k=1)\big]\,p(\mathbf{y}^k|H_1)\,d\mathbf{y}^k}.
\end{cases}$$
As it turns out, the conditions in (2) and (3) are easily satisfied, as shown using an example in Section 4. Notice that, in addition to using the Bayesian criterion instead of the NP criterion, our result applies to any channel model satisfying (1) and hence can be considered more general than that of [9, 10]. The organization of this correspondence is as follows. In the next section, we introduce the problem formulation. In Section 3, we prove Theorem 1. A simple design example is given in Section 4, and we conclude in Section 5.
2 Problem Formulation
Consider a hypothesis testing problem with distributed sensors collecting observations that are conditionally independent, i.e.,
$$p(X_1,\cdots,X_K|H_i) = \prod_{k=1}^{K} p(X_k|H_i), \qquad (4)$$
where i = 0, 1, the $X_k$'s are the local sensor observations, and K is the total number of sensors. We
further assume that the priors on the two hypotheses are given by
$$\pi_0 = P(H_0), \qquad \pi_1 = P(H_1) = 1 - \pi_0.$$
Each local sensor makes a binary decision Uk based on its observation Xk :
$$U_k = \gamma_k(X_k).$$
Each Uk is then sent to a channel characterized by p(Yk |Uk ) where Yk is observed at the fusion
center. Hence we have parallel independent channels that connect the sensors to the fusion center,
i.e.,
$$P(Y_1,\cdots,Y_K|U_1,\cdots,U_K) = \prod_{k=1}^{K} p(Y_k|U_k). \qquad (5)$$
The fusion center is assumed to implement the optimal fusion rule based on the channel output
vector Y = [Y1 , · · · , YK ]T :
$$U_0 = \gamma_0(Y_1,\cdots,Y_K).$$
Thus a decision error happens if U0 differs from the true hypothesis. Prominent examples of the
channel p(Yk |Uk ) include the binary symmetric channel [10] and the fading channel model [7]. A
simple diagram illustrating the above fusion network is given in Fig. 1.
Figure 1: A block diagram for a wireless sensor network tasked with binary hypothesis testing in the presence of non-ideal transmission channels (sensor $k$ maps its observation $X_k$ to a binary decision $U_k$; the channel $p(Y_k|U_k)$ delivers $Y_k$ to the fusion center, which forms the final decision $U_0$ on $H_0/H_1$).
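To make the structure in Fig. 1 concrete, the following minimal Monte Carlo sketch simulates the parallel fusion network for the Gaussian-observation, binary-symmetric-channel setting used later in Section 4. All numerical values (number of trials, thresholds, the OR fusion rule) are illustrative placeholders rather than the optimized quantities derived in this correspondence.

```python
import numpy as np

rng = np.random.default_rng(0)

K, N = 2, 200_000        # number of sensors and Monte Carlo trials (illustrative)
pi0, alpha = 0.5, 0.1    # prior P(H0) and BSC crossover probability
S, sigma = 1.0, 1.0      # known signal and noise standard deviation
t = np.array([0.5, 0.5]) # local observation thresholds (placeholders)

# True hypothesis (H = 1 with probability 1 - pi0) and observations X_k = S*H + N_k
H = (rng.random(N) >= pi0).astype(int)
X = S * H[:, None] + sigma * rng.standard_normal((N, K))

# Local binary decisions U_k = gamma_k(X_k), here simple threshold tests
U = (X > t).astype(int)

# Non-ideal channels: each U_k is flipped independently with probability alpha
Y = U ^ (rng.random((N, K)) < alpha).astype(int)

# A fusion rule at the fusion center (an OR rule, purely for illustration)
U0 = (Y.sum(axis=1) >= 1).astype(int)

print("empirical error probability:", np.mean(U0 != H))
```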
The goal of this correspondence is to derive the form of the optimal local decision rules that minimize the total error probability at the fusion center. We use person-by-person optimization (PBPO), which optimizes the decision rule γk (Xk ) for the kth local sensor given fixed decision rules for all other sensors as well as a fixed fusion rule. The result is therefore a necessary (but not necessarily sufficient) condition for optimality.
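Schematically, PBPO cycles through the sensors, re-optimizing one decision rule at a time while everything else is held fixed, until no rule changes. The sketch below only illustrates this control flow; optimize_one is a hypothetical placeholder for the per-sensor optimization derived in the next section.

```python
def pbpo(rules, fusion_rule, optimize_one, max_iter=100):
    """Person-by-person optimization (sketch).

    rules        : list of per-sensor decision rules (any comparable representation)
    fusion_rule  : the (fixed) fusion rule used at the fusion center
    optimize_one : hypothetical callback returning the best rule for sensor k
                   given the other rules and the fusion rule
    """
    for _ in range(max_iter):
        changed = False
        for k in range(len(rules)):
            new_rule = optimize_one(k, rules, fusion_rule)
            changed = changed or (new_rule != rules[k])
            rules[k] = new_rule
        if not changed:   # no sensor rule changed: a person-by-person optimum
            break
    return rules
```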
3 Optimality of LRT for Local Sensor Decisions
Given the above formulation, we can rewrite the error probability using the hierarchical probability
structure specified in Fig. 1 as
$$\begin{aligned}
P_e^0 &= \pi_0 P(u_0=1|H_0) + \pi_1 P(u_0=0|H_1) \\
&= \pi_0 \int_{\mathbf{y}} P(u_0=1,\mathbf{y}|H_0)\,d\mathbf{y} + \pi_1 \int_{\mathbf{y}} P(u_0=0,\mathbf{y}|H_1)\,d\mathbf{y} \\
&= \pi_0 \int_{\mathbf{y}} P(u_0=1|\mathbf{y})\,p(\mathbf{y}|H_0)\,d\mathbf{y} + \pi_1 \int_{\mathbf{y}} P(u_0=0|\mathbf{y})\,p(\mathbf{y}|H_1)\,d\mathbf{y} \\
&= \pi_0 \int_{\mathbf{y}} P(u_0=1|\mathbf{y}) \sum_{\mathbf{u}} P(\mathbf{y},\mathbf{u}|H_0)\,d\mathbf{y} + \pi_1 \int_{\mathbf{y}} P(u_0=0|\mathbf{y}) \sum_{\mathbf{u}} P(\mathbf{y},\mathbf{u}|H_1)\,d\mathbf{y} \\
&\stackrel{(a)}{=} \pi_0 \int_{\mathbf{y}} P(u_0=1|\mathbf{y}) \sum_{\mathbf{u}} p(\mathbf{y}|\mathbf{u})\,P(\mathbf{u}|H_0)\,d\mathbf{y} + \pi_1 \int_{\mathbf{y}} P(u_0=0|\mathbf{y}) \sum_{\mathbf{u}} p(\mathbf{y}|\mathbf{u})\,P(\mathbf{u}|H_1)\,d\mathbf{y} \\
&\stackrel{(b)}{=} \pi_0 \int_{\mathbf{y}} P(u_0=1|\mathbf{y}) \sum_{\mathbf{u}} p(\mathbf{y}|\mathbf{u}) \int_{\mathbf{x}} P(\mathbf{u}|\mathbf{x})\,p(\mathbf{x}|H_0)\,d\mathbf{x}\,d\mathbf{y} + \pi_1 \int_{\mathbf{y}} P(u_0=0|\mathbf{y}) \sum_{\mathbf{u}} p(\mathbf{y}|\mathbf{u}) \int_{\mathbf{x}} P(\mathbf{u}|\mathbf{x})\,p(\mathbf{x}|H_1)\,d\mathbf{x}\,d\mathbf{y} \\
&\stackrel{(c)}{=} \pi_0 \int_{\mathbf{y}} P(u_0=1|\mathbf{y}) \sum_{\mathbf{u}} p(\mathbf{y}|\mathbf{u}) \int_{\mathbf{x}} P(u_k|x_k)\,p(x_k|H_0)\,P(\mathbf{u}^k|\mathbf{x}^k)\,p(\mathbf{x}^k|H_0)\,d\mathbf{x}\,d\mathbf{y} \\
&\qquad + \pi_1 \int_{\mathbf{y}} P(u_0=0|\mathbf{y}) \sum_{\mathbf{u}} p(\mathbf{y}|\mathbf{u}) \int_{\mathbf{x}} P(u_k|x_k)\,p(x_k|H_1)\,P(\mathbf{u}^k|\mathbf{x}^k)\,p(\mathbf{x}^k|H_1)\,d\mathbf{x}\,d\mathbf{y},
\end{aligned}$$
where we used the facts $P(\mathbf{y}|\mathbf{u}, H_i) = P(\mathbf{y}|\mathbf{u})$ in (a) and $P(\mathbf{u}|\mathbf{x}, H_i) = P(\mathbf{u}|\mathbf{x})$ in (b), together with the conditional independence assumption (4) in (c). Define
$$\mathbf{u}^k = [u_1,\cdots,u_{k-1},u_{k+1},\cdots,u_K], \qquad \mathbf{x}^k = [x_1,\cdots,x_{k-1},x_{k+1},\cdots,x_K],$$
$$\mathbf{u}^{k1} = [u_1,\cdots,u_{k-1},u_k=1,u_{k+1},\cdots,u_K], \qquad \mathbf{u}^{k0} = [u_1,\cdots,u_{k-1},u_k=0,u_{k+1},\cdots,u_K];$$
we can then expand the error probability with respect to the decision rule of the $k$th sensor. Continuing,
$$\begin{aligned}
P_e^0 &= \pi_0 \int_{x_k} \sum_{u_k} P(u_k|x_k)\,p(x_k|H_0) \int_{\mathbf{y}} P(u_0=1|\mathbf{y}) \sum_{\mathbf{u}^k} p(\mathbf{y}|\mathbf{u}) \int_{\mathbf{x}^k} P(\mathbf{u}^k|\mathbf{x}^k)\,p(\mathbf{x}^k|H_0)\,d\mathbf{x}^k\,d\mathbf{y}\,dx_k \\
&\qquad + \pi_1 \int_{x_k} \sum_{u_k} P(u_k|x_k)\,p(x_k|H_1) \int_{\mathbf{y}} P(u_0=0|\mathbf{y}) \sum_{\mathbf{u}^k} p(\mathbf{y}|\mathbf{u}) \int_{\mathbf{x}^k} P(\mathbf{u}^k|\mathbf{x}^k)\,p(\mathbf{x}^k|H_1)\,d\mathbf{x}^k\,d\mathbf{y}\,dx_k \\
&= \int_{x_k} \sum_{u_k} P(u_k|x_k)\bigg[\pi_0\,p(x_k|H_0)\int_{\mathbf{y}} P(u_0=1|\mathbf{y})\sum_{\mathbf{u}^k} p(\mathbf{y}|\mathbf{u})\,P(\mathbf{u}^k|H_0)\,d\mathbf{y} \\
&\qquad\qquad + \pi_1\,p(x_k|H_1)\int_{\mathbf{y}} P(u_0=0|\mathbf{y})\sum_{\mathbf{u}^k} p(\mathbf{y}|\mathbf{u})\,P(\mathbf{u}^k|H_1)\,d\mathbf{y}\bigg]\,dx_k \\
&= \int_{x_k} \Bigg\{ P(u_k=1|x_k)\bigg[\pi_0\,p(x_k|H_0)\int_{\mathbf{y}} P(u_0=1|\mathbf{y})\sum_{\mathbf{u}^k} p(\mathbf{y}|\mathbf{u}^{k1})\,P(\mathbf{u}^k|H_0)\,d\mathbf{y} \\
&\qquad\qquad + \pi_1\,p(x_k|H_1)\int_{\mathbf{y}} P(u_0=0|\mathbf{y})\sum_{\mathbf{u}^k} p(\mathbf{y}|\mathbf{u}^{k1})\,P(\mathbf{u}^k|H_1)\,d\mathbf{y}\bigg] \\
&\qquad + P(u_k=0|x_k)\bigg[\pi_0\,p(x_k|H_0)\int_{\mathbf{y}} P(u_0=1|\mathbf{y})\sum_{\mathbf{u}^k} p(\mathbf{y}|\mathbf{u}^{k0})\,P(\mathbf{u}^k|H_0)\,d\mathbf{y} \\
&\qquad\qquad + \pi_1\,p(x_k|H_1)\int_{\mathbf{y}} P(u_0=0|\mathbf{y})\sum_{\mathbf{u}^k} p(\mathbf{y}|\mathbf{u}^{k0})\,P(\mathbf{u}^k|H_1)\,d\mathbf{y}\bigg]\Bigg\}\,dx_k.
\end{aligned}$$
Using the fact that $P(u_k=0|x_k) = 1 - P(u_k=1|x_k)$, we have
$$\begin{aligned}
P_e^0 &= \int_{x_k} \Bigg\{ P(u_k=1|x_k)\bigg[\pi_0\,p(x_k|H_0)\int_{\mathbf{y}} P(u_0=1|\mathbf{y})\sum_{\mathbf{u}^k} p(\mathbf{y}|\mathbf{u}^{k1})\,P(\mathbf{u}^k|H_0)\,d\mathbf{y} \\
&\qquad\qquad + \pi_1\,p(x_k|H_1)\int_{\mathbf{y}} P(u_0=0|\mathbf{y})\sum_{\mathbf{u}^k} p(\mathbf{y}|\mathbf{u}^{k1})\,P(\mathbf{u}^k|H_1)\,d\mathbf{y}\bigg] \\
&\qquad + \big(1 - P(u_k=1|x_k)\big)\bigg[\pi_0\,p(x_k|H_0)\int_{\mathbf{y}} P(u_0=1|\mathbf{y})\sum_{\mathbf{u}^k} p(\mathbf{y}|\mathbf{u}^{k0})\,P(\mathbf{u}^k|H_0)\,d\mathbf{y} \\
&\qquad\qquad + \pi_1\,p(x_k|H_1)\int_{\mathbf{y}} P(u_0=0|\mathbf{y})\sum_{\mathbf{u}^k} p(\mathbf{y}|\mathbf{u}^{k0})\,P(\mathbf{u}^k|H_1)\,d\mathbf{y}\bigg]\Bigg\}\,dx_k \\
&= \int_{x_k} P(u_k=1|x_k)\Bigg\{\pi_0\,p(x_k|H_0)\int_{\mathbf{y}} P(u_0=1|\mathbf{y})\bigg[\sum_{\mathbf{u}^k} p(\mathbf{y}|\mathbf{u}^{k1})\,P(\mathbf{u}^k|H_0) - \sum_{\mathbf{u}^k} p(\mathbf{y}|\mathbf{u}^{k0})\,P(\mathbf{u}^k|H_0)\bigg]d\mathbf{y} \\
&\qquad - \pi_1\,p(x_k|H_1)\int_{\mathbf{y}} P(u_0=0|\mathbf{y})\bigg[\sum_{\mathbf{u}^k} p(\mathbf{y}|\mathbf{u}^{k0})\,P(\mathbf{u}^k|H_1) - \sum_{\mathbf{u}^k} p(\mathbf{y}|\mathbf{u}^{k1})\,P(\mathbf{u}^k|H_1)\bigg]d\mathbf{y}\Bigg\}\,dx_k + C, \qquad (6)
\end{aligned}$$
where
$$C = \int_{x_k}\bigg[\pi_0\,p(x_k|H_0)\int_{\mathbf{y}} P(u_0=1|\mathbf{y})\sum_{\mathbf{u}^k} p(\mathbf{y}|\mathbf{u}^{k0})\,P(\mathbf{u}^k|H_0)\,d\mathbf{y} + \pi_1\,p(x_k|H_1)\int_{\mathbf{y}} P(u_0=0|\mathbf{y})\sum_{\mathbf{u}^k} p(\mathbf{y}|\mathbf{u}^{k0})\,P(\mathbf{u}^k|H_1)\,d\mathbf{y}\bigg]dx_k.$$
Thus $C$ is a constant with respect to $U_k$. We further define
$$A = \int_{\mathbf{y}} P(u_0=1|\mathbf{y})\bigg[\sum_{\mathbf{u}^k} p(\mathbf{y}|\mathbf{u}^{k1})\,P(\mathbf{u}^k|H_0) - \sum_{\mathbf{u}^k} p(\mathbf{y}|\mathbf{u}^{k0})\,P(\mathbf{u}^k|H_0)\bigg]d\mathbf{y},$$
$$B = \int_{\mathbf{y}} P(u_0=0|\mathbf{y})\bigg[\sum_{\mathbf{u}^k} p(\mathbf{y}|\mathbf{u}^{k0})\,P(\mathbf{u}^k|H_1) - \sum_{\mathbf{u}^k} p(\mathbf{y}|\mathbf{u}^{k1})\,P(\mathbf{u}^k|H_1)\bigg]d\mathbf{y}.$$
To minimize $P_e^0$, note that (6) is linear in $P(u_k=1|x_k)$; it is therefore minimized by setting $P(u_k=1|x_k)$ to 0 or 1 pointwise according to the sign of the bracketed term. The optimal decision rule for the $k$th sensor is thus
$$P(u_k=1|x_k) = \begin{cases} 0 & \text{if } \pi_0\,p(x_k|H_0)\,A > \pi_1\,p(x_k|H_1)\,B, \\ 1 & \text{otherwise.} \end{cases}$$
If, further,
$$A > 0, \qquad (7)$$
$$B > 0, \qquad (8)$$
then the local decision rule amounts to a likelihood ratio test with threshold $\pi_0 A/(\pi_1 B)$, as in Theorem 1. Let us now turn our attention to the conditions specified in (7) and (8). Equation (7) can be simplified as follows:
$$\begin{aligned}
A &= \int_{\mathbf{y}} P(u_0=1|\mathbf{y})\bigg[\sum_{\mathbf{u}^k} p(\mathbf{y}|\mathbf{u}^{k1})\,P(\mathbf{u}^k|H_0) - \sum_{\mathbf{u}^k} p(\mathbf{y}|\mathbf{u}^{k0})\,P(\mathbf{u}^k|H_0)\bigg]d\mathbf{y} \\
&= \int_{\mathbf{y}^k}\int_{y_k} P(u_0=1|y_k,\mathbf{y}^k)\bigg[\sum_{\mathbf{u}^k} p(y_k,\mathbf{y}^k|\mathbf{u}^k,u_k=1)\,P(\mathbf{u}^k|H_0) - \sum_{\mathbf{u}^k} p(y_k,\mathbf{y}^k|\mathbf{u}^k,u_k=0)\,P(\mathbf{u}^k|H_0)\bigg]dy_k\,d\mathbf{y}^k \\
&= \int_{\mathbf{y}^k}\int_{y_k} P(u_0=1|y_k,\mathbf{y}^k)\bigg[\sum_{\mathbf{u}^k} p(\mathbf{y}^k|\mathbf{u}^k)\,p(y_k|u_k=1)\,P(\mathbf{u}^k|H_0) - \sum_{\mathbf{u}^k} p(\mathbf{y}^k|\mathbf{u}^k)\,p(y_k|u_k=0)\,P(\mathbf{u}^k|H_0)\bigg]dy_k\,d\mathbf{y}^k \\
&= \int_{\mathbf{y}^k}\int_{y_k} P(u_0=1|y_k,\mathbf{y}^k)\Big[p(y_k|u_k=1)\,p(\mathbf{y}^k|H_0) - p(y_k|u_k=0)\,p(\mathbf{y}^k|H_0)\Big]dy_k\,d\mathbf{y}^k \\
&= \int_{\mathbf{y}^k}\int_{y_k} P(u_0=1|y_k,\mathbf{y}^k,u_k=1)\,p(y_k|u_k=1,\mathbf{y}^k)\,p(\mathbf{y}^k|H_0)\,dy_k\,d\mathbf{y}^k \\
&\qquad - \int_{\mathbf{y}^k}\int_{y_k} P(u_0=1|y_k,\mathbf{y}^k,u_k=0)\,p(y_k|u_k=0,\mathbf{y}^k)\,p(\mathbf{y}^k|H_0)\,dy_k\,d\mathbf{y}^k \\
&= \int_{\mathbf{y}^k}\int_{y_k} P(u_0=1,y_k|\mathbf{y}^k,u_k=1)\,p(\mathbf{y}^k|H_0)\,dy_k\,d\mathbf{y}^k - \int_{\mathbf{y}^k}\int_{y_k} P(u_0=1,y_k|\mathbf{y}^k,u_k=0)\,p(\mathbf{y}^k|H_0)\,dy_k\,d\mathbf{y}^k \\
&= \int_{\mathbf{y}^k} P(u_0=1|\mathbf{y}^k,u_k=1)\,p(\mathbf{y}^k|H_0)\,d\mathbf{y}^k - \int_{\mathbf{y}^k} P(u_0=1|\mathbf{y}^k,u_k=0)\,p(\mathbf{y}^k|H_0)\,d\mathbf{y}^k \\
&= \int_{\mathbf{y}^k} \Big[P(u_0=1|\mathbf{y}^k,u_k=1) - P(u_0=1|\mathbf{y}^k,u_k=0)\Big]\,p(\mathbf{y}^k|H_0)\,d\mathbf{y}^k.
\end{aligned}$$
Clearly, $A > 0$ if
$$P(u_0=1|\mathbf{y}^k, u_k=1) - P(u_0=1|\mathbf{y}^k, u_k=0) \ge 0.$$
Similarly, we can show that (8) holds if
$$P(u_0=0|\mathbf{y}^k, u_k=0) - P(u_0=0|\mathbf{y}^k, u_k=1) \ge 0.$$
Thus Theorem 1 is proved. Notice that the above two conditions are quite easily satisfied for any reasonable local decision and fusion rule pair, as will be seen in the next section.
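As a quick sanity check (not part of the original development), conditions (2) and (3) can be verified numerically when the channel outputs are discrete. The sketch below assumes binary channel outputs and enumerates every realization of $\mathbf{y}^k$ for every sensor; the three-sensor majority rule and BSC channel used at the bottom are hypothetical choices.

```python
from itertools import product

def check_conditions(fusion_rule, p_y_given_u, K):
    """Check conditions (2) and (3) for every sensor k and every y^k,
    assuming binary channel outputs y_k in {0, 1}.

    fusion_rule(y)    : tuple of K channel outputs -> u0 in {0, 1}
    p_y_given_u(y, u) : per-sensor channel transition probability P(y_k | u_k)
    """
    for k in range(K):
        for y_rest in product((0, 1), repeat=K - 1):   # all realizations of y^k
            def p_u0_eq_1(u_k):
                # P(u0 = 1 | y^k, u_k) = sum over y_k of 1{u0(y) = 1} P(y_k | u_k)
                return sum(
                    fusion_rule(y_rest[:k] + (y_k,) + y_rest[k:]) * p_y_given_u(y_k, u_k)
                    for y_k in (0, 1)
                )
            cond2 = p_u0_eq_1(1) - p_u0_eq_1(0) >= 0                # condition (2)
            cond3 = (1 - p_u0_eq_1(0)) - (1 - p_u0_eq_1(1)) >= 0    # condition (3)
            if not (cond2 and cond3):
                return False
    return True

# Hypothetical example: a three-sensor majority rule with BSCs of crossover 0.1
alpha = 0.1
bsc = lambda y, u: 1 - alpha if y == u else alpha
majority = lambda y: int(sum(y) >= 2)
print(check_conditions(majority, bsc, K=3))   # True: (2) and (3) are satisfied
```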
4 Example
In this section, we use a two-sensor example to demonstrate how to obtain the optimal local thresholds. Consider the detection of a known signal S in additive Gaussian noise that is independent and identically distributed (i.i.d.) across the two sensors, i.e.,
$$X_k = S + N_k$$
for k = 1, 2, with $N_1$ and $N_2$ being i.i.d. $\mathcal{N}(0,\sigma^2)$. Without loss of generality, we assume $S = 1$ and $\sigma^2 = 1$. Each sensor makes a binary decision based on its observation $X_k$:
$$U_k = \gamma_k(X_k).$$
Uk is then transmitted through a BSC with identical crossover probability α for both sensors.
In order to use the developed result to find the optimal threshold, we first study the conditions
specified in (2) and (3). Using the Gaussian assumption and BSC channel model, we have
$$\begin{aligned}
A &= P(u_0=1|y_2, u_1=1) - P(u_0=1|y_2, u_1=0) \\
&= \sum_{y_1=0}^{1}\Big[P(u_0=1, y_1|y_2, u_1=1) - P(u_0=1, y_1|y_2, u_1=0)\Big] \\
&= \sum_{y_1=0}^{1}\Big[P(u_0=1|y_1, y_2, u_1=1)\,P(y_1|y_2, u_1=1) - P(u_0=1|y_1, y_2, u_1=0)\,P(y_1|y_2, u_1=0)\Big] \\
&= \sum_{y_1=0}^{1}\Big[P(u_0=1|y_1, y_2)\,P(y_1|u_1=1) - P(u_0=1|y_1, y_2)\,P(y_1|u_1=0)\Big] \\
&= \sum_{y_1=0}^{1} P(u_0=1|y_1, y_2)\Big[P(y_1|u_1=1) - P(y_1|u_1=0)\Big] \\
&= P(u_0=1|y_1=0, y_2)\Big[P(y_1=0|u_1=1) - P(y_1=0|u_1=0)\Big] + P(u_0=1|y_1=1, y_2)\Big[P(y_1=1|u_1=1) - P(y_1=1|u_1=0)\Big] \\
&= P(u_0=1|y_1=0, y_2)\big[\alpha - (1-\alpha)\big] + P(u_0=1|y_1=1, y_2)\big[(1-\alpha) - \alpha\big] \\
&= \big[P(u_0=1|y_1=1, y_2) - P(u_0=1|y_1=0, y_2)\big](1 - 2\alpha).
\end{aligned}$$
Thus, if 1) $\alpha < 0.5$ and 2) $P(u_0=1|y_1=1, y_2) > P(u_0=1|y_1=0, y_2)$, then $A > 0$. Notice that condition 2) amounts to using a monotone fusion rule [12]. On the other hand, if $\alpha > 0.5$, one can easily show that the optimal fusion rule should be 'reverse' monotone and we still have $A > 0$. A similar argument can be made for condition (3). Thus, for this particular example, the optimal local decision rule is always an LRT no matter what the parameters are.
Next we demonstrate how to obtain the optimal local thresholds for the LRT. Using sensor 1
as an illustration, we have
$$P(u_1=1|x_1) = \begin{cases} 1 & \text{if } \dfrac{p(x_1|H_1)}{p(x_1|H_0)} > \tau_1, \\[1.5ex] 0 & \text{if } \dfrac{p(x_1|H_1)}{p(x_1|H_0)} < \tau_1, \end{cases}$$
where
$$\tau_1 = \frac{\pi_0 \sum_{y_2}\big[P(u_0=1|y_2, u_1=1) - P(u_0=1|y_2, u_1=0)\big]\,p(y_2|H_0)}{\pi_1 \sum_{y_2}\big[P(u_0=0|y_2, u_1=0) - P(u_0=0|y_2, u_1=1)\big]\,p(y_2|H_1)}.$$
To proceed, we first find the optimal fusion rule that minimizes error probability given fixed local
decision rules. This is clearly an LR test with threshold π0 /π1 . The likelihood function can be
computed in a straightforward manner
$$P(Y_1, Y_2|H_i) = \sum_{U_1, U_2} P(Y_1, Y_2, U_1, U_2|H_i) = \sum_{U_1, U_2} P(Y_1, Y_2|U_1, U_2, H_i)\,P(U_1, U_2|H_i) = \bigg[\sum_{U_1} P(Y_1|U_1)\,P(U_1|H_i)\bigg]\bigg[\sum_{U_2} P(Y_2|U_2)\,P(U_2|H_i)\bigg],$$
and can be evaluated directly using the BSC parameter α and the local thresholds.
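For concreteness, this evaluation can be sketched in a few lines of Python for the two-sensor Gaussian/BSC example. In the sketch, the thresholds are taken on the raw observations rather than on the likelihood ratio, and the parameter values are placeholders only.

```python
from scipy.stats import norm

alpha, pi0 = 0.1, 0.8     # BSC crossover probability and prior P(H0) (example values)
S = 1.0                   # known signal; the noise has unit variance

def p_u1(t, hyp):
    """P(U_k = 1 | H_hyp) for an observation threshold t on X_k = S*hyp + N_k."""
    return norm.sf(t - S * hyp)

def p_y(y, t, hyp):
    """P(Y_k = y | H_hyp) = sum over U_k of P(Y_k | U_k) P(U_k | H_hyp)."""
    q = p_u1(t, hyp)
    py1 = (1 - alpha) * q + alpha * (1 - q)
    return py1 if y == 1 else 1.0 - py1

def optimal_fusion_rule(t1, t2):
    """Likelihood-ratio fusion rule on (Y1, Y2) with threshold pi0/pi1."""
    rule = {}
    for y1 in (0, 1):
        for y2 in (0, 1):
            lr = (p_y(y1, t1, 1) * p_y(y2, t2, 1)) / (p_y(y1, t1, 0) * p_y(y2, t2, 0))
            rule[(y1, y2)] = int(lr > pi0 / (1 - pi0))
    return rule
```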
The threshold for the LRT at sensor 1 becomes
$$\tau_1 = \frac{\pi_0 \sum_{y_2}\big[P(u_0=1|y_2, u_1=1) - P(u_0=1|y_2, u_1=0)\big]\,p(y_2|H_0)}{\pi_1 \sum_{y_2}\big[P(u_0=0|y_2, u_1=0) - P(u_0=0|y_2, u_1=1)\big]\,p(y_2|H_1)} = \frac{\pi_0 \sum_{y_2}\sum_{y_1}\big[P(u_0=1|y_1, y_2)\,P(y_1|u_1=1) - P(u_0=1|y_1, y_2)\,P(y_1|u_1=0)\big]\,p(y_2|H_0)}{\pi_1 \sum_{y_2}\sum_{y_1}\big[P(u_0=0|y_1, y_2)\,P(y_1|u_1=0) - P(u_0=0|y_1, y_2)\,P(y_1|u_1=1)\big]\,p(y_2|H_1)}, \qquad (9)$$
where $P(u_0|y_1, y_2) = 1$ or $0$ according to the obtained optimal fusion rule, and
$$p(y_2|H_i) = \sum_{U_2} P(y_2|U_2)\,P(U_2|H_i),$$
which can be calculated using the threshold for sensor 2. Thus, as expected, the optimal threshold at sensor 1 is coupled with that at sensor 2. The iterative procedure to obtain the local optimal thresholds as well as the fusion rule can be summarized as follows (a code sketch of the loop is given after the list).
1. Initialize τ1 and τ2 .
2. Obtain the optimal fusion rule for fixed τ1 and τ2 .
3. For fixed fusion rule and τ2 , calculate τ1 using (9).
4. Similarly for fixed fusion rule and τ1 , calculate τ2 .
5. Check convergence, i.e., whether the obtained τ1 and τ2 are identical (up to a prescribed precision) to those from the previous iteration. If so, stop; otherwise, go to step 2.
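A sketch of this loop, reusing p_y and optimal_fusion_rule from the previous snippet, is given below. The update implements (9) and then converts the LRT threshold into a threshold on the raw observation (for S = σ = 1 the likelihood ratio is exp(x − 1/2), so the observation threshold is ln τ + 1/2). If the current fusion rule ignores the sensors entirely, (9) is undefined, which is one reason multiple initializations are needed; the initialization below is arbitrary.

```python
import math

def bsc(y, u):
    """P(Y_k = y | U_k = u) for the binary symmetric channel."""
    return 1 - alpha if y == u else alpha

def update_threshold(rule, t_other, sensor):
    """PBPO update (9) for one sensor, holding the fusion rule and the other
    sensor's observation threshold fixed; returns an observation threshold."""
    num = den = 0.0
    for y_o in (0, 1):                 # channel output of the *other* sensor
        # P(u0 = 1 | y_other, u_k), marginalizing over this sensor's channel output
        p1 = {}
        for u_k in (0, 1):
            p1[u_k] = sum(
                rule[(y_k, y_o) if sensor == 1 else (y_o, y_k)] * bsc(y_k, u_k)
                for y_k in (0, 1)
            )
        diff = p1[1] - p1[0]           # equals P(u0=0|., u_k=0) - P(u0=0|., u_k=1)
        num += diff * p_y(y_o, t_other, 0)
        den += diff * p_y(y_o, t_other, 1)
    if num <= 0 or den <= 0:
        raise ValueError("degenerate fusion rule; try a different initialization")
    tau = (pi0 / (1 - pi0)) * num / den
    return math.log(tau) + 0.5         # observation threshold for S = sigma = 1

t1 = t2 = 1.0                                           # step 1: initialization
for _ in range(200):
    rule = optimal_fusion_rule(t1, t2)                  # step 2
    t1_new = update_threshold(rule, t2, sensor=1)       # step 3
    t2_new = update_threshold(rule, t1_new, sensor=2)   # step 4
    if max(abs(t1_new - t1), abs(t2_new - t2)) < 1e-9:  # step 5
        t1, t2 = t1_new, t2_new
        break
    t1, t2 = t1_new, t2_new

print("observation thresholds:", t1, t2)
```

With the placeholder values pi0 = 0.8 and alpha = 0.1, this sketch settles at a symmetric pair of observation thresholds, consistent with the symmetric solutions reported below.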
Notice that for the Gaussian problem, the LRT thresholds can be directly translated into thresholds on the original observations. Notice further that the obtained thresholds satisfy only a necessary condition for optimality; thus multiple initializations are needed, and the resulting thresholds are compared with each other to find the best one. Below are three different parameter settings and the respective results.
• π0 = 0.8 and α = 0.1. For this example, the iteration always converges to the same point, τ1 = τ2 = 1.0808, which suggests that this may be the globally optimal point. This is confirmed in Fig. 2, where the analytically calculated minimum achievable error probabilities for different τ1 and τ2 are plotted, showing a unique minimum at (1.0808, 1.0808) with error probability Pe0 = 0.1909. Further, the error probability is capped at 0.2 for all (τ1 , τ2 ), which is sensible since π0 = 0.8; that is, one should do no worse than ignoring the local sensor decisions and always deciding H0.
• π0 = 0.5 and α = 0.1. This is the equal-prior case. For this example, the iteration converges to two different points depending on the initialization: τ1 = τ2 = 0.9538 and τ1 = τ2 = 0.0462. Fig. 3 shows the minimum achievable error probability for different τ1 and τ2 . It turns out that both points achieve identical error probability performance at Pe0 = 0.3259; hence both are global minima.
• π0 = 0.6 and α = 0.1. Again, depending on the initialization, two local minimum points are found: τ1 = τ2 = 1.2485 and τ1 = τ2 = 0.3363. In this case, the error probabilities achieved by these two local minima are 0.3351 and 0.2990, respectively, indicating that the second achieves the global minimum. An error probability plot is given in Fig. 4 that is consistent with the results obtained through the iterative algorithm.
Not surprisingly, all the local minimum points are symmetric, i.e., τ1 = τ2 . Notice that while non-identical optimal local thresholds are possible (see, e.g., [13]), this usually happens only for discrete local sensor observations with carefully selected probability mass functions.
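For completeness, the error-probability surfaces of Figs. 2-4 can be reproduced in sketch form by sweeping the two observation thresholds and evaluating the error probability analytically. The snippet below again reuses p_y, optimal_fusion_rule, and pi0 from the earlier sketches; the grid range is an arbitrary choice.

```python
import numpy as np

def error_probability(t1, t2):
    """P_e^0 for observation thresholds (t1, t2), with the optimal fusion rule
    recomputed for each threshold pair."""
    rule = optimal_fusion_rule(t1, t2)
    pe = 0.0
    for y1 in (0, 1):
        for y2 in (0, 1):
            p_h0 = p_y(y1, t1, 0) * p_y(y2, t2, 0)
            p_h1 = p_y(y1, t1, 1) * p_y(y2, t2, 1)
            pe += pi0 * rule[(y1, y2)] * p_h0 + (1 - pi0) * (1 - rule[(y1, y2)]) * p_h1
    return pe

# Sweep a grid of thresholds on the raw observations and locate the minimum
grid = np.linspace(-2.0, 3.0, 201)
surface = np.array([[error_probability(a, b) for b in grid] for a in grid])
i, j = np.unravel_index(surface.argmin(), surface.shape)
print("minimum error probability on the grid:", surface.min(), "at", (grid[i], grid[j]))
```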
5 Conclusions
For sensor fusion networks, the incorporation of transmission channels into the system design may prove useful in resource-constrained applications, such as the emerging field of wireless sensor networks. In this correspondence, we establish the optimality of the LRT for local sensor decisions in a binary hypothesis testing problem under the Bayesian criterion. A design procedure using the obtained results is then applied to a distributed detection example with a known signal and Gaussian noise at the local sensors and binary symmetric channels between the sensors and the fusion center.
References
[1] R. Radner, "Team decision problems," Annals of Mathematical Statistics, vol. 33, pp. 857–881, 1962.
[2] R.R. Tenney and N.R. Sandell Jr., "Detection with distributed sensors," IEEE Trans. Aerospace and Electronic Systems, vol. 17, pp. 501–510, July 1981.
[3] A.R. Reibman and L.W. Nolte, "Optimal detection and performance of distributed sensor systems," IEEE Trans. Aerospace and Electronic Systems, vol. AES-23, pp. 24–30, Jan. 1987.
[4] I.Y. Hoballah and P.K. Varshney, "Distributed Bayesian signal detection," IEEE Trans. Inform. Theory, vol. 35, pp. 995–1000, Sept. 1989.
[5] S.C.A. Thomopoulos, R. Viswanathan, and D.K. Bougoulias, "Optimal distributed decision fusion," IEEE Trans. Aerospace and Electronic Systems, vol. 25, pp. 761–765, Sept. 1989.
[6] P.K. Willett, P.F. Swaszek, and R.S. Blum, "The good, bad, and ugly: distributed detection of a known signal in dependent Gaussian noise," IEEE Trans. Signal Processing, vol. 48, pp. 3266–3279, Dec. 2000.
[7] B. Chen, R. Jiang, T. Kasetkasem, and P.K. Varshney, "Fusion of decisions transmitted over fading channels in wireless sensor networks," in Proc. 36th Asilomar Conference on Signals, Systems, and Computers, Pacific Grove, CA, Nov. 2002, pp. 1184–1188.
[8] R. Niu, B. Chen, and P.K. Varshney, "Decision fusion rules in wireless sensor networks using fading statistics," in Proc. 37th Annual Conference on Information Sciences and Systems, Baltimore, MD, Mar. 2003.
[9] S.C.A. Thomopoulos and L. Zhang, "Networking delay and channel errors in distributed decision fusion," in Abstracts of Papers, IEEE International Symposium on Information Theory, Kobe, Japan, June 1988, p. 196.
[10] S.C.A. Thomopoulos and L. Zhang, "Distributed decision fusion with networking delays and channel errors," Information Sciences, vol. 66, pp. 91–118, Dec. 1992.
[11] T.M. Duman and M. Salehi, "Decentralized detection over multiple-access channels," IEEE Trans. Aerospace and Electronic Systems, vol. 34, pp. 469–476, 1998.
[12] P.K. Varshney, Distributed Detection and Data Fusion, Springer, New York, 1997.
[13] J.N. Tsitsiklis, "On threshold rules in decentralized detection," in Proc. 25th IEEE Conf. on Decision and Control, Athens, Greece, 1986, vol. 1, pp. 232–236.
Figure 2: Error probability plot for π0 = 0.8 and α = 0.1.
Figure 3: Error probability plot for π0 = 0.5 and α = 0.1.
Figure 4: Error probability plot for π0 = 0.6 and α = 0.1.