CALIFORNIA STATE UNIVERSITY, NORTHRIDGE

SEQUENTIAL DETECTION OF SIGNALS IN NOISE
USING WALD AND BERNOULLI TRIAL METHODS

A project submitted in partial satisfaction of the
requirements for the degree of Master of Science in
Electrical Engineering

by
Nizar Hammad

July, 1976

The project of Nizar Hammad is approved:
California State University, Northridge
Contents

I.   Historical Background and Introduction
II.  Theory of Sequential Detection
     A. Theory
     B. The Operating Characteristic Function
     C. Application
        (1) To obtain the OC function
        (2) Graphical illustration of the test procedure
     D. The Average Sample Number Function
        (1) Theory
        (2) Maximum number of samples
     E. Hardware implementation
III. Bernoulli Sequential Detection
     A. Formulation of the problem
     B. The Information Gain Function
     C. Bernoulli detection characteristics
     D. The OC Function
     E. (a) The ASN Function
        (b) Maximum number of observations
IV.  Discussion and Conclusion
References
List of Figures and Tables

Figure 1.  OC function L(σ) versus rms voltage σ
Table 1.   OC function L(σ) and rms voltage σ for a range of h
Figure 2.  Graphical illustration of the sequential test
Table 2.   ASN function values for the σ and L(σ) of Table 1
Figure 3.  ASN function n̄(σ) versus rms voltage σ
Figure 4.  Discrete Gaussian sequential filter
Table 3.   IG function values for different values of x
Figure 5.  Information Gain function of the Bernoulli sequential filter
Table 4.   Tabulated values of h, q(h), L(h) and σ
Figure 6.  Bernoulli OC function L(h) versus q(h)
Figure 7.  ASN function for Bernoulli trials
Table 5.   ASN values for given q, L(q), σ
ABSTRACT

SEQUENTIAL DETECTION OF SIGNALS IN NOISE
USING WALD AND BERNOULLI TRIAL METHODS

by
Nizar Hammad
Master of Science in Electrical Engineering
July, 1976

The case of a Gaussian signal embedded in Gaussian noise is considered using Wald and Bernoulli sequential testing. A hardware implementation for the Wald sequential test is considered, and the Information Gain function used to reach the optimal slicing threshold for the Bernoulli case is investigated. Two important functions, the Operating Characteristic (OC) function and the Average Sample Number (ASN) function, are also considered for both cases.

Results of the two approaches show that the OC function is similar for both tests. The ASN function for Wald sequential testing requires 143 samples, whereas the ASN function for Bernoulli trials requires 145 samples. Both tests were performed on signal and noise having zero mean and equal variance. The two approaches appear to be equally good for the type of signals tested.
I. Historical Background and Introduction

Wald [1] in 1947 put down the basic characteristics of the theory of sequential hypothesis testing. The original proof of the optimality of the sequential probability ratio test was done by Wald and Wolfowitz [2]. The earliest applications of sequential detection techniques to electronic problems were due to Reed, Bussgang and Middleton [3] in the early 1950's. It appears, however, that since then little work has been done in this field.

This paper considers the specific problem in sequential detection of Gaussian signals in Gaussian noise using Wald testing and Bernoulli testing. The type of test assumed here is only a test of a simple hypothesis against a single alternative.

A brief introduction to the Wald theory of sequential detection is presented, and the details of two important functions, the Operating Characteristic function and the Average Sample Number function, are investigated. An application of Wald's sequential theory to the case of a Gaussian signal in Gaussian noise is presented, and the Operating Characteristic function and maximum number of observations are obtained. A graphical representation of the test procedure is given, as well as a hardware implementation of the Wald sequential filter.

The second section of this paper begins by introducing a slicing operator which converts the input random variable into a Bernoulli random variable. The theory of Bernoulli sequential detection is then considered. An application of Bernoulli testing to the same Gaussian signal in Gaussian noise is presented. The Operating Characteristic function and the Average Sample Number function are also investigated.

The conclusion compares the results obtained using Wald and Bernoulli sequential testing methods and considers possible advantages and disadvantages of these two approaches.
II. Wald Theory of Sequential Detection

A. Theory

The Wald sequential method [1] of testing a single null hypothesis against a single alternative hypothesis is considered here. Consider H0 to be the statistical null hypothesis that noise only is present, and H1 the alternative hypothesis that noise plus the signal is present.

Define α (type I error probability) to be the probability that H0 is true but H1 is accepted, and β (type II error probability) to be the probability that H1 is true but H0 is accepted. Radar detection terminology usually refers to α as the false alarm rate and to β as the probability of a missed target.

Consider further that the functional forms of the noise and signal-plus-noise densities are known except for a parameter θ, which is related to the SNR at the detector input. A more specific definition of θ based on the r.m.s. value of the signal and noise is given in a later section. It suffices for the present to let θ = θ0 when H0 is accepted and θ = θ1 when H1 is accepted, where θ0 ≠ θ1.

Consider Λ to be the likelihood ratio. The decision rule for accepting H1 is

    Λ = p(x1, x2, ..., xk, ..., xn | θ1) / p(x1, x2, ..., xk, ..., xn | θ0) ≥ A,     (1)
where A is an upper threshold, p(x1, x2, ..., xk, ..., xn | θ1) is the joint probability density function (p.d.f.) of the sample space of size n on the condition θ = θ1, and p(x1, x2, ..., xk, ..., xn | θ0) is the joint p.d.f. on the condition θ = θ0.

Consider also the decision rule for accepting H0 to be given by

    Λ ≤ B,     (2)

where B is a lower threshold.

The samples xi, i = 1, 2, ..., n, are assumed to be statistically independent, and by taking advantage of the monotonic property of the log function, Eq. (1) can be rewritten in the form

    z(n) = ln Λ = Σ_{i=1}^{n} ln [ p(xi|θ1) / p(xi|θ0) ] ≥ ln A,     (3)

where ln is the natural log, i.e., ln e = 1 with e = 2.718.
The sequential process proceeds as follows. A test is performed at the kth stage of the experiment and a decision is made, based on the k samples, either to accept H0 or H1 or to ask for more data. If H0 or H1 is accepted, the test terminates. Otherwise the experimental evidence is considered insufficient, the sample x_{k+1} is taken, and the test is performed on k + 1 samples. This procedure continues until a decision to accept H0 or H1 is reached. It is assumed that α and β, as well as the thresholds A and B, are specified.

The number of samples n required to reach a terminating decision is a random variable. This investigation is restricted to cases resulting in a finite n, i.e., the probability is unity that the test terminates as n → ∞.
The decision regions for accepting H0 or H1 are fully specified for a given α and β, as the following development will show. Equation (1) can be rewritten in the following form,

    ∫_Y p(x1, ..., xn | θ1) dx ≥ A ∫_Y p(x1, ..., xn | θ0) dx,     (4)

where the integrations are carried out on the set Y whose elements consist of all samples xi which lead to the acceptance of H1. Note, however, that the left-hand side of Eq. (4) is by definition equal to (1 − β), and the right-hand side defines α. Hence, Eq. (4) reduces to the following:

    1 − β ≥ A α.     (5)

The same procedure can also be used to show

    β ≤ B (1 − α).     (6)

Equation (5) can be solved for A and this result substituted into Eq. (3) to yield

    z(n) ≥ ln((1−β)/α).     (7)

A similar form of Eq. (7) based on Eq. (3) can be used in conjunction with Eq. (6) to give

    z(n) ≤ ln(β/(1−α)).     (8)

The decision rule based on k = n samples then becomes: accept H1 if

    z(n) ≥ ln((1−β)/α),

accept H0 if

    z(n) ≤ ln(β/(1−α)),     (9)

and otherwise the test repeats, based on k + 1 samples.
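The decision rule of Eq. (9) can be sketched as a short routine; the following is a minimal sketch in Python, assuming the numerical values used later in this paper (σ0 = 1, σ1 = √2, α = β = .01) and the Gaussian per-sample increment of Eq. (21):

```python
import math

def wald_sprt(samples, sigma0=1.0, sigma1=math.sqrt(2.0),
              alpha=0.01, beta=0.01):
    """Wald SPRT of Eq. (9) on a stream of samples.

    Returns the accepted hypothesis ("H0" or "H1") and the number of
    samples used, or (None, n) if the stream runs out first.
    """
    upper = math.log((1.0 - beta) / alpha)   # accept H1 when z(n) >= upper
    lower = math.log(beta / (1.0 - alpha))   # accept H0 when z(n) <= lower
    z = 0.0
    n = 0
    for x in samples:
        n += 1
        # per-sample log-likelihood increment for the Gaussian case, Eq. (21)
        z += math.log(sigma0 / sigma1) \
             + 0.5 * (1.0 / sigma0**2 - 1.0 / sigma1**2) * x * x
        if z >= upper:
            return "H1", n
        if z <= lower:
            return "H0", n
    return None, n
```

With all-zero inputs the statistic drifts down by ln(σ1/σ0) per sample and crosses the lower boundary at the 14th sample, which agrees with the minimum of n = 14 derived in section C.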
B. The Operating Characteristic Function (OC function)

The OC function L(θ) can be used to measure the performance of a sequential test for values of θ other than θ0 and θ1. Let L(θ) be the probability of accepting H0 when θ is the true parameter. Therefore, 1 − L(θ) must be the probability of accepting H1.

Consider an arbitrary function h = h(θ), where −∞ < θ < ∞, with

    [Λ]^h = Π_{i=1}^{n} [ p(xi|θ1) / p(xi|θ0) ]^h,     (10)

and let h(θ) be the root of

    ∫_{−∞}^{∞} [ p(xi|θ1) / p(xi|θ0) ]^h p(xi|θ) dxi = 1,     (11)

i.e.,

    E{ [ p(xi|θ1) / p(xi|θ0) ]^h } = 1,     (12)

where E{·} denotes the mean or expected value.

The ratio tests can then be stated as follows: choose H1 if [Λ]^{h(θ)} ≥ A^{h(θ)}, and choose H0 if [Λ]^{h(θ)} ≤ B^{h(θ)}. The bounds of Eq. (5) and Eq. (6) can be utilized to give

    A^{h(θ)} = [ (1−β)/α ]^{h(θ)}     (13)

and

    B^{h(θ)} = [ β/(1−α) ]^{h(θ)}.     (14)

Treating Eqs. (13) and (14) as exact thresholds then leads to the required OC function, i.e.,

    L(θ) = [ A^{h(θ)} − 1 ] / [ A^{h(θ)} − B^{h(θ)} ].     (15)

L(θ) cannot be calculated from Eq. (15) until a functional form for h(θ) is known. The required relationship is obtained from Eq. (11).

Recall from the earlier definition of L(θ) that

    L(θ0) = 1 − α     (16)

and

    L(θ1) = β.     (17)

Also, from Eq. (11), the values h(θ0) = 1 and h(θ1) = −1 can be seen.

Different values of h(θ) must be considered in order to plot L(θ). The following example will serve to illustrate this procedure.
C. Application

An application of this procedure to a zero-mean Gaussian signal in zero-mean Gaussian noise is considered. Let σ1² be the variance of the signal plus noise and σ0² the variance of the noise only. The p.d.f. of signal plus noise is therefore given by

    p(x|θ1) = [ 1/(√(2π) σ1) ] exp( −x²/(2σ1²) ),     (18)

and the p.d.f. of noise only is given by

    p(x|θ0) = [ 1/(√(2π) σ0) ] exp( −x²/(2σ0²) ).     (19)

Let θ be the rms voltage of the true parameter, so that θ0 = σ0 and θ1 = σ1. Also, assume in this case that the variances of the signal only and the noise only are both equal to unity, i.e., σ0 = σs = 1 and σ1 = (σ0² + σs²)^{1/2} = √2. The likelihood ratio can then be expressed as

    Λ = (σ0/σ1)^n exp[ (1/2)(1/σ0² − 1/σ1²) Σ_{i=1}^{n} xi² ].     (20)

The variable z(n) is then given by

    z(n) = n ln(σ0/σ1) + (1/2)(1/σ0² − 1/σ1²) Σ_{i=1}^{n} xi².     (21)

The observable, or test statistic, upon which the decision is made is Σ xi²; it is the result of a squaring operation (e.g., a square-law detector).
(1) To obtain the OC function

The OC function can be written in the form [1]

    L(σ) = { [(1−β)/α]^h − 1 } / { [(1−β)/α]^h − [β/(1−α)]^h },     (22)

where h = h(σ) is the root of the equation

    ∫_{−∞}^{∞} [ (σ0/σ1) exp( (1/2)(1/σ0² − 1/σ1²) x² ) ]^h [ 1/(√(2π) σ) ] exp( −x²/(2σ²) ) dx − 1 = 0,     (23)

or

    (σ0/σ1)^h ∫_{−∞}^{∞} [ 1/(√(2π) σ) ] exp{ −(x²/2) [ 1/σ² − h(1/σ0² − 1/σ1²) ] } dx − 1 = 0.     (24)
This equation can be shown to be valid only under the condition

    (σ0/σ1)^{2h} = 1 − h σ² (1/σ0² − 1/σ1²),     (25)

which has the solution

    σ² = [ 1 − (σ0/σ1)^{2h} ] / [ h (1/σ0² − 1/σ1²) ].     (26)

The OC function of Eq. (22) can now be plotted with L(σ) as ordinate and the σ of Eq. (26) as abscissa for a range of h.

Consider the case of a communication problem where α = β = .01 with σ0 = 1.000 and σ1 = 1.414; then

    L(σ) = [ (99)^h − 1 ] / [ (99)^h − (.0101)^h ]     (27)

and

    σ² = 2 ( 1 − 2^{−h} ) / h.     (28)

Computed values of σ and L(σ) for different values of h are shown in Table 1 and plotted in Figure 1. Note that L'Hospital's rule must be used to evaluate these expressions at h = 0; for this case σ(0) = 1.18 and L(1.18) = .5.

Other specific values of L(σ) from Figure 1 are given by L(0) = 1, L(σ0) = .99, L(σ1) = .01, and L(∞) = 0.
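The parametric pair (σ(h), L(σ)) of Eqs. (27) and (28) is straightforward to tabulate; the following is a minimal sketch in Python for the σ0 = 1, σ1 = √2, α = β = .01 case of this section:

```python
import math

def oc_point(h, alpha=0.01, beta=0.01):
    """Return one parametric point (sigma, L) of the OC curve,
    Eqs. (27)-(28), for sigma0 = 1 and sigma1 = sqrt(2)."""
    A = (1.0 - beta) / alpha        # upper threshold, 99 here
    B = beta / (1.0 - alpha)        # lower threshold, .0101 here
    if h == 0.0:
        # both expressions are 0/0 at h = 0; L'Hospital limits
        sigma = math.sqrt(2.0 * math.log(2.0))          # about 1.18
        L = math.log(A) / (math.log(A) - math.log(B))   # 0.5
    else:
        sigma = math.sqrt(2.0 * (1.0 - 2.0 ** (-h)) / h)  # Eq. (28)
        L = (A ** h - 1.0) / (A ** h - B ** h)            # Eq. (27)
    return sigma, L
```

Evaluating at h = 1 and h = −1 reproduces the endpoints L(σ0) = .99 and L(σ1) = .01 of Table 1.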
[Figure 1. OC function L(σ) versus rms voltage σ of the true input parameter.]

    h      σ      L(σ)
    10     .45    1.00
    5      .62    1.00
    2      .87    1.00
    1      1.00   .99
    .5     1.08   .91
    .1     1.16   .61
    0      1.18   .50
    -.1    1.20   .39
    -.5    1.29   .09
    -1     1.41   .01
    -2     1.73   0
    -5     3.52   0
    -10    14.3   0

Table 1. Tabulated values of the OC function L(σ) and rms voltage σ (true input parameter) for a range of the parameter h (−10 ≤ h ≤ 10).
(2) Graphical illustration of the test procedure

Consider the case where the number of observations k is measured along the horizontal axis and the observable Σ xi² is measured along the vertical axis. This is shown in Figure 2. The boundary for deciding H0 is

    ln Λ = ln(β/(1−α)),     (29)

and the test statistic on this boundary is given by

    Σ_{i=1}^{n} xi² = [ ln(β/(1−α)) + n ln(σ1/σ0) ] / [ (1/2)(1/σ0² − 1/σ1²) ].     (30)

The boundary for deciding H1 is

    ln Λ = ln((1−β)/α),     (31)

and the test statistic in this case is

    Σ_{i=1}^{n} xi² = [ ln((1−β)/α) + n ln(σ1/σ0) ] / [ (1/2)(1/σ0² − 1/σ1²) ].     (32)

Let L1 represent the upper threshold line and L0 the lower threshold line. These lines define the decision thresholds. Consider the case where the sum of squared samples falls between L0 and L1. The decision in this region is to ask for more data and continue the test.

The slope of L1 and L0, given the boundary conditions of Eqs. (29) and (31), is therefore

    s = 2 ln(σ1/σ0) / (1/σ0² − 1/σ1²),     (33)

and for the σ0, σ1 values used here, s = 1.39.
[Figure 2. Sum of squared samples versus number of samples k for the sequential test, showing the decision lines L0 and L1.]
The intercepts with the vertical axis are given by

    d0 = 2 ln(β/(1−α)) / (1/σ0² − 1/σ1²)     (34)

and

    d1 = 2 ln((1−β)/α) / (1/σ0² − 1/σ1²),     (35)

and this result is d0 = −18.38 and d1 = 18.38. The horizontal-axis intercept can be shown to be k = 13.26, i.e., n = 14 samples is the minimum number required before H0 can be accepted and the test terminated.
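The threshold-line geometry of Eqs. (33)-(35) can be verified numerically; the following is a minimal sketch in Python for the values used here (σ0 = 1, σ1 = √2, α = β = .01):

```python
import math

alpha = beta = 0.01
sigma0, sigma1 = 1.0, math.sqrt(2.0)
delta = 1.0 / sigma0**2 - 1.0 / sigma1**2          # = 0.5

# Eq. (33): common slope of the decision lines L0 and L1
s = 2.0 * math.log(sigma1 / sigma0) / delta        # about 1.39

# Eqs. (34)-(35): vertical-axis intercepts of L0 and L1
d0 = 2.0 * math.log(beta / (1.0 - alpha)) / delta  # about -18.38
d1 = 2.0 * math.log((1.0 - beta) / alpha) / delta  # about +18.38

# L0 crosses the horizontal axis at k = -d0 / s, so the first
# opportunity to accept H0 is the next whole sample
k_cross = -d0 / s                                  # about 13.26
n_min = math.ceil(k_cross)                         # 14
```

This reproduces s = 1.39, d0 = −18.38, d1 = 18.38, k = 13.26 and the minimum of n = 14 samples quoted above.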
D. The Average Sample Number Function (ASN function)

(1) Theory

The average number of samples required for a sequential test to terminate is given by the ASN function. This function evaluates test performance by specifying the average number of samples necessary to terminate the sequential test given the parameter θ.

Consider z(n) from Eq. (3),

    z(n) = Σ_{i=1}^{n} zi,     (36)

where

    zi = ln [ p(xi|θ1) / p(xi|θ0) ] = ln [ p1(xi) / p0(xi) ].     (37)

Wald [1] has shown that n is a random variable, and that the expected value of z at termination is

    z̄ = n̄ z̄i.     (38)
Kendall [4] has derived Eq. (38) by introducing a new random variable Yi which equals 1 if xi is observed (i ≤ n) and 0 if xi is not observed (i > n). Consider the case when the test terminates at k = n; then

    z(n) = Σ_{i=1}^{n} zi = Σ_{i=1}^{∞} Yi zi,     (39)

with Yi = 1 if i ≤ n and Yi = 0 if i > n. The probability that Yi = 1 is then given by

    Pr[Yi = 1] = Pr[n ≥ i] = Pi + P_{i+1} + P_{i+2} + ...,     (40)

where Pk is the probability that the test terminates at the kth stage. Note that Yi depends only on the values z1, ..., z_{i−1}, and these alone determine whether an observation is made at the ith stage. Hence, Yi and zi are independent, and the expectation of z at termination is therefore given by

    z̄ = Σ_{i=1}^{∞} E[Yi zi] = z̄i Σ_{i=1}^{∞} Pr[Yi = 1] = z̄i n̄.     (41)
Equation (41) can now be rewritten in the following form,

    z̄(θ) = z̄i(θ) n̄(θ),     (42)

where z̄(θ), z̄i(θ) and n̄(θ) are conditional expectations of z, zi and n given the parameter θ.

The expectation z̄(θ) will take only two values (neglecting any excess over the boundaries): z(θ) = ln A, with probability 1 − L(θ) that the test terminates by accepting H1 when θ is the true parameter, and z(θ) = ln B, with probability L(θ) that the test terminates by accepting H0 when θ is the true parameter. Hence, Eq. (42) can be written in the form

    z̄(θ) = [1 − L(θ)] ln A + L(θ) ln B,     (43)

so that

    n̄(θ) = { [1 − L(θ)] ln A + L(θ) ln B } / z̄i(θ).     (44)

Recall, from the previous development of L(θ), that

    ln A = ln((1−β)/α)     (45)

and

    ln B = ln(β/(1−α)).     (46)
By taking the expectation of Eq. (37) with θ = σ, z̄i(σ) can be expressed as

    z̄i(σ) = ln(σ0/σ1) + (1/2)(1/σ0² − 1/σ1²) σ².     (47)

Equation (44) can then be rewritten as

    n̄(σ) = { [1 − L(σ)] ln((1−β)/α) + L(σ) ln(β/(1−α)) } / { ln(σ0/σ1) + (1/2)(1/σ0² − 1/σ1²) σ² },     (48)

and for the values considered in this example, Eq. (48) reduces to

    n̄(σ) = [ 4.595 − 9.190 L(σ) ] / [ .25 σ² − .3465 ].     (49)

Equation (48) is valid only in the interval σ = (−∞, 1.18) [1]. It can also be shown by the same procedure that n̄(σ) for the interval σ = (1.18, +∞) is given by

    n̄(σ) = { L(σ) ln A + [1 − L(σ)] ln B } / z̄i(σ).     (50)
The ASN function has been calculated using the values of σ and L(σ) from Table 1, and its values are shown in Table 2 and plotted in Figure 3.
(2) Maximum number of samples

Recall that for h = 0 the OC expression of Eq. (15) becomes indeterminate, and Eqs. (44) and (49), as noted previously, are acceptable for all σ ≠ σ', where σ' is the value for which h(σ) vanishes, i.e., h(σ') = 0.

The application of L'Hospital's rule to Eq. (15) at the point σ' yields

    L(σ') = lim_{h→0} ( A^h − 1 ) / ( A^h − B^h ) = ln A / ( ln A − ln B ),     (51)

and by combining Eqs. (33), (34), and (35) into (48), the result for n̄(σ) becomes

    n̄(σ) = [ L(σ)(d0 − d1) + d1 ] / ( σ² − s ).     (52)

Note that n̄(σ) → ∞ as σ² → s, where s is given by Eq. (33).

Wald [1] has shown that the limiting value is

    n̄(σ') = − ln A ln B / σ̄z²(σ'),     (53)

where σ̄z²(σ') is the variance of z when σ² = s. Hence, by calculating this variance, the limiting value is obtained:

    n̄(σ') = − ln A ln B / σ̄z²(σ') = 143.     (54)
    σ      L(σ)   n̄(σ)
    .45    1.0    0
    .62    1.0    7
    .87    1.0    9
    1.00   .99    30
    1.08   .91    48
    1.16   .61    142
    1.18   .5     143
    1.20   .39    118
    1.29   .09    24
    1.41   .01    13
    1.73   0      5
    3.52   0      0
    14.3   0      0

Table 2. Tabulated values of the ASN function for the values of σ and L(σ) from Table 1.
[Figure 3. ASN function n̄(σ) versus rms voltage σ of the true input parameter.]
Thus 143 trials is the calculated average number of observations before a decision is made.

E. Hardware Implementation

Blasbalg [5] has constructed a sequential filter for the discrete case. A block diagram of a Gaussian sequential filter system is shown in Figure 4. Let fx be the input signal function. This function is sampled by a pulse-sampling process such that the output pulses xi have both amplitude and width proportional to the value of fx at the time of sampling. Hence, the area under each pulse is proportional to xi², and by continuously integrating, this area is obtained as well as the sum of the samples.

The output of the voltage-divider section is given by

    Σ_{i=1}^{n} xi² (1/2)(1/σ0² − 1/σ1²) = (j).     (55)

A step counter and an additional voltage divider are used to obtain the function

    Σ_{i=1}^{n} ln(σ0/σ1) = n ln(σ0/σ1) = (l).     (56)

By summing (j) + (l), the statistic z(n) of Eq. (21) is obtained, and this expression is then compared to ln((1−β)/α) for one threshold and to ln(β/(1−α)) for the other. If the corresponding comparator output indicates that a decision has been reached, this result activates the corresponding flip-flop, giving either H0 or H1.
f" .. , .....
~
•
.......
-
····~
• ·-·-
~-·
•
·~
···~--- ···-----~---- -~· ···-··-~-------~------.-.---·-
_.................. ...........
,
_ _ _ _ _ <- _ _ _ _ __
F-
!
"
..
Zl"'s .. ~
.
Pulse
generator
1
I
I
.
... ~
Step
counter
I
...._
1--
r' Comparator
I
th
I
r
s
!
0
I
FF1
1
r-t R
H.
Summing
amplifier
I
!
f---.1
I
~
I
n~
fx
---i
Pulse
sampling
process
I
I!
I
I
·I
I
~
r---
.,._,.
r---t
l
<ro ..
i:::t
~~
I
J
(J)
'
I~,
i--
~
:
-1
1.....-t
R
Comparator
II
Ls
l t-
t\o
II
I
FF2
I
0 r-
I
!
'---
I
'
I
i
I
=~
!
1
Figure 4.
Discrete Gaussian Sequential Filter
1'\)
0
III. Bernoulli Sequential Detection

The Bernoulli trial approach is developed in parallel with the Wald sequential theory, for similar error probabilities α and β and parameters θ0 and θ1.

A. Formulation of the problem

Define for each of the independent samples x1, x2, ..., xn of a sequence a slicing operator S such that

    si = S(xi) = 1 when xi ≥ x0,  si = S(xi) = 0 when xi < x0.     (57)

A new sequence of independent samples s1, s2, ..., sn is thus obtained, and these new samples take the values zero and one.

The Bernoulli distribution is given by

    Pn(k, x0) = [ n! / ( k!(n−k)! ) ] P(x0)^k [1 − P(x0)]^{n−k},  k = 0, 1, ..., n,     (58)

where P(x0) = Pr(x ≥ x0) and Pn(k, x0) is the probability of choosing k ones and n−k zeroes for a sample of size n and threshold x0. Note that a new Bernoulli distribution is obtained for each threshold x0. The slicing operator S thus converts the input variable xi to a variable whose distribution is known except for the parameter P(x0). The functional form of the output distribution is therefore independent of the functional form of the input distribution. The distribution function for the input is given by

    P(x|θ) = ∫_x^∞ p(r|θ) dr.     (59)
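The slicing operator of Eq. (57) and the exceedance probability of Eq. (59) can be sketched directly; the following is a minimal sketch in Python, assuming zero-mean Gaussian inputs so that P(x0) is the Gaussian tail probability:

```python
import math

def slice_samples(xs, x0=1.0):
    """Slicing operator S of Eq. (57): 1 if the sample reaches the
    threshold x0, else 0."""
    return [1 if x >= x0 else 0 for x in xs]

def tail_prob(x0, sigma=1.0):
    """P(x0) = Pr(x >= x0) of Eq. (59) for a zero-mean Gaussian of
    rms value sigma."""
    return 0.5 * math.erfc(x0 / (sigma * math.sqrt(2.0)))
```

At the threshold x0 = 1 chosen later from the IG function, the noise-only exceedance probability is tail_prob(1.0) ≈ 0.159, in agreement with the P0 = .1586 quoted in section B.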
B. The Information Gain Function (IG function)

Blasbalg [6] defined the IG function as

    I(z, x|P0) = P0(x) log [ P1(x)/P0(x) ] + [1 − P0(x)] log [ (1 − P1(x)) / (1 − P0(x)) ],     (60)

where z is given by Eq. (37), base-10 logarithms are used in Table 3, and the Gaussian exceedance probability is given by

    P(x) = ∫_x^∞ [ 1/(√(2π) σ) ] exp( −r²/(2σ²) ) dr = erfc(x/σ);     (61)

hence,

    P0(x) = P(x|θ0) = erfc(x/σ0) = erfc(x)     (62)

and

    P1(x) = P(x|θ1) = erfc(x/σ1) = erfc(x/√2).     (63)

The threshold x0 that maximizes this IG function per sample can be obtained by plotting Eq. (60); see Table 3 and Figure 5. The magnitude of the IG function is maximum at x = 1.0, and this corresponds to

    P0 = P0(x0) = erfc(1)     (64)

and

    P1 = P1(x0) = erfc(1/√2),     (65)

which results in the values P0 = .1586 and P1 = .3174.
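Equation (60) is simple to evaluate over a grid of candidate thresholds; the following is a minimal sketch in Python, using the erfc-based exceedance probabilities of Eqs. (62)-(63) and base-10 logs to match Table 3:

```python
import math

def info_gain(x, log=math.log10):
    """IG function of Eq. (60) at slicing threshold x, with
    P0 = erfc(x) and P1 = erfc(x/sqrt(2)) as in Eqs. (62)-(63)."""
    p0 = math.erfc(x)
    p1 = math.erfc(x / math.sqrt(2.0))
    return p0 * log(p1 / p0) + (1.0 - p0) * log((1.0 - p1) / (1.0 - p0))

# Grid of thresholds from Table 3; the entry of largest magnitude
# picks the operating threshold x0.
grid = [0.5, 0.6, 0.9, 1.0, 1.5, 2.0, 2.5, 3.0]
x0 = max(grid, key=lambda x: abs(info_gain(x)))
```

Over the Table 3 grid this selects x0 = 1.0, where the IG value is about −.029 (Table 3 lists −.028), the entry of largest magnitude.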
C. Bernoulli Detection Characteristics

The following likelihood ratio is obtained at the output of the slicing operator S [6]:

    Λ = (P1/P0)^k [ (1 − P1)/(1 − P0) ]^{n−k},     (66)

where the Bernoulli-type distribution has been used. The log function can be applied to Eq. (66) to give

    ln Λ = k ln(P1/P0) + (n − k) ln[ (1 − P1)/(1 − P0) ].     (67)
    x     erfc(x)   erfc(x/√2)   I(z,x|P0)
    .5    .477      .617         -.0173
    .6    .395      .548         -.020
    .9    .204      .368         -.027
    1.0   .158      .317         -.028
    1.5   .034      .135         -.026
    2.0   .004      .045         -.014
    2.5   .0004     .0124        -.0046
    3.0   .0000     .0026        -.0011

Table 3. Tabulated values of the IG function for different values of x.

[Figure 5. Information Gain function of the Bernoulli sequential filter for a Gaussian signal in Gaussian noise.]
Define

    a = ln((1−β)/α) / ln[ (1 − P0)/(1 − P1) ],     (68)

    b = ln(β/(1−α)) / ln[ (1 − P0)/(1 − P1) ],     (69)

and

    c = ln(P1/P0) / ln[ (1 − P0)/(1 − P1) ].     (70)

By replacing these terms in Eq. (67), the following thresholds are obtained: accept H0 when

    k ≤ b/(c + 1) + n/(c + 1),     (71)

and accept H1 when

    k ≥ a/(c + 1) + n/(c + 1).     (72)

The random variable k in this case is the number of times in n trials that the samples exceed the threshold x0. Hence, the optimum detector is a counter.
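The counting test of Eqs. (66)-(72) can be sketched by updating ln Λ of Eq. (67) one sliced bit at a time; the following is a minimal sketch in Python with the P0 = .1586, P1 = .3174 values found above:

```python
import math

def bernoulli_sprt(bits, p0=0.1586, p1=0.3174, alpha=0.01, beta=0.01):
    """Bernoulli SPRT: update ln(Lambda) of Eq. (67) per sliced bit and
    compare with the Wald thresholds (equivalent to Eqs. (71)-(72))."""
    upper = math.log((1.0 - beta) / alpha)   # accept H1 at or above this
    lower = math.log(beta / (1.0 - alpha))   # accept H0 at or below this
    d1 = math.log(p1 / p0)                   # increment for a 1 (sample above x0)
    d2 = math.log((1.0 - p1) / (1.0 - p0))   # increment for a 0
    log_lr = 0.0
    n = 0
    for bit in bits:
        n += 1
        log_lr += d1 if bit else d2
        if log_lr >= upper:
            return "H1", n
        if log_lr <= lower:
            return "H0", n
    return None, n
```

Since only the count of ones k and the total n matter, the detector reduces to the counter described above. An all-zero stream reaches the H0 boundary at the 22nd bit; an all-one stream reaches the H1 boundary at the 7th.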
D. The OC Function

The OC function for the Bernoulli case is defined by Wald [1] to be

    L(h) = { [(1−β)/α]^h − 1 } / { [(1−β)/α]^h − [β/(1−α)]^h },     (73)

and the random variable z given by Eq. (37) takes on two values: ln(P1/P0) with probability q, and ln[(1 − P1)/(1 − P0)] with probability (1 − q). Therefore, a parametric equation can be written in the following form,

    E[exp(zh)] = q (P1/P0)^h + (1 − q) [ (1 − P1)/(1 − P0) ]^h = 1,     (74)

and solving for q as a function of h gives

    q(h) = { 1 − [ (1 − P1)/(1 − P0) ]^h } / { (P1/P0)^h − [ (1 − P1)/(1 − P0) ]^h }.     (75)

Values of q(h) and L(h) for the Bernoulli case are tabulated in Table 4 and shown in Figure 6.
E. The ASN Function

(a) The general formula for the ASN function of a sequential test was derived previously [Eq. (44)]. For the Bernoulli case, with P(x|θ) = q for x = 1 and P(x|θ) = 1 − q for x = 0, the expected value of z is given by

    z̄(θ) = q ln[ P1(x)/P0(x) ] + (1 − q) ln[ (1 − P1(x))/(1 − P0(x)) ].     (76)

Hence, Eq. (44) for the Bernoulli-trials case can be rewritten as

    n̄(q) = { L(q) ln B + [1 − L(q)] ln A } / { q ln[ P1(x)/P0(x) ] + (1 − q) ln[ (1 − P1(x))/(1 − P0(x)) ] }.     (77)
    h      q(h)    L(h)   σ
    10     .0009   1.00   .45
    5      .0205   1.00   .62
    2      .1028   1.00   .87
    1      .1596   .99    1.00
    .5     .1932   .91    1.08
    .1     .2252   .61    1.16
    -.1    .2416   .39    1.20
    -.5    .2736   .09    1.29
    -1     .3193   .01    1.41
    -2     .4114   0      1.73
    -5     .640    0      3.52
    -10    .818    0      14.3

Table 4. Tabulated values of h, q(h), L(h) and σ.

[Figure 6. The Bernoulli OC function L(h) versus q(h) and σ.]
(b) Maximum Number of Observations

The average number of observations at the peak can be calculated from Eq. (53). Wald [1] has shown that, for Bernoulli trials, the slope s of the boundary lines L0 and L1 can be obtained from

    s = ln[ (1 − P0(x))/(1 − P1(x)) ] / { ln[ P1(x)/P0(x) ] − ln[ (1 − P1(x))/(1 − P0(x)) ] }.     (78)

The variance of z for the case q = s (where z has zero mean) is given by

    σ̄z²(q') = − ln[ P1(x)/P0(x) ] · ln[ (1 − P1(x))/(1 − P0(x)) ].     (79)

Hence, Eq. (53) becomes

    n̄(q') = ln((1−β)/α) · ln(β/(1−α)) / { ln[ P1(x)/P0(x) ] · ln[ (1 − P1(x))/(1 − P0(x)) ] },     (80)

and by using Eqs. (64) and (65), the average number of observations is found to be 145. The ASN function for this case is shown in Figure 7.
    q       L(q)   ASN    σ
    .1586   .99    68     1.00
    s       .50    145    1.18
    .3174   .01    58     1.41

Table 5. Tabulated values of the ASN for given q, L(q), and σ; the middle row is the peak, at the slope value q = s of Eq. (78).

[Figure 7. The ASN function for Bernoulli trials.]
IV. Discussion and Conclusion

The OC functions are found to be identical for both the Wald and the Bernoulli tests, as illustrated in Figures 1 and 6. The IG function has been examined in Figure 5 in order to obtain the optimal threshold (x0 = 1.0) for the Bernoulli trials.

The introduction of the slicing operator leads intuitively to a loss in information, since Figure 2 indicates that for the Wald test any sample may terminate the test (e.g., choose H1 for a sample above L1). However, the Bernoulli test moves up only in fixed steps and cannot terminate on a single large sample value as in Wald's test.

The ASN function shows that the Bernoulli sequential test requires an average of 145 samples, whereas the Wald test requires 143 samples. Furthermore, by considering the IG function, an optimal threshold x0 is found and the difference between the average numbers of samples required for the two tests to terminate is minimized.

The advantage of the Bernoulli method appears to reside in the simplicity of the computation (e.g., hardware). If automatic computation is available in the Bernoulli case, it is possible that by using different values for x0, the minimum average number of samples can be reduced in an adaptive manner.
References

[1] Wald, A. Sequential Analysis. John Wiley and Sons, New York, 1947.

[2] Wald, A. and Wolfowitz, J. "Optimum Character of the Sequential Probability Ratio Test." Ann. Math. Statist., vol. 19, p. 326, 1948.

[3] Bussgang, J.J. and Middleton, D. "Sequential Detection of Signals in Noise." Harvard Cruft Laboratory Technical Report 175, 1955.

[4] Kendall, M.G. and Stuart, A. The Advanced Theory of Statistics, vol. 2. Hafner Publishing Co., New York, 1961.

[5] Blasbalg, H. Johns Hopkins University Radiation Laboratory Technical Report, October 1954, p. 76.

[6] Blasbalg, H. Johns Hopkins University Radiation Laboratory Technical Report, January 1956, p. 5.