
The Entropy of Random Finite Sets
Mohammad Rezaeian and Ba-Ngu Vo
Department of Electrical and Electronic Engineering,
University of Melbourne, Victoria, 3010, Australia
rezaeian, [email protected]
Abstract— We define the Boltzmann-Gibbs entropy of a random finite set on a general space X as the integral of the logarithm of the density function over the space of finite subsets of X, where the measure for this integration is the dominating measure on this space. We show that, with a unit adjustment term, the same value of the entropy can be obtained using the calculus of set integrals, which integrates densities that carry units. Extending this concept of entropy to conditional entropy and mutual information, we use the fundamental result relating mutual information to the variance of the filtering error to derive a counterpart result, in a special case, for signals and systems of finite-set nature. While the Boltzmann-Shannon entropy of the discrete Poisson distribution is expressed by an infinite series, we show that for a uniform Poisson random finite set, whose Poisson cardinality distribution has mean λ, the entropy is equal to λ nats.
I. INTRODUCTION
In standard systems theory, the system state is a vector that evolves in time and generates (vector) observations. Examples of this so-called single-object system span various disciplines, ranging from econometrics to biomedical engineering. Arguably, the most intuitively appealing application is in radar tracking, where the state vector contains the kinematic characteristics of a target moving in space and the observation is a radar return. The state of a wireless channel that has only one path, in conjunction with the received training signal as observation, is another example of a single-object system.
A natural generalization of a single-object system that has wide applicability to many practical problems is a multi-object system, where the (multi-object) state and (multi-object) observation are finite sets of vectors [1], [2]. A multi-object system is fundamentally different from a single-object system in that not only do the individual (vector) states evolve in time, but the number of these states also changes with time due to births and deaths of objects. The observation at each instant is a set of (vector) observations, only some of which originated from the underlying objects. Moreover, there is no knowledge of which object generated which observation. Primarily driven in the 1970s by applications in radar, sonar, guidance, navigation, and air traffic control (see [3] and associated references), today multi-object systems have found applications in many diverse disciplines, including oceanography [4], robotics [5], biomedical research [6] and telecommunications [7]. In telecommunications, a multi-object system can represent the state of a multipath wireless channel: the number of paths and their vector characterizations vary with time, and the received training signal is statistically dependent on this random set.
Uncertainty in a multi-object system is characterized by modelling the multi-target state and measurement as random finite sets, analogous to using random vectors for the (vector) state and measurement in single-object systems. A random finite set is, in essence, a finite-set-valued random variable. However, a finite set is fundamentally different from a vector. Standard tools and concepts for random vectors, such as probability density, optimal estimators and entropy, are not directly applicable to random finite sets.
Point process theory is a rigorous mathematical discipline for dealing with random finite sets that has long been used by statisticians in many diverse applications, including agriculture, geology, and epidemiology [8]. Finite set statistics (FISST) is a set of practical mathematical tools from point process theory, developed by Mahler to address multi-object systems [2]. Innovative multi-object filtering solutions derived from FISST, such as the Probability Hypothesis Density (PHD) filters [2], [9], [10], have attracted substantial interest from academia as well as being adopted for commercial use.
Analogous to the entropy of a random vector, the entropy of a random finite set is of fundamental importance in multi-object systems. However, an operational notion of entropy is not yet well understood for random finite sets. To the best of the authors' knowledge, the only work in this area is that of Mahler, who extended the Kullback-Leibler and Csiszár discriminations to random finite sets [1]. Unlike a random vector, a random finite set has uncertainty in cardinality (discrete) and uncertainty in positions (continuous). The entropy of a random finite set should encapsulate both of these uncertainties.
In this paper we consider the measure-theoretic representation of random finite sets based on a known dominating measure and derive the Boltzmann-Gibbs entropy [11] based on this measure. Considering a general form of the density of an RFS, we derive a decomposition of the entropy of an RFS as the sum of components corresponding to its various types of uncertainty, and we derive the entropy of some well-known RFSs. We also present a set integral formulation of entropy based on the corresponding density representation. Extending the definition of entropy to conditional entropy and mutual information, we apply the result of [12] to derive a relation between mutual information and mean square error for RFS estimation in a special case. In the next section we briefly discuss various representations of an RFS as a special case of a probability space, in particular the representation of various RFSs by densities. The above-mentioned contributions are discussed in Sections III and IV.
II. RANDOM FINITE SETS
A random finite set (RFS) is simply a random variable that takes values as (unordered) finite sets, i.e. a finite-set-valued random variable. For completeness, a formal definition of an RFS is provided in the following.
A random finite set X on X ⊆ R^d is a measurable mapping

X : Ω → F(X),   (1)

where Ω is a sample space with a probability measure P defined on a sigma algebra of events σ(Ω), and F(X) is the space of finite subsets of X, which is equipped with the myopic or Matheron topology.
At the fundamental level, like any random variable, an RFS
is completely described by its probability distribution. The
probability distribution of the RFS X on X is the (probability)
measure P on F(X ) defined by
P(T) = Pr(X ∈ T)   (2)
for any Borel subset T of F(X ), where X ∈ T denotes the
measurable subset {ω ∈ Ω : X(ω) ∈ T } of Ω.
In many applications it is more practical to describe an RFS via a probability density rather than a probability distribution. The notion of a density is intimately tied to the concepts of measure and integration. To describe an RFS by a density, we need a measure on F(X). In point process theory we use the dominating measure µ*, which we define and characterize in the following.
The set of all probability densities on a space Y corresponding to a measure m is denoted by

D(Y, m) = {f = dP/dm : P(Y) = 1},

where dP/dm denotes the Radon-Nikodym derivative of P relative to m. For any Borel subset S of X, let λ_u(S) denote the volume measure of S in units u, and K_u ≜ λ_u(X) is assumed to be finite. The measure λ(A) = f λ_u(A) ≜ ∫_A f(x) λ_u(dx), where f is the constant α/K_u per unit volume u, for a chosen constant α, defines the unitless Lebesgue measure, which we denote by λ. For this measure λ(X) = α, so α is a unitless scalar that we associate with the volume of the whole space X when measured by λ. Similarly, for the product Lebesgue measure λ_u^r on the space X^r (the r-th Cartesian product of X), by letting f ≡ α^r/K_u^r we obtain the r-th product unitless Lebesgue measure λ^r, where λ^r(X^r) = α^r.
The dominating measure µ* on F(X) is defined to be µ*(T) = Σ_{r=0}^∞ µ*(T_r), where T_r is the partition of T that consists of only the sets in T that have cardinality r. We consider a mapping χ that maps vectors to sets by χ([x_1, ..., x_r]^T) = {x_1, ..., x_r}. For any set {x_1, ..., x_r} ∈ T_r, there are r! vectors in X^r (corresponding to different orderings). Therefore we measure the size of T_r by λ^r(χ^{-1}(T_r))/r!. Since λ^r is unitless for any r, we can add up these measures for different r's. Accordingly we have (using the convention X^0 = {∅})

µ*(T) = Σ_{r=0}^∞ λ^r(χ^{-1}(T) ∩ X^r) / r!.   (3)
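As a quick sanity check of (3) (a worked example added here, not part of the original derivation), consider the two simplest classes of subsets. For T = {∅}, only the r = 0 term survives and µ*({∅}) = λ^0(X^0)/0! = 1; for T = F(X)_1, the class of all singletons, only the r = 1 term survives and µ*(F(X)_1) = λ(χ^{-1}(F(X)_1) ∩ X)/1! = λ(X) = α.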
Any density f ∈ D(F(X), µ*) defines an RFS with the probability distribution P = f µ*. This is similar to the conventional way of defining a continuous random variable X on a space X by a density f ∈ D(X, λ_u).¹ Using (3),

P(T) = ∫_T f(x) µ*(dx)
     = Σ_{r=0}^∞ (1/r!) ∫_{X^r} 1_T(χ(x_1, ..., x_r)) f({x_1, ..., x_r}) λ^r(dx_1 ... dx_r).   (4)
Note that P(F(X)_r) is the probability that the cardinality of the RFS X defined by f is r,

Pr(|X| = r) = P(F(X)_r) = (1/r!) ∫_{X^r} f({x_1, ..., x_r}) λ^r(dx_1 ... dx_r).   (5)
A. Density representations of various RFSs
Since λ^r(X^r) = α^r, from (3),

µ*(F(X)) = e^α,   (6)

therefore the simplest (uniform) density on F(X) is a density f that is everywhere on F(X) equal to the constant e^{-α},

f({x_1, ..., x_r}) = e^{-α}.   (7)

The RFS corresponding to this density is called the uniform Poisson RFS². The cardinality distribution of such an RFS is

Pr(|X| = r) = (e^{-α}/r!) ∫_{X^r} λ^r(dx_1 ... dx_r) = e^{-α} α^r / r!,   (8)

which is the Poisson distribution, and α is the expected number of points of the RFS X, i.e. α = E(|X|). In general, a Poisson RFS is defined by

f({x_1, ..., x_r}) = K_u^r e^{-α} f_1(x_1) ... f_1(x_r)   (9)

for a given f_1 ∈ D(X, λ_u), i.e. ∫_X f_1(x) dx = 1 (the uniform Poisson distribution is the case f_1(x) ≡ 1/K_u). The function v(x) = α f_1(x) is called the intensity function, which is sufficient to describe a Poisson RFS because ∫_X v(x) dx = α = E(|X|). Denoting K = K_u/α, the density (9) can be written as f({x_1, ..., x_r}) = K^r e^{-α} v(x_1) ... v(x_r), and also as

f({x_1, ..., x_r}) = K^r r! P(r) f_1(x_1) ... f_1(x_r),   (10)

where P(r) ≜ Pr(|X| = r) is the Poisson distribution in (8). A more general RFS is the i.i.d. cluster process, defined by (10) in which P(r) is an arbitrary probability distribution on {0, 1, 2, ...}. A Bernoulli RFS is the case where P(r) in (10) is Bernoulli over {0, 1}.
¹ Note that if for a random finite set we restrict f to be nonzero only on F(X)_1 (the part of F(X) that represents sets of cardinality one), then due to F(X)_1 = X the two concepts become identical, and as such, a real-valued random variable is a special case of a random finite set.
² Or, the density f({x_1, ..., x_r}) = p for any 0 < p < 1 defines a uniform Poisson RFS whose Poisson cardinality distribution has expectation −log p.
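The Poisson RFS above is easy to simulate, which gives a quick empirical check of (8) and of E(|X|) = α. The following Python sketch is our own illustration (the function and variable names are not from the paper); it draws samples of a Poisson RFS on X = [0, 1] with a hypothetical intensity v(x) = α f_1(x) and compares the empirical cardinality distribution with the Poisson law.

```python
import math
import numpy as np

rng = np.random.default_rng(0)

def sample_poisson_rfs(alpha, sample_f1, n_draws):
    """Draw realizations of a Poisson RFS with E|X| = alpha.

    Cardinality is Poisson(alpha); given cardinality r, the points are
    r i.i.d. draws from the spatial density f1 (provided via sample_f1).
    """
    draws = []
    for _ in range(n_draws):
        r = rng.poisson(alpha)          # cardinality |X|
        draws.append(sample_f1(r))      # r i.i.d. points from f1
    return draws

# Hypothetical example: X = [0, 1], f1 the uniform density, alpha = 3
alpha = 3.0
samples = sample_poisson_rfs(alpha, lambda r: rng.uniform(0.0, 1.0, size=r), 100_000)

cards = np.array([len(s) for s in samples])
print("empirical E|X|:", cards.mean())   # should be close to alpha = 3
for r in range(6):
    emp = np.mean(cards == r)
    poi = math.exp(-alpha) * alpha**r / math.factorial(r)
    print(f"Pr(|X|={r}): empirical {emp:.4f}   Poisson {poi:.4f}")
```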
Note that a density depends on the measure, and in our measure µ* there is one free parameter, α. The parameter K in the density (10) shows the dependency of the density on α. Since K = λ_u(X)/λ(X) = λ_u(S)/λ(S) for any S ⊂ X, K represents the unit in which we measure the space. It also makes the density f unitless, because the unit of f_1 is u^{-1}.
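As a small worked illustration (our own numbers, chosen only for concreteness): if X is an interval of length K_u = 10 m, u is the metre, and we choose α = 2, then K = K_u/α = 5 m, the unitless measure gives λ(X) = α = 2, and a sub-interval S with λ_u(S) = 1 m has λ(S) = λ_u(S)/K = 0.2.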
The product form in (10) implies independence of the occurrence of points. The most general form of the density of an RFS, which allows the occurrences of points to be dependent on each other, is

f({x_1, ..., x_r}) = K^r r! P(r) f_r(x_1, ..., x_r),   (11)

for a given set of symmetric densities f_r ∈ D(X^r, λ_u^r), r = 1, 2, ... . Symmetry of f_r means that any permutation of its arguments does not change its value, and it is zero if its arguments are not distinct. As a result, an RFS can be defined completely by a discrete distribution P(r) and a set of symmetric densities f_r ∈ D(X^r, λ_u^r), r = 1, 2, ... .
For the unitless Lebesgue measure we have, for any A ⊂ X, λ(A) = λ_u(A)/K, therefore λ^r(dx_1 ... dx_r) = K^{-r} dx_1 ... dx_r, and from (11), Equation (4) reduces to

P(T) = Σ_{r=0}^∞ ∫_{χ^{-1}(T) ∩ X^r} f_r'(x_1, ..., x_r) dx_1 ... dx_r,

where f_r'(x_1, ..., x_r) = P(r) f_r(x_1, ..., x_r). Note that f_r' is not a density with respect to λ_u, but, similarly to f_r, it is symmetric and has unit of u^{-r}.
1) Belief functional and set integral: For a closed subset S ⊂ X,

β(S) ≜ P(χ(∪_{r=0}^∞ S^r)) = Σ_{r=0}^∞ ∫_{S^r} f_r'(x_1, ..., x_r) dx_1 ... dx_r   (12)

is the probability that the random finite set has a realization all of whose points belong to S, i.e. β(S) = Pr(X ⊂ S).
Defining

f'({x_1, ..., x_r}) ≜ r! f_r'(x_1, ..., x_r) = K^{-r} f({x_1, ..., x_r}),   (13)

the right-hand side of (12) is referred to as the set integral of f', denoted in general as

∫_S f'(X) δX ≜ Σ_{r=0}^∞ (1/r!) ∫_{S^r} f'({x_1, ..., x_r}) dx_1 ... dx_r.   (14)

Note that f' (like f_r') has unit of u^{-r}, which makes the summation in (14) possible.
According to (12) and (13), we have (see [13] for technical details of the proof):

Theorem 1: For an RFS defined by the density function f,

β(S) = ∫_S f'(X) δX,   (15)

where f'({x_1, ..., x_r}) = K^{-r} f({x_1, ..., x_r}) has unit of u^{-r}.
The functional β(.) is called the belief functional of the corresponding RFS. This functional plays an important role in the finite set statistics (FISST) approach to multi-target filtering introduced by Mahler [1]. For modelling a multi-target system, the belief functional is more convenient than the probability distribution, since the former deals with closed subsets of X whereas the latter deals with subsets of F(X). The belief functional is not a measure, and hence the usual notion of density is not applicable.

In finite set statistics (FISST) it is also shown that f'(X) in (15) (and hence f(X)) can be obtained by a reverse operation, the set derivative of β(.) (see [1] for further details). Therefore an RFS X can also be defined by the belief functional β(S) for any S ⊂ X. We refer to f(X) as the density of the RFS X and to f'(X) = K^{-|X|} f(X) as the FISST probability density of X. FISST converts the construction of multi-target densities from multi-target models into computing set derivatives of belief functionals. Procedures for analytically differentiating belief functionals have been developed and described in [1].
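For a Poisson RFS the set integral in Theorem 1 can be summed in closed form: from (9) and (13), f'({x_1, ..., x_r}) = e^{-α} v(x_1) ... v(x_r), so β(S) = Σ_r (1/r!)(∫_S v dx)^r e^{-α} = exp(−α(1 − ∫_S f_1 dx)), the familiar containment probability of a Poisson process. The Python sketch below is our own illustration (hypothetical names; the example intensity and the region S are arbitrary choices); it compares this closed form with a Monte Carlo estimate of Pr(X ⊂ S).

```python
import math
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical example on X = [0, 1]: intensity v(x) = alpha * f1(x), with
# f1(x) = 2x (a valid density on [0, 1]); region S = [0, 0.5].
alpha = 2.5
F1_S = 0.5**2                                    # \int_0^{0.5} 2x dx = 0.25
beta_closed = math.exp(-alpha * (1.0 - F1_S))    # set integral summed in closed form

# Monte Carlo estimate of Pr(X ⊂ S) by sampling the Poisson RFS directly
n_draws = 200_000
inside = 0
for _ in range(n_draws):
    r = rng.poisson(alpha)                       # cardinality
    pts = np.sqrt(rng.uniform(size=r))           # inverse-CDF sampling of f1(x) = 2x
    inside += np.all(pts <= 0.5)                 # an empty set counts as "all inside"
print(beta_closed, inside / n_draws)             # should agree up to Monte Carlo error
```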
III. THE ENTROPY OF RFS
In this section we define the Boltzmann-Gibbs entropy for an RFS, break it down into various components, and also derive a set integral relation for the entropy.

Measure-theoretic definition of the entropy
For a given unitless measure m on a Polish space Y, corresponding to any (unitless) density f ∈ D(Y, m), the Boltzmann-Gibbs (differential) entropy is defined as [11]

h(f) = ∫_Y η(f) dm,

where the function η : R⁺ → R is

η(x) = −x log x if x ≠ 0, and η(x) = 0 if x = 0.
For an RFS on X with density f ∈ D(F(X), µ*), the Boltzmann-Gibbs entropy will be

h(f) ≜ ∫_{F(X)} η(f) dµ*.   (16)

We refer to this expression as h(X), where X has density f.
Theorem 2: For an RFS X on X with the general form of density function f in (11),

h(X) = H(|X|) + E(h(f_r)) − E log(|X|!) − E(|X|) log K̄,   (17)

where h(f_r) = −∫_{X^r} f_r log(u^r f_r) dx is the differential entropy³ of the density f_r on X^r, K̄ = K/u is a unitless quantity, and E log(|X|!) = Σ_{r=0}^∞ P(r) log(r!).
³ Here, for mathematical clarity, since the argument of the log function should be a unitless value and f_r has unit of u^{-r}, we neutralize the argument by including a factor of u^r in it. Subsequently the last term in (17) will also have the log of a unitless value. This appearance of the unit u in the expression for differential entropy is not essential and is commonly implied. Alternatively, we can consider differential entropy only for unitless densities f̄, by h(f̄) = −∫ f̄ log(f̄) λ(dx), where λ is the unitless Lebesgue measure. In this notation, we should write E h(u^r f_r) as the second term in (17), where u^r f_r is unitless. Also, note that in (16) µ* and f are unitless.
Proof: Using (3),

h(X) = Σ_{r=0}^∞ (1/r!) ∫_{X^r} η(f({x_1, ..., x_r})) λ^r(dx_1 ... dx_r).   (18)
Substituting f from (11), we have

h(X) = Σ_{r=0}^∞ (1/r!) ∫_{X^r} η(K^r r! P(r) f_r(x_1, ..., x_r)) K^{-r} dx_1 ... dx_r
     = − Σ_{r=0}^∞ P(r) log(u^{-r} K^r r! P(r))
       − Σ_{r=0}^∞ P(r) ∫_{X^r} f_r(x_1, ..., x_r) log(u^r f_r(x_1, ..., x_r)) dx_1 ... dx_r.   (19)

The last summation is E(h(f_r)) and the first summation breaks down into the other terms in (17).

The four terms in (17) for the entropy of an RFS reflect the various types of uncertainty of an RFS. They represent the various amounts of information that we require to know a realization of X with a certain accuracy. The first term, H(|X|), is the information about the number of points in the random set X. Knowing this number is r, identifying the positions of the points with consideration of order, and with accuracy of almost a unit volume for each point, requires h(f_r) nats of information, so the average information about the positions of the points is E(h(f_r)). But since for the random set X the order is not important, this second term contains the extra information E(info(order)), which must be deducted. The number of possible orderings of r points is r!, and given r the information about one particular ordering is −log(1/r!); therefore E(info(order)) = Σ_{r=0}^∞ P(r) log(r!). The last term in (17) is a mediator that corrects the reference of the differential entropy to make it possible to add it to the discrete entropy. Note that this term plus the second term can be written as Σ_r P(r)(h(f_r) − log K̄^r), but log K̄^r is the differential entropy of a uniform distribution over X^r, hence the difference h(f_r) − log K̄^r corrects the reference of h(f_r) to that of a uniform distribution depending on the unit measuring the space.

Definition of entropy by set integral

In general, the set integral of a set function f' over X, which has unit volume u, is defined by (14), where the set function f'(X) must have unit of u^{-|X|}. Using the set integral definition (14) for the following function, with f'({x_1, ..., x_r}) = K^{-r} f({x_1, ..., x_r}),

−∫_X f'(X) log(u^{|X|} f'(X)) δX
  = − Σ_{r=0}^∞ P(r) ∫_{X^r} f_r(x_1, ..., x_r) log(r! P(r) u^r f_r(x_1, ..., x_r)) dx_1 ... dx_r
  = H(|X|) + E(h(f_r)) − Σ_{r=0}^∞ P(r) log(r!)
  = h(X) + E(|X|) log K̄,   (20)

which proves the following theorem on the entropy by set integral.

Theorem 3: For an RFS X on X with the FISST density f',

h(X) = −∫_X f'(X) log(u^{|X|} f'(X)) δX − E(|X|) log K̄.   (21)

We remark that the set integral can only be formulated on the FISST densities f'(X), which have unit of u^{-|X|}, and not on the densities f(X) = K^{|X|} f'(X), which are unitless (this is not to be considered a way to avoid the extra term −E(|X|) log K̄). Relation (21) shows that the expression

−∫_X f'(X) log(u^{|X|} f'(X)) δX

cannot by itself be considered the entropy of an RFS with FISST density f'; an additive term −E(|X|) log K̄ corresponding to a unit adjustment is also required. It is easy to show that, similarly to the differential entropy, if we change an RFS Y on X = R to aY (a change of unit), we need to add such a term to the entropy, i.e.

h(aY) = h(Y) + E(|Y|) log |a|.

Here K̄ = K_u/(uα) = λ_u(X)/(u λ(X)) is also a change of unit, where we have changed the volume of X from K_u to α.

In contrast to entropy, for mutual information and KL-divergence, which are the difference of two entropies and the expectation of a ratio of densities, respectively, the adjustment term −E(|X|) log K̄ cancels out, and the set integral can be used directly in the definition of these measures, substituting for the usual integration (see for example [1, (14.164)]).

Entropy of example RFSs

For the i.i.d. cluster point process X, h(f_r) = r h(f_1), and the entropy reduces to

h(X) = H(|X|) + E(|X|)(h(f_1) − log K̄) − E log(|X|!).   (22)

In particular, for a Bernoulli RFS with P(1) = q,

h(X) = h_b(q) + q(h(f_1) − log K̄),

where h_b(.) is the binary entropy function. Also, as a specialization of (22), for a Poisson RFS X defined by the intensity function v(.), by defining the parameters α = ∫ v dx, f_1(x) = v(x)/α, K_u = ∫_X dx, and substituting E(|X|) = α, K = K_u/α and

H(|X|) = − Σ_{n=0}^∞ (e^{-α} α^n / n!) log(e^{-α} α^n / n!)
       = α − α log α + E log(|X|!)   (23)

in (22), we have

h(X) = α(1 + h(f_1) − log(K_u/u)) nat.

In the special case of a uniform Poisson RFS, h(f_1) = log(K_u/u), hence the entropy is just h(X) = α nat. Although the entropy of the discrete Poisson distribution in (23) is expressed by an infinite series, the entropy of a Poisson RFS is given by a simple computable expression.
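The closed form h(X) = α nat for the uniform Poisson RFS can be checked numerically against the decomposition (17)/(22). The Python sketch below is our own illustration (the truncation level r_max is an arbitrary choice); it evaluates H(|X|), E log(|X|!) and the remaining terms of (22) for a uniform Poisson RFS and confirms that they sum to α, even though H(|X|) itself has no elementary closed form.

```python
import math

def uniform_poisson_rfs_entropy(alpha, K_u_over_u, r_max=200):
    """Entropy (in nats) of a uniform Poisson RFS via the decomposition (22):
    h(X) = H(|X|) + E|X| * (h(f1) - log Kbar) - E log(|X|!),
    with h(f1) = log(K_u/u) and Kbar = K_u/(u*alpha) in the uniform case."""
    # Poisson cardinality probabilities P(r), computed in the log domain for stability
    logp = [-alpha + r * math.log(alpha) - math.lgamma(r + 1) for r in range(r_max)]
    p = [math.exp(lp) for lp in logp]
    H_card = -sum(pr * lp for pr, lp in zip(p, logp))                    # H(|X|)
    E_log_fact = sum(pr * math.lgamma(r + 1) for r, pr in enumerate(p))  # E log(|X|!)
    h_f1 = math.log(K_u_over_u)                    # differential entropy of the uniform f1
    log_Kbar = math.log(K_u_over_u / alpha)
    return H_card + alpha * (h_f1 - log_Kbar) - E_log_fact

for alpha in (0.5, 2.0, 7.0):
    print(alpha, uniform_poisson_rfs_entropy(alpha, K_u_over_u=10.0))    # ~= alpha nats
```

As expected from the analysis, the result is independent of the chosen K_u/u, since the α log α terms from (23) and from the last two terms of (22) cancel.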
IV. CONDITIONAL ENTROPY AND MUTUAL INFORMATION
Here we extend the definition of entropy to mutual information using conditional entropy. Assuming the two random finite sets X, Y are dependent, and the conditional density of X given a realization y of Y is f_{x|y}, then h(X|Y = y) is defined as

h(X|y) = ∫_{F(X)} η(f_{x|y}) dµ*_x
       = Σ_{r=0}^∞ (1/r!) ∫_{X^r} η(f({x_1, ..., x_r}|y)) λ^r(dx_1 ... dx_r).   (24)

This is a function of the set y. The conditional entropy h(X|Y) is the average of this function with respect to the density f_y,

h(X|Y) = ∫_{F(Y)} ∫_{F(X)} η(f_{x|y}) f_y dµ*_x dµ*_y
       = Σ_{t=0}^∞ Σ_{r=0}^∞ (1/(t! r!)) ∫_{Y^t} ∫_{X^r} f({y_1, ..., y_t}) η(f({x_1, ..., x_r}|{y_1, ..., y_t})) λ^r(dx_1 ... dx_r) λ^t(dy_1 ... dy_t).   (25)

The mutual information between the two random finite sets X, Y is the reduction in the average residual uncertainty about X after observation of Y,

I(X; Y) = h(X) − h(X|Y).

Relation between mutual information and estimation error

In this section we extend the result of [12] on filtering and mutual information in conventional signal processing to random finite sets in a special case.

For X, Y random vectors of a vector Gaussian channel, where Y = √γ H X + Z and Z has independent standard Gaussian entries, [12] has shown

(d/dγ) I(X; Y) = (1/2) σ²,   (26)

where

σ² = min_g ∫∫ ||H g(Y) − H X||² f_{x|y} f_y dx dy,

and the minimizer is the conditional mean estimator. For a given density f_x (fixed source distribution), h(X) is fixed, and then in the above formulation

(d/dγ) I(X; Y) = −(d/dγ) h(X|Y) = (1/2) σ².   (27)

In contrast to random variables, for random sets the sum of two random sets is not well defined. We can only define a sum for the special case in which the cardinalities of the RFSs in the sum are always the same, and a particular ordering of the elements is considered.

Here we consider the case Y = √γ X + Z, where X, Y, Z are RFSs with the constraint that the cardinality of Z is always equal to the cardinality of X. We assume that Z has independent Gaussian elements and that the summation is with one randomly selected order, uniformly chosen. This is a multiple-object system with unit probability of detection and zero probability of false alarm, but with a random number of objects and a noisy observation of those objects as a randomized displacement. In this case (25) can be written as

h(X|Y) = Σ_{r=0}^∞ P(r) ∫_{Y^r} (1/r!) ∫_{X^r} f(y_1, ..., y_r) η(f({x_1, ..., x_r}|{y_1, ..., y_r})) λ^r(dx_1 ... dx_r) λ^r(dy_1 ... dy_r)
       = Σ_{r=0}^∞ P(r) ∫_{Y^r} f(y_1, ..., y_r) h(X_r|y_1, ..., y_r) λ^r(dy_1 ... dy_r)
       = Σ_{r=0}^∞ P(r) h(X_r|Y_r).   (28)

For each r, using the result of (27), we have

(d/dγ) h(X_r|Y_r) = −(1/2) σ²_r,   |X_r| = |Y_r| = r,   (29)

σ²_r = (1/r!) min_g ∫_{Y^r} ∫_{X^r} ||g([y_1, ..., y_r]) − [x_1, ..., x_r]||² f_{x|y} f_y λ^r(dx_1 ... dx_r) λ^r(dy_1 ... dy_r),   (30)

and the minimizer g is the conditional mean estimator knowing r and γ. From (28) and (29), we have

(d/dγ) I(X; Y) = (1/2) σ²,  where  σ² = Σ_{r=0}^∞ P(r) σ²_r.   (31)
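Equation (26) is the I-MMSE relation of Guo, Shamai and Verdú [12]. As a minimal numerical illustration (our own sketch, not from the paper), for a scalar Gaussian input X ~ N(0, 1) and Y = √γ X + Z with Z ~ N(0, 1), both sides are available in closed form: I(X; Y) = (1/2) log(1 + γ) and the MMSE is 1/(1 + γ), so dI/dγ = (1/2)·1/(1 + γ). The Python snippet below compares a finite-difference derivative of I with half the MMSE; in the RFS setting of (31), the same check would additionally be averaged over the cardinality distribution P(r).

```python
import math

def mutual_info(gamma):
    """I(X; Y) in nats for Y = sqrt(gamma)*X + Z, with X ~ N(0,1), Z ~ N(0,1)."""
    return 0.5 * math.log(1.0 + gamma)

def mmse(gamma):
    """Minimum mean square error of estimating X from Y for the same channel."""
    return 1.0 / (1.0 + gamma)

gamma, eps = 1.7, 1e-6
dI_dgamma = (mutual_info(gamma + eps) - mutual_info(gamma - eps)) / (2 * eps)
print(dI_dgamma, 0.5 * mmse(gamma))   # both ~ 0.185: dI/dgamma = mmse/2, as in (26)
```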
REFERENCES

[1] R. P. S. Mahler, Statistical Multisource-Multitarget Information Fusion. Artech House, 2007.
[2] R. P. S. Mahler, "Multi-target Bayes filtering via first-order multi-target moments," IEEE Trans. Aerospace and Electronic Systems, vol. 39, no. 4, pp. 1152-1178, 2003.
[3] M. I. Skolnik (Ed.), Radar Handbook, 2nd edn., McGraw-Hill, 1990.
[4] D. M. Lane, M. J. Chantler, and D. Dai, "Robust tracking of multiple objects in sector-scan sonar image sequences using optical flow motion estimation," IEEE J. Oceanic Eng., vol. 23, no. 1, pp. 31-46, 1998.
[5] H. Durrant-Whyte and T. Bailey, "Simultaneous localisation and mapping: Part I," IEEE Robotics and Automation Magazine, vol. 13, no. 2, pp. 99-110, 2006.
[6] B. Hammarberg, C. Forster, and E. Torebjork, "Parameter estimation of human nerve C-fibers using matched filtering and multiple hypothesis tracking," IEEE Trans. Biomed. Eng., vol. 49, no. 4, pp. 329-336, 2002.
[7] E. Biglieri and M. Lops, "Multiuser detection in a dynamic environment - Part I: User identification and data detection," IEEE Trans. Information Theory, vol. 53, no. 9, pp. 3158-3170, Sep. 2007.
[8] D. Stoyan, W. S. Kendall, and J. Mecke, Stochastic Geometry and its Applications, Wiley, Chichester; New York, 1995.
[9] B.-N. Vo and W.-K. Ma, "The Gaussian mixture Probability Hypothesis Density filter," IEEE Trans. Signal Processing, vol. 54, no. 11, pp. 4091-4104, 2006.
[10] R. P. S. Mahler, "PHD filters of higher order in target number," IEEE Trans. Aerospace and Electronic Systems, vol. 43, no. 4, pp. 1523-1543, 2007.
[11] W. Slomczynski, Dynamical Entropy, Markov Operators and Iterated Function Systems, Wydawnictwo Uniwersytetu Jagiellonskiego, Krakow, 2003.
[12] D. Guo, S. Shamai, and S. Verdu, "Mutual Information and Minimum Mean-Square Error in Gaussian Channels," IEEE Trans. Information Theory, vol. 51, no. 4, Apr. 2005.
[13] B.-N. Vo, S. Singh, and A. Doucet, "Sequential Monte Carlo Methods for Multi-Target Filtering with Random Finite Sets," IEEE Trans. Aerospace and Electronic Systems, vol. 41, no. 4, Oct. 2005.