
Note on the Generation of Random Points Uniformly Distributed in Hyper-ellipsoids
Hongyan SUN
Dept. of Electrical and Computer Engineering
Royal Military College of Canada
Kingston, ON, Canada, K7K 7B4
M. Farooq
Dept. of Electrical and Computer Engineering
Royal Military College of Canada
Kingston, ON, Canada, K7K 7B4
[email protected]
[email protected]
Abstract - Various techniques for the generation of false
measurements uniformly distributed in a hyper-ellipsoid
with applications to target tracking are examined in this
paper. The drawbacks of the existing algorithms are
discussed and improved versions of these algorithms are
presented. The proposed algorithms are compared with
the original techniques through simulations.
Keywords: Target tracking, clutter, validation gate, Poisson-distributed number, random point
1 Introduction
In Monte Carlo simulations of multi-target tracking scenarios, it is essential to generate random points uniformly distributed in a hyper-ellipsoid to represent a clutter environment. This allows one to generate the false alarms that exist in realistic tracking scenarios. A number of researchers [1-4] have presented various techniques for generating the uniformly distributed random numbers that represent the clutter. For example, Bar-Shalom [1] has used a square with ten times the area of a 2D ellipsoid to generate the required clutter. On the other hand, X. R. Li [2] has proposed different methods in spherical and Cartesian co-ordinates for generating the random points. In reference [3], Ho and Farooq point out drawbacks in Li's methods and develop an algorithm to generate the clutter points based on eigenvalues. More recently, Dezert and Musso [4] have proposed a more efficient method based on radial and linear transformations to generate the uniformly distributed random points from a hyper-sphere to a hyper-ellipsoid.
This paper examines all the techniques for the generation of uniformly distributed random points in a hyper-ellipsoid currently available in the literature [1-4]. A number of improvements to the existing algorithms are discussed. For the algorithms presented by Li [2], several shortcomings are pointed out, and these drawbacks are supported through detailed discussions and proofs. Moreover, the shortcomings of Ho and Farooq's algorithm [3] are also discussed in what follows. The theoretical development of reference [4] is reformulated in order to clarify the role of the radial transformations. As a result of this analysis, a number of new and improved algorithms for the generation of random points uniformly distributed in a hyper-ellipsoid are included in the paper.
2 Problem Formulation

Given an n-dimensional hyper-ellipsoid
$$\varepsilon_r^n = \{\, y : y' S^{-1} y \le \gamma \,\} \qquad (1)$$
where y is a vector of dimension n, S is a symmetric positive definite n × n matrix, γ is some positive number, and ' denotes transposition. In target tracking applications, equation (1) is called a validation region or a gate for the measurements, where $y = \tilde{z} = z - \hat{z} = z(k) - H\hat{x}(k|k-1)$ is the innovation, $S = S(k) = H(k)P(k|k-1)H'(k) + R(k)$ is the corresponding covariance matrix, and γ > 0 determines the size of the validation gate (see [1] for further detail).

In a realistic environment, more than one measurement will be found in the validation region due to clutter, background noise, or other similar targets in the vicinity. For simulation purposes, it is assumed that: (1) the number of false validated measurements can be described by a suitable Poisson model; and (2) the false validated measurements are uniformly distributed in the gate and are independent from scan to scan. The problem herein is to generate these false measurements, i.e. to obtain a Poisson-distributed number of false measurements first, and then to generate that many random points uniformly distributed in the hyper-ellipsoid ε_r^n at every tracking instant.
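As a minimal illustration of this setup (not code from the paper), the following Python sketch draws the Poisson-distributed number of false measurements for a 2D gate, using the gate area A_v = πγ|S|^{1/2} introduced in Section 3.1 and the example parameters used later in Section 4; the function and variable names are illustrative only.

```python
import numpy as np

def expected_false_measurements(S, gamma, clutter_density):
    """Poisson mean for a 2D gate y' S^{-1} y <= gamma: lambda * A_v,
    where the gate area is A_v = pi * gamma * |S|^{1/2}."""
    area = np.pi * gamma * np.sqrt(np.linalg.det(S))
    return clutter_density * area

rng = np.random.default_rng(0)
S = np.array([[10.0, 6.0], [6.0, 10.0]])   # innovation covariance (example from Section 4)
gamma, lam = 9.2103, 1.0                   # gate threshold and clutter density

mean_count = expected_false_measurements(S, gamma, lam)
n_false = rng.poisson(mean_count)          # Poisson-distributed number of false measurements
print(f"Poisson mean = {mean_count:.1f}, drawn count = {n_false}")
```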
3 The methods and associated problems
3.1 Y. Bar-Shalom's method [1]
In example 6.4.2 of Chapter 6 in [1], “The Consistency
and Robustness of the Parametric and Non-parametric
PDAF”, the clutter was generated by using a square with
ten times the area of a 2D ellipsoid.
The efficiency of the method used to generate random points distributed uniformly in the hyper-ellipsoid is equal to the ratio
$$C_B = \frac{\text{area of the ellipsoid } S_E}{\text{area of the square } S_S} = \frac{A_v}{10 A_v} = 0.1 = 10\% \qquad (2)$$
where $A_v = \pi \gamma\, |S(k)|^{1/2}$ is the area of a validation region for a 2D measurement. This yields a very low value of efficiency for simulation purposes. If the number of false measurements per unit area is λ, then the actual number of false measurements in the ellipsoid with area A_v is
$$\bar{n} \approx A_v \lambda \qquad (3)$$
Based on equation (3), a more efficient method for generating random points uniformly distributed in the hyper-ellipsoid is described in what follows.
3.1.1 A modified algorithm

To tighten the bounding region, the extreme value of each coordinate over the ellipsoid is found by solving the following constrained optimization problem:
$$\text{Objective function } f_i = x_i \qquad (4)$$
$$\text{s.t. } y' S^{-1} y \le \gamma \qquad (5)$$
where i = 1, 2,...,n and y = [x1, x2,...,xn]'. For the 2D case, equations (4) and (5) can be expressed as
$$\text{Objective function } f_i = x_i, \quad i = 1, 2 \qquad (6)$$
$$\text{s.t. } y' S^{-1} y \le \gamma \qquad (7)$$
where y = [x1, x2]', and assume
$$S^{-1} = \begin{bmatrix} s_{11} & s_{12} \\ s_{12} & s_{22} \end{bmatrix} \qquad (8)$$

This problem can be solved using Lagrange multipliers. Thus, a new argument x3 ≥ 0 is introduced into equation (7) in order to change the inequality into an equality. That is,
$$\phi = y' S^{-1} y + x_3^2 - \gamma = s_{11}x_1^2 + 2 s_{12} x_1 x_2 + s_{22} x_2^2 + x_3^2 - \gamma = 0 \qquad (9)$$
Therefore, the new objective function is given by
$$F_i = f_i + \lambda_1 \phi = x_i + \lambda_1 \left( s_{11}x_1^2 + 2 s_{12} x_1 x_2 + s_{22} x_2^2 + x_3^2 - \gamma \right) \qquad (10)$$
where λ1 is the Lagrange multiplier. The contact points between the ellipsoid and the rectangle are the solution to the following set of equations
$$\frac{\partial F_i}{\partial x_j} = 0 \qquad (11)$$
where i = 1, 2 and j = 1, 2, 3. Hence, yielding
$$\max x_1 = -\min x_1 = \left( \frac{\gamma s_{22}}{s_{11}s_{22} - s_{12}^2} \right)^{1/2}, \qquad \max x_2 = -\min x_2 = \left( \frac{\gamma s_{11}}{s_{11}s_{22} - s_{12}^2} \right)^{1/2} \qquad (12)$$

The rectangle is generated by the points A(min x1, min x2), B(min x1, max x2), C(max x1, max x2), D(max x1, min x2) and is illustrated in Figure 1.

Figure 1. The rectangle that contains the ellipsoid

The area of the rectangle is
$$S_R = (2\max x_1)(2\max x_2) = \frac{4\gamma\,(s_{11}s_{22})^{1/2}}{s_{11}s_{22} - s_{12}^2} \qquad (13)$$
The efficiency of the method is given by
$$C_S = \frac{\text{area of the ellipsoid } S_E}{\text{area of the rectangle } S_R} = \frac{\pi}{4}\left( \frac{s_{11}s_{22} - s_{12}^2}{s_{11}s_{22}} \right)^{1/2} < \frac{\pi}{4} = 78.5\% \qquad (14)$$
Hence
$$S_R > 1.27\, S_E \qquad (15)$$

The random points are then generated as follows:
(1) Generate the uniformly distributed random variables X_i (i = 1, 2) in the rectangle ABCD: X1 ∈ U(min x1, max x1), X2 ∈ U(min x2, max x2).
(2) Let X = [X1, X2]'.
(3) If X'S^{-1}X ≤ γ, then accept X as the desired uniformly distributed random point in the ellipsoid; else, go to step (1).

For n̄ ≈ A_v λ false measurements in the ellipsoid, at least m = int[S_R λ] + 1 points need to be generated independently and distributed uniformly in the rectangle.

The maximum efficiency of the proposed method is approximately 8 times higher than that of the algorithm in reference [1] for the 2D case. The technique can easily be extended to the n-dimensional case. The simulation results using the proposed algorithm as well as that in reference [1] are illustrated in Figures 4 and 5.
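A minimal Python sketch of the rectangle-based acceptance-rejection procedure described above is given below (my illustration, not the authors' code); numpy is assumed, and the covariance matrix is the 2D example used later in Section 4.

```python
import numpy as np

def uniform_in_2d_ellipsoid_rect(S, gamma, n_points, rng):
    """Rectangle-based acceptance-rejection (Section 3.1.1, sketch only).

    The tight bounding rectangle follows equation (12):
    max x1 = sqrt(gamma*s22/(s11*s22 - s12^2)), similarly for x2,
    where [s_ij] = S^{-1}.
    """
    Sinv = np.linalg.inv(S)
    s11, s12, s22 = Sinv[0, 0], Sinv[0, 1], Sinv[1, 1]
    det = s11 * s22 - s12 ** 2
    x1_max = np.sqrt(gamma * s22 / det)
    x2_max = np.sqrt(gamma * s11 / det)

    accepted = []
    while len(accepted) < n_points:
        x = rng.uniform([-x1_max, -x2_max], [x1_max, x2_max])  # uniform in the rectangle
        if x @ Sinv @ x <= gamma:                              # keep only points inside the gate
            accepted.append(x)
    return np.array(accepted)

rng = np.random.default_rng(1)
S = np.array([[10.0, 6.0], [6.0, 10.0]])
pts = uniform_in_2d_ellipsoid_rect(S, gamma=9.2103, n_points=500, rng=rng)
print(pts.shape)
```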
3.2 X. R. Li's method [2]
Remark 1: As stated in the introduction of [2], "A common mistake in generating random points uniformly distributed over an n-dimensional hyper-ellipsoid is to generate random points uniformly distributed in an n-dimensional hyper-sphere first and then transform the random points into the desired hyper-ellipsoid. A typical example of such mistakes can be found in the algorithm presented in [6]. The resultant random points of such an algorithm are, however, no longer uniformly distributed in the hyper-ellipsoid." [2]
The following discussion demonstrates that the above statement is false.

Lemma 1: If the random points distributed in a hyper-ellipsoid are generated from random points uniformly distributed in a hyper-sphere through a linear invertible non-orthogonal transformation, then the random points in the hyper-ellipsoid are also uniformly distributed.
Proof: The proof of the above lemma is very straightforward and is omitted here for brevity. The result of the lemma is further substantiated through the simulation shown in Figures 6 and 7.
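The following Python sketch (my illustration, not the authors' code) generates points uniformly in the circle {x : x'x ≤ γ} by rejection from its enclosing square and then maps them through an invertible A with S = AA'; for the covariance used in Section 4, the Cholesky factor is exactly the matrix A = [[3.1623, 0],[1.8974, 2.5298]] quoted there.

```python
import numpy as np

def uniform_in_circle(gamma, n_points, rng):
    """Uniform points in the 2D circle x'x <= gamma via simple rejection
    from the enclosing square (illustration only)."""
    pts = []
    while len(pts) < n_points:
        x = rng.uniform(-np.sqrt(gamma), np.sqrt(gamma), size=2)
        if x @ x <= gamma:
            pts.append(x)
    return np.array(pts)

rng = np.random.default_rng(2)
gamma = 9.2103
S = np.array([[10.0, 6.0], [6.0, 10.0]])
A = np.linalg.cholesky(S)                 # S = A A', an invertible (non-orthogonal) map
X = uniform_in_circle(gamma, 2000, rng)   # uniform in the circle (cf. Figure 6)
Y = X @ A.T                               # mapped points in the ellipsoid (cf. Figure 7)
Sinv = np.linalg.inv(S)
print(np.all(np.einsum('ij,jk,ik->i', Y, Sinv, Y) <= gamma + 1e-9))
```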
Remark 2: Algorithm 1 in [2] (Spherical Coordinates Version for Hyper-ellipsoid) is as follows:
Step 1. Generate random numbers: u ~ U(0,1), θ1 ~ U(0, 2π), θj ~ U(0, π), j = 2,...,n−1.
Step 2. Obtain the radius: ρ = (λmax γ u^{2/n})^{1/2}.
Step 3. Let v = [ρ, θ1, θ2,...,θ_{n−1}]' and denote by Y the desired random vector. Proceed as follows.
If ρ² ≤ γλmin then
Y = v
else
if v'S^{-1}v ≤ γ then
Y = v
end
end
go to Step 1 for another trial or random point.

The following lemma shows that the transformation between Cartesian and spherical co-ordinates does not preserve the distribution.

Lemma 2: If the random point X = [X1, X2,...,Xn]' in Cartesian coordinates is uniformly distributed within the hyper-sphere {x : x'x ≤ γ} with probability density function f_X(x1, x2,...,xn), then the random point β = [ρ, φ1, φ2,...,φ_{n−2}, θ]' in spherical coordinates (related to the random point X = [X1, X2,...,Xn]' in Cartesian coordinates) is not uniformly distributed in the hyper-sphere ρ ≤ √γ. The probability density function of β = [ρ, φ1, φ2,...,φ_{n−2}, θ]' in spherical coordinates is given by

$$f_\beta(\rho,\varphi_1,\dots,\varphi_{n-2},\theta) = \left(\frac{1}{2\pi}\right)\left(\frac{n}{\gamma^{n/2}}\,\rho^{n-1}\right)\prod_{i=1}^{n-2}\frac{\sin^{n-i-1}\varphi_i}{\int_0^\pi \sin^{n-i-1}\varphi_i\, d\varphi_i} \qquad (16)$$

where 0 ≤ ρ ≤ √γ, θ ∈ [0, 2π], φi ∈ [0, π], i = 1, 2,...,n−2.

Proof: There exists a transformation between the points X = [X1, X2,...,Xn]' and β = [ρ, φ1, φ2,...,φ_{n−2}, θ]':

$$\begin{aligned} X_1 &= \rho\,\sin\varphi_1 \sin\varphi_2 \cdots \sin\varphi_{n-2}\,\cos\theta \\ X_2 &= \rho\,\sin\varphi_1 \sin\varphi_2 \cdots \sin\varphi_{n-2}\,\sin\theta \\ &\;\;\vdots \\ X_{n-1} &= \rho\,\sin\varphi_1 \cos\varphi_2 \\ X_n &= \rho\,\cos\varphi_1 \end{aligned} \qquad (17)$$

where 0 ≤ ρ ≤ √γ, θ ∈ [0, 2π], φi ∈ [0, π], i = 1, 2,...,n−2. Hence

$$f_\beta(\rho,\varphi_1,\dots,\varphi_{n-2},\theta) = f_X\bigl(x_1(\rho,\varphi_1,\dots,\theta),\dots,x_n(\rho,\varphi_1,\dots,\theta)\bigr)\,\bigl|J(\rho,\varphi_1,\dots,\varphi_{n-2},\theta)\bigr| = \frac{\Gamma(n/2+1)}{(\pi\gamma)^{n/2}}\,\rho^{n-1}\sin^{n-2}\varphi_1\,\sin^{n-3}\varphi_2\cdots\sin\varphi_{n-2} = \left(\frac{1}{2\pi}\right)\left(\frac{n}{\gamma^{n/2}}\,\rho^{n-1}\right)\prod_{i=1}^{n-2}\frac{\sin^{n-i-1}\varphi_i}{\int_0^\pi \sin^{n-i-1}\varphi_i\, d\varphi_i} \qquad (18)$$

Equation (18) indicates that the random variables θ, ρ, φ1,...,φ_{n−2} are mutually independent and that their probability density functions are

$$f_\theta(\theta) = \frac{1}{2\pi}, \qquad \theta \in [0, 2\pi] \qquad (19)$$

$$f_\rho(\rho) = \frac{n\,\rho^{n-1}}{\gamma^{n/2}}, \qquad \rho \in [0, \sqrt{\gamma}\,] \qquad (20)$$

$$f_{\varphi_i}(\varphi_i) = \frac{\sin^{n-i-1}\varphi_i}{\int_0^\pi \sin^{n-i-1}\varphi_i\, d\varphi_i}, \qquad \varphi_i \in [0, \pi], \quad i = 1, 2,\dots,n-2 \qquad (21)$$

Therefore, in spherical coordinates the random point β = [ρ, φ1, φ2,...,φ_{n−2}, θ]' is not uniformly distributed in the hyper-sphere ρ ≤ √γ. It should be pointed out that only the random variable θ is uniformly distributed, over [0, 2π]; the angles φi are not uniformly distributed in (0, π). This shows that the algorithm quoted in Remark 2 is flawed: based on Lemma 2 (equation (21)), the random variables θj ~ U(0, π), j = 2,...,n−1, in Algorithm 1 of [2] do not obey the required distribution; rather, their distribution should follow equation (21).
Remark 3: Algorithm 2 of [2] (Cartesian Coordinates Version for Hyper-ellipsoid) is as follows:
Step 1. Generate random numbers: X_i ~ U(−γλmax, γλmax), i = 1, 2,...,n.
Step 2. Let X = [X1, X2,...,Xn]' and denote by Y the desired random vector. Proceed as follows.
If X'X ≤ γλmin then
Y = X
else if X'S^{-1}X ≤ γ then
Y = X
end
end
go to Step 1.

Based on the efficiency of the acceptance-rejection algorithm for random point generation, the following lemma can be stated.

Lemma 3:
$$\{y : y'S^{-1}y \le \gamma\} \subseteq \{y : y'y \le R\},\; R > 0 \;\Rightarrow\; \{y : y'S^{-1}y \le \gamma\} \subseteq \{y : y'y \le \gamma\lambda_{\max}\} \subseteq \{y : y'y \le R\}$$
This means that the hyper-sphere {y : y'y ≤ γλmax} is the smallest of all hyper-spheres {y : y'y ≤ R, R > 0} that can cover the hyper-ellipsoid {y : y'S^{-1}y ≤ γ}, where λmax is the maximum eigenvalue of S.

Proof: Suppose that uniformly distributed random points in the hyper-ellipsoid {y : y'S^{-1}y ≤ γ} are generated via the acceptance-rejection method; that is, if
$$\{y : y'S^{-1}y \le \gamma\} \subseteq \{y : y'y \le R\}, \quad R > 0 \qquad (22)$$
then (1) generate the random point Y uniformly distributed in {y : y'y ≤ R}, and (2) for the given Y, if Y'S^{-1}Y ≤ γ, then Y is accepted as the desired random point; else, go to (1).
Lemma 3 is proved using the efficiency of this acceptance-rejection method as follows. If Lemma 3 is not true, i.e. there exists R_0 < γλmax such that
$$\{y : y'S^{-1}y \le \gamma\} \subseteq \{y : y'y \le R_0\} \subset \{y : y'y \le \gamma\lambda_{\max}\} \qquad (23)$$
then, with R = R_0 in equation (23), the efficiency of the method is
$$C = \frac{\text{volume of the hyper-ellipsoid}}{\text{volume of the hyper-sphere}} = \left( \frac{\gamma^n |S|}{R_0^n} \right)^{1/2} = \prod_{i=1}^{n} \left( \frac{\gamma\lambda_i}{R_0} \right)^{1/2} \le \sum_{i=1}^{n} \frac{1}{n}\left( \frac{\gamma\lambda_i}{R_0} \right)^{n/2} \qquad (24)$$
(where $|S| = \prod_{i=1}^{n} \lambda_i$), with equality if and only if
$$\left( \frac{\gamma\lambda_1}{R_0} \right)^{n/2} = \left( \frac{\gamma\lambda_2}{R_0} \right)^{n/2} = \dots = \left( \frac{\gamma\lambda_n}{R_0} \right)^{n/2},$$
i.e. λ1 = λ2 = ... = λn = λmax. In that case
$$C = \sum_{i=1}^{n} \frac{1}{n}\left( \frac{\gamma\lambda_i}{R_0} \right)^{n/2} \qquad (25)$$
that is, C is maximum, and
$$C = \left( \frac{\gamma\lambda_1}{R_0} \right)^{n/2} = \dots = \left( \frac{\gamma\lambda_n}{R_0} \right)^{n/2} = \left( \frac{\gamma\lambda_{\max}}{R_0} \right)^{n/2} > 1 \qquad (26)$$
This is contrary to the requirement that an efficiency satisfies C ≤ 1. Hence equation (23) is not valid and Lemma 3 is true.

In Step 1 of Algorithm 2 in [2], the random point X = [X1, X2,...,Xn]' is first generated by
$$X_i \sim U(-\gamma\lambda_{\max},\, \gamma\lambda_{\max}), \quad i = 1,\dots,n \qquad (27)$$
i.e. the random points are uniformly distributed in the hyper-cube whose sides have length 2γλmax. Based on Lemma 3, it is easily shown that {x : x'S^{-1}x ≤ γ} ⊆ {x : x'x ≤ γλmax}, so a hyper-cube that contains this smallest covering hyper-sphere also covers the hyper-ellipsoid, and the acceptance-rejection method of random point generation can then be applied. However, the problems that occur with equation (27) are the following:
(1) If 0 < γλmax < 1/2, then the hyper-ellipsoid {y : y'S^{-1}y ≤ γ} is not completely covered by the hyper-cube {[x1, x2,...,xn]' : xi ∈ [−γλmax, γλmax], i = 1,...,n} in which all random points from equation (27) are distributed; i.e. there are points of the hyper-ellipsoid that can never be generated by equation (27). For example, in the 2-dimensional case, suppose x = [x1, x2]',
$$S = \begin{bmatrix} 1/6 & 1/12 \\ 1/12 & 1/6 \end{bmatrix}, \qquad \gamma = 1.$$
Then the ellipsoid is {x : x'S^{-1}x ≤ γ} and λmax = 1/4. The square from equation (27) is P((−1/4, −1/4), (−1/4, 1/4), (1/4, 1/4), (1/4, −1/4)). It can be verified that the point E((1+√2)/8, (1+√2)/8) lies in the ellipsoid {x : x'S^{-1}x ≤ 1} but not in the square P((−1/4, −1/4), (−1/4, 1/4), (1/4, 1/4), (1/4, −1/4)) (see Figure 2).
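This counter-example is easy to check numerically; the short sketch below (assuming numpy) reproduces the quantities quoted above.

```python
import numpy as np

# Numerical check of the 2D counter-example above (illustration only).
S = np.array([[1/6, 1/12], [1/12, 1/6]])
gamma = 1.0
lam_max = np.linalg.eigvalsh(S).max()          # = 1/4
E = np.array([(1 + np.sqrt(2)) / 8] * 2)       # the point E((1+sqrt(2))/8, (1+sqrt(2))/8)

inside_ellipsoid = E @ np.linalg.inv(S) @ E <= gamma    # True  (quadratic form ~ 0.73)
inside_square = np.all(np.abs(E) <= gamma * lam_max)    # False (0.3018 > 0.25)
print(lam_max, inside_ellipsoid, inside_square)
```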
(2) If γλmax > 1, then √(γλmax) < γλmax. Hence
$$\{x : x'S^{-1}x \le \gamma\} \subseteq \{x : x'x \le \gamma\lambda_{\max}\} \subset \{[x_1,\dots,x_n]' : x_i \in [-\sqrt{\gamma\lambda_{\max}},\, \sqrt{\gamma\lambda_{\max}}\,],\, i = 1,\dots,n\} \subset \{[x_1,\dots,x_n]' : x_i \in [-\gamma\lambda_{\max},\, \gamma\lambda_{\max}],\, i = 1,\dots,n\}$$
which indicates that if equation (27) is modified as
$$X_i \sim U(-\sqrt{\gamma\lambda_{\max}},\, \sqrt{\gamma\lambda_{\max}}\,), \quad i = 1,\dots,n \qquad (27)'$$
then the algorithm becomes more efficient (Figure 3), because the efficiency of the algorithm in the case of equation (27) is
$$C_{2\gamma\lambda_{\max}} = \frac{\text{volume of the hyper-ellipsoid}}{\text{volume of the biggest hyper-cube}} = \frac{(\pi\gamma)^{n/2}\, |S|^{1/2}}{(2\gamma\lambda_{\max})^n\, \Gamma(n/2+1)} \qquad (28)$$
while the efficiency of the algorithm in the case of equation (27)' is
$$C_{2\sqrt{\gamma\lambda_{\max}}} = \frac{\text{volume of the hyper-ellipsoid}}{\text{volume of the smallest hyper-cube}} = \frac{(\pi\gamma)^{n/2}\, |S|^{1/2}}{(2\sqrt{\gamma\lambda_{\max}})^n\, \Gamma(n/2+1)} \qquad (29)$$
The efficiency ratio of the two cases is
$$\frac{C_{2\gamma\lambda_{\max}}}{C_{2\sqrt{\gamma\lambda_{\max}}}} = \frac{(2\sqrt{\gamma\lambda_{\max}})^n}{(2\gamma\lambda_{\max})^n} = \frac{1}{(\gamma\lambda_{\max})^{n/2}} < 1 \qquad (30)$$
That is, C_{2γλmax} < C_{2√(γλmax)}.

Figure 2. 2D square and ellipsoid covered in the min-circle
Figure 3. 2D ellipsoid, circle and squares
Remark 4: Remark (2) of Algorithm 2 in [2] is as follows: "Algorithm 2 in [2] is less efficient than Algorithm 1 in [2] since B_{√(γλmax)} ⊂ {Y : −√(γλmax) ≤ Y_i ≤ √(γλmax), ∀i}, where Y = [Y1, Y2,...,Yn]', and therefore to generate the same number of random points Y in ε_r^n, Algorithm 2 in [2] requires statistically more 'candidate' points (denoted by X) than Algorithm 1 in [2] does (denoted by v). The efficiency ratio of Algorithms 1 and 2 in [2] increases drastically as the dimension n increases:
$$\frac{\text{efficiency of algorithm 1}}{\text{efficiency of algorithm 2}} = \left(\frac{4}{\pi}\right)^{n/2} \Gamma(n/2+1)\text{.''} \qquad (31)$$
Proof: Based on Algorithms 1 and 2 in [2] as printed, equation (31) is incorrect and should be re-written as
$$\frac{\text{efficiency of algorithm 1}}{\text{efficiency of algorithm 2}} = \left(\frac{4\gamma\lambda_{\max}}{\pi}\right)^{n/2} \Gamma(n/2+1)$$
This indicates that Algorithm 2 in [2] is not more efficient than the one based on equation (27)' under the same conditions.

3.2.1 Modified algorithms

Based on Lemma 2, a more efficient algorithm can be developed.

Algorithm B1
1) Generate the random variable θ ~ U[0, 2π].
2) Generate the random variable ρ = √γ ν^{1/n}, where ν ~ U[0, 1].
3) Generate the random variables φi ∈ [0, π], i = 1, 2,...,n−2, according to $f_{\varphi_i}(\varphi_i) = \sin^{n-i-1}\varphi_i \big/ \int_0^\pi \sin^{n-i-1}\varphi_i\, d\varphi_i$ (equation (21)).
4) Use the transformation in equation (17) to obtain the random point X = [X1, X2,...,Xn]' uniformly distributed in {x : x'x ≤ γ}.
5) Compute the matrix A according to S = AA' (Cholesky factorization).
6) Y = AX is the desired random point uniformly distributed in the hyper-ellipsoid {y : y'S^{-1}y ≤ γ}.
This result has been simulated and illustrated in Figure 11.
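A Python sketch of Algorithm B1 follows (my illustration, not the authors' code). The helper `sample_phi` draws the angles from the densities in equation (21) by simple acceptance-rejection, and `spherical_to_cartesian` implements transformation (17); numpy is assumed.

```python
import numpy as np

def sample_phi(k, rng):
    """Sample phi in [0, pi] with density proportional to sin(phi)**k (eq. (21));
    simple acceptance-rejection with a uniform proposal."""
    while True:
        phi = rng.uniform(0.0, np.pi)
        if rng.random() <= np.sin(phi) ** k:
            return phi

def spherical_to_cartesian(rho, phis, theta):
    """Transformation (17): angles phis = [phi_1, ..., phi_{n-2}] plus theta."""
    n = len(phis) + 2
    x = np.empty(n)
    sin_prod = 1.0
    for i, phi in enumerate(phis):            # fills X_n, X_{n-1}, ..., X_3
        x[n - 1 - i] = rho * sin_prod * np.cos(phi)
        sin_prod *= np.sin(phi)
    x[1] = rho * sin_prod * np.sin(theta)
    x[0] = rho * sin_prod * np.cos(theta)
    return x

def algorithm_b1(S, gamma, rng):
    """Sketch of Algorithm B1: direct (rejection-free) generation in the ellipsoid."""
    n = S.shape[0]
    theta = rng.uniform(0.0, 2.0 * np.pi)                           # step 1
    rho = np.sqrt(gamma) * rng.random() ** (1.0 / n)                # step 2
    phis = [sample_phi(n - i - 1, rng) for i in range(1, n - 1)]    # step 3, densities (21)
    x = spherical_to_cartesian(rho, phis, theta)                    # step 4, uniform in x'x <= gamma
    A = np.linalg.cholesky(S)                                       # step 5, S = A A'
    return A @ x                                                    # step 6

rng = np.random.default_rng(3)
S = np.array([[10.0, 6.0], [6.0, 10.0]])
y = algorithm_b1(S, gamma=9.2103, rng=rng)
print(y, y @ np.linalg.inv(S) @ y <= 9.2103)
```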
Based on Lemmas 2 and 3, another algorithm can be developed as follows.
Algorithm B2
1) Generate the random variable θ ~ U[0, 2π].
2) Generate the random variable ρ = √(γλmax) ν^{1/n}, where ν ~ U[0, 1].
3) Generate the random variables φi ∈ [0, π], i = 1, 2,...,n−2, according to $f_{\varphi_i}(\varphi_i) = \sin^{n-i-1}\varphi_i \big/ \int_0^\pi \sin^{n-i-1}\varphi_i\, d\varphi_i$ (equation (21)).
4) Use the transformation in equation (17) to obtain the random point X = [X1, X2,...,Xn]' uniformly distributed in {x : x'x ≤ γλmax}.
5) If X'S^{-1}X ≤ γ, then Y = X is the desired random point uniformly distributed in the hyper-ellipsoid {y : y'S^{-1}y ≤ γ}; else, go to step 1).
Figure 12 illustrates the simulation results.

Based on Lemma 3 and the discussion in Remark 3 (see Figure 3), the following algorithm can be realized:

Algorithm B3
1) Generate the random variables independently: X_i ~ U(−√(γλmax), √(γλmax)), i = 1, 2,...,n.
2) Let X = [X1, X2,...,Xn]'.
3) If X'S^{-1}X ≤ γ, then Y = X is the desired random point uniformly distributed in the hyper-ellipsoid {y : y'S^{-1}y ≤ γ}; else, go to step 1).
Figure 13 illustrates the simulation results for the algorithm.
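A Python sketch of Algorithm B3 is given below (my illustration, not the authors' code); it simply rejects from the hyper-cube with half-side √(γλmax(S)).

```python
import numpy as np

def algorithm_b3(S, gamma, rng):
    """Sketch of Algorithm B3: acceptance-rejection from the hyper-cube
    with half-side sqrt(gamma * lambda_max(S))."""
    n = S.shape[0]
    Sinv = np.linalg.inv(S)
    half_side = np.sqrt(gamma * np.linalg.eigvalsh(S).max())
    while True:
        x = rng.uniform(-half_side, half_side, size=n)   # uniform in the hyper-cube (step 1)
        if x @ Sinv @ x <= gamma:                        # accept if inside the gate (step 3)
            return x

rng = np.random.default_rng(4)
S = np.array([[10.0, 6.0], [6.0, 10.0]])
y = algorithm_b3(S, gamma=9.2103, rng=rng)
print(y)
```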
3.2.2 The efficiency of the algorithms

The efficiency of algorithm B2 is given by
$$C_{B2} = \frac{\text{volume of the hyper-ellipsoid}}{\text{volume of the hyper-sphere}} = \prod_{i=1}^{n}\left( \frac{\lambda_{\min}(S^{-1})}{\lambda_i(S^{-1})} \right)^{1/2} \qquad (32)$$
where $|S^{-1}| = \lambda_1(S^{-1})\lambda_2(S^{-1})\cdots\lambda_n(S^{-1})$ and λi(S^{-1}) is the ith eigenvalue of S^{-1}.
The efficiency of algorithm B3 is given by
$$C_{B3} = \frac{\text{volume of the hyper-ellipsoid}}{\text{volume of the hyper-cube}} = \frac{\pi^{n/2}\, |S|^{1/2}}{(4\lambda_{\max}(S))^{n/2}\, \Gamma(n/2+1)} \qquad (33)$$
Therefore
$$\frac{\text{efficiency of algorithm B2}}{\text{efficiency of algorithm B3}} = \left(\frac{4}{\pi}\right)^{n/2} \Gamma(n/2+1) \qquad (34)$$
The efficiency ratio increases exponentially as the dimension n increases. Thus, algorithm B2 is more efficient than algorithm B3. However, algorithm B1 uses the direct transformation method and is therefore more efficient than both B2 and B3.
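As a quick numerical illustration (not from the paper), the efficiencies in equations (32)-(34) can be evaluated for the 2D covariance used in Section 4; for n = 2 the ratio (34) is 4/π ≈ 1.27. A short sketch, assuming numpy:

```python
import numpy as np
from math import gamma as gamma_fn, pi

# Illustrative evaluation of the efficiencies (32)-(34) for the 2D example of Section 4.
S = np.array([[10.0, 6.0], [6.0, 10.0]])
n = S.shape[0]
eig_Sinv = np.linalg.eigvalsh(np.linalg.inv(S))

C_B2 = np.prod(np.sqrt(eig_Sinv.min() / eig_Sinv))                       # eq. (32): 0.5
C_B3 = pi ** (n / 2) * np.sqrt(np.linalg.det(S)) / (
    (4 * np.linalg.eigvalsh(S).max()) ** (n / 2) * gamma_fn(n / 2 + 1))  # eq. (33): pi/8
print(C_B2, C_B3, C_B2 / C_B3, (4 / pi) ** (n / 2) * gamma_fn(n / 2 + 1))  # ratio matches eq. (34)
```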
3.3 Method of Ho and Farooq [3]

The algorithm in reference [3] is based on the conditional probability density function expressed as
$$f_{X_1,\dots,X_n}(x_1, x_2,\dots,x_n) = f_{X_1}(x_1)\, f_{X_2}(x_2|x_1)\cdots f_{X_n}(x_n|x_1, x_2,\dots,x_{n-1}) = \dots = f_{X_n}(x_n)\, f_{X_{n-1}}(x_{n-1}|x_n)\cdots f_{X_1}(x_1|x_2, x_3,\dots,x_n) \qquad (35)$$
Equation (35) indicates that the density function of the random vector X = [X1, X2,...,Xn]', with n random variables X1, X2,...,Xn, admits n! factorizations based on the conditional probability of each random variable. Hence there are n! possible orders in which to form the vector X = [X1, X2,...,Xn]', making the resulting distribution dependent on the choice of the initial variable.

The random points are not always uniformly distributed over the edges in some axis direction of the hyper-ellipsoid λ1x1² + ... + λn xn² ≤ γ, and the non-uniformly distributed edges depend on the choice of the first random variable. This is illustrated in Figures 8 and 9.

3.3.1 An efficient modification

A new algorithm for the generation of random points uniformly distributed in ε_r^n = {y : y'S^{-1}y ≤ γ} is as follows.
1) Obtain the orthogonal matrix L such that
$$L^{-1} S^{-1} L = \begin{bmatrix} \lambda_1 & 0 & \cdots & 0 \\ 0 & \lambda_2 & \cdots & 0 \\ \vdots & & \ddots & \vdots \\ 0 & 0 & \cdots & \lambda_n \end{bmatrix} = \Lambda$$
where each λi, 1 ≤ i ≤ n, is an eigenvalue of the matrix S^{-1}.
2) Generate the random points X_i ~ U(−(γ/λi)^{1/2}, (γ/λi)^{1/2}), i = 1, 2,...,n.
3) Let X = [X1, X2,...,Xn]'.
4) If X'ΛX ≤ γ, then X = [X1, X2,...,Xn]' is a random point uniformly distributed in the hyper-ellipsoid {y : y'Λy ≤ γ};
5) Y = LX is then the desired random point uniformly distributed in the hyper-ellipsoid ε_r^n = {y : y'S^{-1}y ≤ γ};
6) else, go to step 2).
Figure 10 illustrates the simulation results from this algorithm.
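A Python sketch of the modified algorithm above is given below (my illustration, not the authors' code); `numpy.linalg.eigh` provides the orthogonal matrix L and the eigenvalues λi of S^{-1}.

```python
import numpy as np

def modified_hf_algorithm(S, gamma, rng):
    """Sketch of the modified algorithm of Section 3.3.1: work in the
    eigen-basis of S^{-1}, reject from the tight bounding box, then rotate back."""
    Sinv = np.linalg.inv(S)
    lam, L = np.linalg.eigh(Sinv)          # step 1: L' S^{-1} L = diag(lam), L orthogonal
    bounds = np.sqrt(gamma / lam)          # box semi-sides (gamma / lambda_i)^{1/2}
    while True:
        x = rng.uniform(-bounds, bounds)   # step 2: uniform in the bounding box
        if np.sum(lam * x ** 2) <= gamma:  # step 4: accept if x' Lambda x <= gamma
            return L @ x                   # step 5: rotate back, so y' S^{-1} y <= gamma

rng = np.random.default_rng(5)
S = np.array([[10.0, 6.0], [6.0, 10.0]])
y = modified_hf_algorithm(S, gamma=9.2103, rng=rng)
print(y @ np.linalg.inv(S) @ y <= 9.2103)
```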
3.4 Method of Dezert and Musso [4]

Remark 6: In section 4.1 of [4], it can easily be demonstrated that p(r) = n_z r^{n_z−1} rather than p(r) = 1/V_s^{n_z}(1), where V_s^{n_z}(1) represents the volume of v_s^{n_z}(1). Furthermore, "we want to show that z = ru is uniformly distributed in v_s^{n_z}(1), which is equivalent to proving p(z) = (1/V_s^{n_z}(1)) 1_{v_s^{n_z}(1)}, where 1_α denotes the indicator function on the set α." The proof of the preceding statement is given in what follows.

Proof: According to the definition of a strictly radially symmetric distribution in R^n [8], it is only necessary to prove that: (1) A(ru) is distributed as ru for all orthonormal n_z × n_z matrices A with defining function g(r), i.e. p(A(ru)) = p(ru), where p denotes the density function; and (2) P(ru = 0) = 0.

The fact that u is uniformly distributed on v_s^{n_z}(1) and that r is independent of u implies that P(ru = 0) = 0.

Based on condition (b) in section 4.1 of [4], r is independent of u; hence p(ru) = p(r)p(u), and
p(A(ru)) = p(r(Au)) = p(r) p(u) |A^{-1}| = p(r) p(u) = p(ru)
for all orthonormal n_z × n_z matrices A and u uniformly distributed on v_s^{n_z}(1). Also, ||ru|| = r||u|| = r, so (1) follows from condition (a) in section 4.1 of [4]. This provides the proof of the statement, which is not included in [4].
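The radial construction discussed in this remark (u uniform on the unit hyper-sphere, r with density n_z r^{n_z−1}, so that z = ru is uniform in the unit ball, followed by a linear map) can be sketched in Python as follows; this is my illustration of the idea, not the exact algorithm of [4].

```python
import numpy as np

def radial_linear_transform(S, gamma, n_points, rng):
    """Sketch of the radial/linear-transformation idea: u uniform on the unit
    sphere surface, r with density n*r^(n-1), z = r*u uniform in the unit ball;
    then scale by sqrt(gamma) and map through A with S = A A'."""
    n = S.shape[0]
    u = rng.standard_normal((n_points, n))
    u /= np.linalg.norm(u, axis=1, keepdims=True)      # uniform directions on the sphere
    r = rng.random(n_points) ** (1.0 / n)              # p(r) = n r^(n-1) on [0, 1]
    z = u * r[:, None]                                 # uniform in the unit ball
    A = np.linalg.cholesky(S)                          # any invertible A with S = A A'
    return np.sqrt(gamma) * z @ A.T                    # uniform in y' S^{-1} y <= gamma

rng = np.random.default_rng(6)
S = np.array([[10.0, 6.0], [6.0, 10.0]])
y = radial_linear_transform(S, gamma=9.2103, n_points=2000, rng=rng)
Sinv = np.linalg.inv(S)
print(np.all(np.einsum('ij,jk,ik->i', y, Sinv, y) <= 9.2103 + 1e-9))
```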
4 Simulation and analysis of the algorithms

Simulations are carried out in a 2D space. The parameters for the simulation are chosen as follows:
(a) the ellipsoid ε_r^n = {y : y'S^{-1}y ≤ γ}, where γ = 9.2103,
$$S = \begin{bmatrix} 10 & 6 \\ 6 & 10 \end{bmatrix},$$
y = z − z_0, and z_0 = [100, 100]';
(b) the spatial density of false measurements λ = 1;
(c) the gate probability P_g = 0.99 and the detection probability P_D = 1;
(d) for Figures 6-13, the number of random points generated in the ellipsoid ε_r^n = {y : y'S^{-1}y ≤ γ} is taken to be N = 2000.
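The candidate-point counts quoted below can be reproduced (up to rounding) from these parameters; the following sketch, assuming numpy, evaluates the relevant areas.

```python
import numpy as np

# Check of the candidate-point counts quoted below for these parameters (illustration only).
S = np.array([[10.0, 6.0], [6.0, 10.0]])
gamma, lam = 9.2103, 1.0

area_ellipse = np.pi * gamma * np.sqrt(np.linalg.det(S))   # ~231.5 expected false measurements
area_square = 10.0 * area_ellipse                          # method of [1]: ~2315 candidate points

Sinv = np.linalg.inv(S)
s11, s12, s22 = Sinv[0, 0], Sinv[0, 1], Sinv[1, 1]
area_rect = 4 * gamma * np.sqrt(s11 * s22) / (s11 * s22 - s12 ** 2)   # eq. (13): ~368 candidates

print(area_ellipse * lam, area_square * lam, area_rect * lam)
```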
The simulation results are as follows:
(1) Given (a)-(c), approximately two hundred and thirty one (231) false measurements uniformly distributed in the ellipsoid ε_r^n = {y : y'S^{-1}y ≤ γ} were generated. In the case of reference [1], one has to generate approximately 2315 random points uniformly distributed in the related square, while the new algorithm in section 3.1.1 needs to generate only about 369 random points uniformly distributed in the related rectangle. One therefore has to generate 6.27 times fewer random numbers using the algorithm in section 3.1.1 than with the method cited in reference [1]. These simulation results are illustrated in Figures 4 and 5, respectively.
(2) Figure 6 shows random points uniformly distributed in the circle o_r^n = {x : x'x ≤ γ}, while Figure 7 exhibits the random points in the ellipsoid ε_r^n = {y : y'S^{-1}y ≤ γ} generated through the linear transformation from the circle to the ellipsoid,
$$Y = AX = \begin{bmatrix} 3.1623 & 0 \\ 1.8974 & 2.5298 \end{bmatrix} X$$
(where S = AA' and A is invertible). Moreover, these random points are also uniformly distributed in the ellipsoid ε_r^n = {y : y'S^{-1}y ≤ γ}. (Figures 6 and 7 are obtained from the algorithms in [4], respectively.)
(3) The simulation results of the HF algorithm [3] are depicted in Figures 8 and 9. Figure 8 reveals that when X1 is chosen first to generate the random points, the resulting random points over the edges along the major axis of the ellipsoid (from left to right) are not uniformly distributed. On the contrary, when X2 is chosen first to generate the random points, the results in Figure 9 show that the random points over the edges along the minor axis of the ellipsoid are not uniformly distributed. Therefore, the algorithm depends on the initial choice of the random variable.
(4) Figure 10 illustrates the simulation results of the new algorithm. It overcomes the drawbacks of the HF algorithm [3].
(5) Figures 11 to 13 show the simulation results of the modified algorithms related to X. R. Li's paper [2]. The simulations reveal that Algorithm B1 is the most efficient among the three algorithms. Examination of Figure 11 reveals that all random points generated by Algorithm B1 fall in the ellipsoid ε_r^n = {y : y'S^{-1}y ≤ γ}, in particular for the 2D case, while that may not be the case for Algorithms B2 and B3. This is due to the fact that B2 and B3 exploit the acceptance-rejection method.

5 Conclusion

The generation of random points uniformly distributed in a hyper-ellipsoid, with application to target tracking, is investigated in this paper. The existing algorithms [1-3] are modified in order to arrive at more efficient and more accurate formulations of these algorithms. An appropriate proof for the radial transformation in [4] is also presented. Finally, simulation results are included to substantiate the various claims.
References
[1] Y. Bar-Shalom and T. E. Fortmann, Tracking and Data Association, Academic Press, New York, 1988.
[2] X. R. Li, Generation of Random Points Uniformly Distributed in Hyper-ellipsoids, Proc. of the 1st IEEE Conference on Control Applications, Dayton, OH, Sept. 1992, pp. 654-658.
[3] T-J. Ho and M. Farooq, An Efficient Method for Uniformly Generating a Poisson-Distributed Number of Measurements in a Validation Gate, Proc. of the 2nd International Conference on Information Fusion (Fusion'99), Sunnyvale, CA, July 6-8, 1999, Vol. 2, pp. 749-754.
[4] Jean Dezert and Christian Musso, An Efficient Method for Generating Points Uniformly Distributed in Hyper-ellipsoids, Proc. of the Workshop on Estimation, Tracking and Fusion: A Tribute to Yaakov Bar-Shalom, Monterey, California, May 17, 2001.
[5] Alberto Leon-Garcia, Probability and Random Processes for Electrical Engineering, Addison-Wesley Publishing Company, 1989.
[6] R. Y. Rubinstein, Monte Carlo Optimization, Simulation and Sensitivity of Queuing Networks, John Wiley and Sons, New York, 1986.

Figure 4. False alarms (Bar-Shalom)
Figure 5. False alarms (algorithm in sec. 3.1.1)
Figure 6. False alarms (uniformly distributed in a circle)
Figure 7. False alarms (via linear transformation)
Figure 8. False alarms (Ho and Farooq-1)
Figure 9. False alarms (Ho and Farooq-2)
Figure 10. False alarms (algorithm in sec. 3.3.1)
Figure 11. False alarms (Algorithm B1)
Figure 12. False alarms (Algorithm B2)
Figure 13. False alarms (Algorithm B3)