
On the Exact Asymptotic of Total Displacement for
Covering a Unit Interval
Rafał Kapelko
Department of Computer Science,
Wrocław University of Technology, Poland
[email protected]
1 Motivation and algorithm
Consider $n$ sensors placed randomly and independently with the uniform distribution on the unit interval $[0, 1]$.
The sensors have identical sensing range equal to $\frac{1}{2n}$; thus a sensor placed at location $x$ in the unit interval can sense any point at distance at most $\frac{1}{2n}$ either to the left or to the right of $x$. We are interested in moving the sensors from their initial positions to new locations so as to ensure coverage of the unit interval, i.e., every point in the unit interval is within the range of some sensor. What is a displacement of minimum cost that ensures coverage?
Observe that the only way to attain coverage is for the sensors to occupy the anchor locations $t_i = \frac{i}{n} - \frac{1}{2n}$, for $i = 1, 2, \dots, n$ (see Algorithm 1): the $n$ sensing intervals have total length exactly $1$, so they must tile $[0, 1]$ without gaps or overlaps.
Algorithm 1 $MV(n)$.
Require: $n$ mobile sensors with identical sensing radius $r = \frac{1}{2n}$ placed uniformly and independently at random on the interval $[0, 1]$.
Ensure: The final positions of the sensors are at the locations $\frac{i}{n} - \frac{1}{2n}$, $1 \le i \le n$ (so as to attain coverage of the interval $[0, 1]$).
1: sort the initial locations of the sensors; denote the locations after sorting by $x_1 \le x_2 \le \dots \le x_n$
2: for $i = 1$ to $n$ do
3:   move the sensor $S_i$ to position $\frac{i}{n} - \frac{1}{2n}$
4: end for
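As an illustration, the following minimal Mathematica sketch computes the anchor positions and the total displacement of Algorithm 1 for one random placement; the helper names MVPositions and MVDisplacement are chosen here only for illustration, and the complete simulation code used for the experiments is given in Section 4.

(* anchor positions i/n - 1/(2 n), i = 1, ..., n *)
MVPositions[n_] := Table[i/n - 1/(2 n), {i, 1, n}]
(* total displacement of one random placement after sorting (step 1 of Algorithm 1) *)
MVDisplacement[n_] := Total[Abs[Sort[RandomReal[{0, 1}, n]] - MVPositions[n]]]
MVDisplacement[100] (* one sample of the total displacement for n = 100 *)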
2 Preliminaries and notations
In this section we recall some facts that will be used in the asymptotic analysis.
We use the Euler Beta function
$$B(a, b) = \int_0^1 x^{a-1} (1 - x)^{b-1}\, dx,$$
which is defined for all positive numbers $a, b$. Let us notice that for integer numbers $a, b$ we have
$$B(a, b)^{-1} = \binom{a+b-1}{a}\, a. \qquad (1)$$
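As a quick check of Identity (1), take $a = 2$ and $b = 3$:
$$B(2, 3)^{-1} = \left(\int_0^1 x (1 - x)^2\, dx\right)^{-1} = \left(\frac{1}{12}\right)^{-1} = 12 = \binom{4}{2}\cdot 2 .$$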
Let us define a function $g_{a:b}(x) = x^{a-1}(1-x)^{b-1}$ on the interval $[0, 1]$. We say that a random variable $X_{a,b}$ concentrated on the interval $[0, 1]$ has the $B(a, b)$ distribution with parameters $a, b$ if it has the probability density function $f(x) = (B(a, b))^{-1} g_{a:b}(x)$.
Hence
$$\Pr[X_{a,b} < t] = \frac{1}{B(a, b)} \int_0^t g_{a:b}(x)\, dx. \qquad (2)$$
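It is a standard fact, used implicitly in the proof of Theorem 1 below, that the $i$th order statistic $X_i$ of $n$ independent random variables uniform on $[0, 1]$ has the $B(i, n-i+1)$ distribution; its density is
$$f_{X_i}(x) = \frac{n!}{(i-1)!\,(n-i)!}\, x^{i-1}(1-x)^{n-i} = i\binom{n}{i} g_{i:n-i+1}(x).$$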
We will also use the following forms of Stirling's formula (see [1, page 54], [2, Formula 9.40]):
$$\sqrt{2\pi}\, m^{m+\frac{1}{2}} e^{-m+\frac{1}{12m+1}} < m! < \sqrt{2\pi}\, m^{m+\frac{1}{2}} e^{-m+\frac{1}{12m}}, \qquad (3)$$
$$m! = \sqrt{2\pi}\, m^{m+\frac{1}{2}} e^{-m} \left(1 + O\!\left(\frac{1}{m}\right)\right). \qquad (4)$$
3 Estimation of the expected sum
In this section we present an alternative proof of Theorem 1, which was originally proved in [3].
Theorem 1. Assume that $n$ mobile sensors are thrown uniformly and independently at random in the unit interval. The expected sum of the displacements of all sensors to move from their current locations to the anchor locations $t_i = \frac{i}{n} - \frac{1}{2n}$, for $i = 1, \dots, n$, respectively, is $\Theta(\sqrt{n})$.
Proof. Let $D_i$ be the expected distance between the $i$th order statistic $X_i$ and the $i$th target sensor position $t_i = \frac{i}{n} - \frac{1}{2n}$ on the unit interval, hence given by
$$D_i = i\binom{n}{i}\int_0^{t_i} (t_i - x)\, g_{i:n-i+1}(x)\, dx + i\binom{n}{i}\int_{t_i}^{1} (x - t_i)\, g_{i:n-i+1}(x)\, dx$$
$$= 2\, i\binom{n}{i}\int_0^{t_i} (t_i - x)\, g_{i:n-i+1}(x)\, dx + i\binom{n}{i}\int_0^{1} (x - t_i)\, g_{i:n-i+1}(x)\, dx.$$
Let $D_{i,1} = 2\, i\binom{n}{i}\int_0^{t_i} (t_i - x)\, g_{i:n-i+1}(x)\, dx$ and $D_{i,2} = i\binom{n}{i}\int_0^{1} (x - t_i)\, g_{i:n-i+1}(x)\, dx$.
First we estimate $D_{i,2}$. From the definition of the Beta function and Identity (1) we get
$$D_{i,2} = i\binom{n}{i} B(i+1, n-i+1) - i\binom{n}{i}\, t_i\, B(i, n-i+1) = \frac{i}{n+1} - \frac{i}{n} + \frac{1}{2n}.$$
Therefore $\sum_{i=1}^{n} D_{i,2} = 0$.
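Indeed, summing the three terms separately gives
$$\sum_{i=1}^{n}\left(\frac{i}{n+1} - \frac{i}{n} + \frac{1}{2n}\right) = \frac{n(n+1)}{2(n+1)} - \frac{n(n+1)}{2n} + \frac{n}{2n} = \frac{n}{2} - \frac{n+1}{2} + \frac{1}{2} = 0.$$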
Now we estimate $D_{i,1}$. By Lemma 2 with $t = t_i$ and Equation (2) (together with $i\binom{n}{i} B(i, n-i+1) = 1$, which follows from Identity (1)) we have
$$D_{i,1} = 2\left(\frac{i}{n(n+1)} - \frac{1}{2n}\right)\Pr[X_{i,n-i+1} < t_i] + \frac{2i(n-i+1/2)}{n(n+1)}\binom{n}{i}\, t_i^{\,i} (1-t_i)^{n-i}.$$
Therefore
$$\sum_{i=1}^{n} D_{i,1} = \sum_{i=1}^{n} 2\left(\frac{i}{n(n+1)} - \frac{1}{2n}\right)\Pr[X_{i,n-i+1} < t_i] + \sum_{i=1}^{n}\frac{2i(n-i+1/2)}{n(n+1)}\binom{n}{i}\, t_i^{\,i} (1-t_i)^{n-i}.$$
Observe that the first summand contributes a $\Theta(1)$ term (in particular, each of its summands is at most $\frac{3}{n}$ in absolute value, so the whole sum is $O(1)$). Thus the asymptotics depends on the expression given by the second summand. For the second summand it is sufficient to repeat the proof of [3, Lemma 2]. Hence the second summand is $\Theta(\sqrt{n})$. This completes the proof of Theorem 1. □
Lemma 2. Let $i, n$ be natural numbers. Assume that $i \le n$. Then for $t \in (0, 1)$ we have
$$\int_0^t (t - x)\, x^{i-1} (1 - x)^{n-i}\, dx = \left(t - \frac{i}{n+1}\right)\int_0^t x^{i-1} (1 - x)^{n-i}\, dx + \frac{1}{n+1}\, t^i (1 - t)^{n-i+1}.$$
Proof. The equality can be obtained using integration by parts. It is sufficient to apply integration by parts to the functions $f(x) = (1 - x)^{n-i+1}$ and $g(x) = x^i$. Then, using the identity $(1 - x)^{n-i+1} = (1 - x)^{n-i} - x(1 - x)^{n-i}$, we rewrite the first integral as the sum of two integrals. This easily completes the proof of Lemma 2. □
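In more detail, the integration by parts behind the proof of Lemma 2 can be carried out as follows. With $f(x) = (1-x)^{n-i+1}$ and $g(x) = x^i$ we obtain
$$\int_0^t x^{i-1}(1-x)^{n-i+1}\, dx = \frac{t^i}{i}(1-t)^{n-i+1} + \frac{n-i+1}{i}\int_0^t x^{i}(1-x)^{n-i}\, dx,$$
while the identity $(1-x)^{n-i+1} = (1-x)^{n-i} - x(1-x)^{n-i}$ rewrites the left-hand side as
$$\int_0^t x^{i-1}(1-x)^{n-i}\, dx - \int_0^t x^{i}(1-x)^{n-i}\, dx.$$
Comparing the two expressions and solving for $\int_0^t x^{i}(1-x)^{n-i}\, dx$ yields
$$\int_0^t x^{i}(1-x)^{n-i}\, dx = \frac{i}{n+1}\int_0^t x^{i-1}(1-x)^{n-i}\, dx - \frac{1}{n+1}\, t^i (1-t)^{n-i+1},$$
and substituting this into $\int_0^t (t-x)x^{i-1}(1-x)^{n-i}\, dx = t\int_0^t x^{i-1}(1-x)^{n-i}\, dx - \int_0^t x^{i}(1-x)^{n-i}\, dx$ gives the claimed equality.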
Remark 3. Notice that Lemma 2 follows from the following known identity for the incomplete Beta function (see [4, Identity 8.17.20])
$$I(z; a, b) = I(z; a-1, b) - \frac{\Gamma(a+b-1)}{\Gamma(a)\Gamma(b)}\,(1 - z)^b z^{a-1},$$
where
$$I(z; a, b) = \frac{1}{B(a, b)}\int_0^z x^{a-1} (1 - x)^{b-1}\, dx.$$

3.1 Exact asymptotic
In this subsection we give the exact asymptotics for the expected sum in Theorem 1. We prove that the expected sum of displacements is $\frac{\Gamma\left(\frac{1}{2}+1\right)}{2\sqrt{2}}\sqrt{n} + O(1)$. We begin with the following lemma, which will be helpful in the proof of Theorem 5.
Lemma 4. Let $t_i = \frac{i}{n} - \frac{1}{2n}$. Then
$$\sum_{i=1}^{n}\frac{2i(n-i+1/2)}{n(n+1)}\binom{n}{i}\, t_i^{\,i}(1 - t_i)^{n-i} = \frac{\Gamma\left(\frac{1}{2}+1\right)}{2\sqrt{2}}\sqrt{n} + O(1).$$
Proof. Let $E_i = \frac{2i(n-i+1/2)}{n(n+1)}\binom{n}{i}\left(\frac{i}{n} - \frac{1}{2n}\right)^i\left(1 - \frac{i}{n} + \frac{1}{2n}\right)^{n-i}$. We divide the sum into four parts:
$$\sum_{i=1}^{n} E_i = \sum_{i=1}^{\lfloor\sqrt{n}\rfloor} E_i + \sum_{i=\lfloor\sqrt{n}\rfloor+1}^{n-\lfloor\sqrt{n}\rfloor} E_i + \sum_{i=n-\lfloor\sqrt{n}\rfloor+1}^{n-1} E_i + E_n. \qquad (5)$$
We approximate the four parts separately. It is easy to see that $E_n = \frac{1}{n+1}\left(1 - \frac{1}{2n}\right)^n = \Theta\!\left(\frac{1}{n}\right) = O(1)$.
For the first and third terms, we use Stirling's formula (3) for $m = n$, $m = i$ and $m = n - i$, as well as the inequalities $e^{\frac{1}{12n}} < e$ and $e^{\frac{1}{12i+1} + \frac{1}{12(n-i)+1}} > 1$, to deduce that
$$E_i \le \frac{2e}{\sqrt{2\pi}}\,\frac{1}{n^{3/2}}\sqrt{(n-i)i}\;\frac{n}{n+1}\left(1 + \frac{1}{2(n-i)}\right)\left(1 - \frac{1}{2i}\right)^i\left(1 + \frac{1}{2(n-i)}\right)^{n-i}.$$
Applying the basic inequality $(1 + 1/x)^x < e$ for $x \ge 1$ with $x = 2(n-i)$ (together with $\frac{n}{n+1} < 1$, $\left(1 - \frac{1}{2i}\right)^i < 1$ and $1 + \frac{1}{2(n-i)} \le \frac{3}{2}$), we have
$$E_i \le \frac{3e^{3/2}}{\sqrt{2\pi}}\,\frac{\sqrt{(n-i)i}}{n^{3/2}} \le \frac{3e^{3/2}}{\sqrt{2\pi}}\,\frac{1}{n^{1/2}}.$$
Therefore
$$\sum_{i=1}^{\lfloor\sqrt{n}\rfloor} E_i + \sum_{i=n-\lfloor\sqrt{n}\rfloor+1}^{n-1} E_i = O(1).$$
Hence the first, third and fourth terms contribute $O(1)$ and the asymptotics depends on the second term. For the second term ($\lfloor\sqrt{n}\rfloor + 1 \le i \le n - \lfloor\sqrt{n}\rfloor$) we use Stirling's formula (4) for $m = n$, $m = i$ and $m = n - i$ to deduce that
$$\binom{n}{i}\left(\frac{i}{n}\right)^i\left(1 - \frac{i}{n}\right)^{n-i} = \frac{1}{\sqrt{2\pi}}\sqrt{\frac{n}{i(n-i)}}\;\frac{1 + O\!\left(\frac{1}{n}\right)}{\left(1 + O\!\left(\frac{1}{i}\right)\right)\left(1 + O\!\left(\frac{1}{n-i}\right)\right)} = \frac{1}{\sqrt{2\pi}}\sqrt{\frac{n}{i(n-i)}}\left(1 + O\!\left(\frac{1}{\sqrt{n}}\right)\right).$$
Hence
$$E_i = \frac{2}{\sqrt{2\pi}}\;\frac{n}{n+1}\;\frac{\sqrt{(n-i)i}}{n^{3/2}}\left(1 - \frac{1}{2i}\right)^i\left(1 + \frac{1}{2(n-i)}\right)^{n-i+1}\left(1 + O\!\left(\frac{1}{\sqrt{n}}\right)\right).$$
Now we apply the approximations $\ln(1 + x) = x + O(x^2)$ and $e^x = 1 + O(x)$ and get
$$\left(1 - \frac{1}{2i}\right)^i\left(1 + \frac{1}{2(n-i)}\right)^{n-i+1} = 1 + O\!\left(\frac{1}{\sqrt{n}}\right).$$
Therefore (the factor $\frac{n}{n+1} = 1 + O\!\left(\frac{1}{n}\right)$ is absorbed into the error term)
$$E_i = \frac{2}{\sqrt{2\pi}}\;\frac{\sqrt{(n-i)i}}{n^{3/2}}\left(1 + O\!\left(\frac{1}{\sqrt{n}}\right)\right).$$
Using the inequality $\frac{\sqrt{(n-i)i}}{n^{3/2}} \le \frac{1}{\sqrt{n}}$ we get
$$\sum_{i=1}^{\lfloor\sqrt{n}\rfloor}\frac{\sqrt{(n-i)i}}{n^{3/2}} + \sum_{i=n-\lfloor\sqrt{n}\rfloor+1}^{n}\frac{\sqrt{(n-i)i}}{n^{3/2}} = O(1).$$
Therefore, we can add the terms back in, so we have
$$\sum_{i=\lfloor\sqrt{n}\rfloor+1}^{n-\lfloor\sqrt{n}\rfloor} E_i = \frac{2}{\sqrt{2\pi}}\left(1 + O\!\left(\frac{1}{\sqrt{n}}\right)\right)\frac{1}{n^{3/2}}\sum_{i=1}^{n}\sqrt{(n-i)i} + O(1).$$
The remaining sum we approximate by an integral. Hence
$$\sum_{i=0}^{n}\sqrt{(n-i)i} = \int_0^n \sqrt{x(n-x)}\, dx + \Delta,$$
with $|\Delta| \le \sum_{i=0}^{n}\max_{i \le x < i+1}|f(x) - f(i)|$. Observe that the function $f(x) = \sqrt{x(n-x)}$ is monotone increasing over the interval $[0, n/2]$ and monotone decreasing over the interval $[n/2, n]$. Hence the error term $|\Delta|$ telescopes and $|\Delta| = O(n)$.
Notice that
$$\int_0^n \sqrt{x(n-x)}\, dx = n^2\,\Gamma\!\left(\frac{1}{2}+1\right)\frac{\sqrt{\pi}}{4}.$$
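This value can be obtained, for instance, by the substitution $x = nu$ and the Beta integral:
$$\int_0^n \sqrt{x(n-x)}\, dx = n^2\int_0^1\sqrt{u(1-u)}\, du = n^2 B\!\left(\tfrac{3}{2},\tfrac{3}{2}\right) = n^2\,\frac{\Gamma\left(\frac{3}{2}\right)^2}{\Gamma(3)} = \frac{\pi n^2}{8} = n^2\,\Gamma\!\left(\tfrac{1}{2}+1\right)\frac{\sqrt{\pi}}{4}.$$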
Putting all together we deduce that the second term contributes
$$\sum_{i=\lfloor\sqrt{n}\rfloor+1}^{n-\lfloor\sqrt{n}\rfloor} E_i = \frac{2}{\sqrt{2\pi}}\left(1 + O\!\left(\frac{1}{\sqrt{n}}\right)\right)\frac{1}{n^{3/2}}\left(n^2\,\Gamma\!\left(\frac{1}{2}+1\right)\frac{\sqrt{\pi}}{4} + O(n)\right) + O(1) = \frac{\Gamma\left(\frac{1}{2}+1\right)}{2\sqrt{2}}\sqrt{n} + O(1).$$
This easily completes the proof of Lemma 4. □
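The asymptotics in Lemma 4 is also easy to examine numerically. The following short Mathematica sketch (the name lhsLemma4 is chosen here only for illustration) compares the exact left-hand side of Lemma 4 with the asymptotic value $\frac{\Gamma\left(\frac{1}{2}+1\right)}{2\sqrt{2}}\sqrt{n}$ for a few values of $n$.

(* exact left-hand side of Lemma 4, evaluated in machine precision *)
lhsLemma4[n_] := Sum[(2. i (n - i + 1/2)/(n (n + 1))) Binomial[n, i]*
   N[i/n - 1/(2 n)]^i*N[1 - i/n + 1/(2 n)]^(n - i), {i, 1, n}];
(* compare with the asymptotic value from Lemma 4 *)
Table[{n, lhsLemma4[n], N[(Gamma[1/2 + 1]/(2 Sqrt[2])) Sqrt[n]]}, {n, {100, 400, 900}}]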
Theorem 5. Assume that $n$ mobile sensors are thrown uniformly and independently at random in the unit interval. The expected sum of displacements of algorithm $MV(n)$ is $\frac{\Gamma\left(\frac{1}{2}+1\right)}{2\sqrt{2}}\sqrt{n} + O(1)$.
Proof. First of all we observe that, up to an $O(1)$ term, the expected sum of displacements equals
$$\sum_{i=1}^{n}\frac{2i(n-i+1/2)}{n(n+1)}\binom{n}{i}\, t_i^{\,i}(1 - t_i)^{n-i}$$
(see the proof of Theorem 1). The result now follows from Lemma 4. □
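Numerically, the constant in Theorem 5 is $\frac{\Gamma\left(\frac{1}{2}+1\right)}{2\sqrt{2}} = \frac{\sqrt{\pi}}{4\sqrt{2}} \approx 0.3133$, so the expected total displacement of $MV(n)$ grows like $0.3133\sqrt{n}$; this is the curve plotted against the simulation results in the next section.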
4 Mathematica code
The following Mathematica code can be used to simulate Algorithm 1.
(* CH2[n]: total displacement of n random sensors moved to the anchor positions *)
CH2[n_] := Block[{M1, L1, M2, L2 = 0},
  M1 = Sort[RandomReal[{0, 1}, n]];
  L2 = L2 + Sum[Abs[M1[[i]] - (i/n - 1/(2*n))], {i, 1, n}]
];
(* SR[n, IP]: average total displacement over IP independent repetitions *)
SR[n_, IP_] := Mean[Table[CH2[n], {IP}]];
(* points[m]: simulated data {n, SR[n, 20]} for n = 16, 25, ..., 3600, repeated m times *)
points[m_] := Block[{data = {}},
  Do[data =
    Join[data, Table[{n^2, SR[n^2, 20]}, {n, 4, 60, 1}]],
    {counter, 1, m}];
  data
];
(* theoretical curve from Theorem 5 with the simulated points overlaid *)
Plot[{(Gamma[1/2 + 1]/(2*Sqrt[2]))*Sqrt[n]}, {n, 2, 3600},
  PlotStyle -> Directive[Thick, Black],
  Epilog -> {PointSize[Medium], Point[points[10]]},
  TicksStyle -> Directive[Black, 35], AxesOrigin -> {0, 0},
  AxesStyle -> Directive[Thick]]
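Evaluating the Plot expression after the three definitions draws the theoretical curve $\frac{\Gamma\left(\frac{1}{2}+1\right)}{2\sqrt{2}}\sqrt{n}$ for $2 \le n \le 3600$ and overlays the points produced by points[10], i.e., ten batches of simulated average displacements with 20 repetitions per value of $n$.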
References
1. W. Feller. An Introduction to Probability Theory and its Applications, volume 1. John Wiley,
NY, 1968.
2. R. Graham, D. Knuth, and O. Patashnik. Concrete Mathematics: A Foundation for Computer Science. Addison-Wesley, Reading, MA, 1994.
3. E. Kranakis, D. Krizanc, O. Morales-Ponce, L. Narayanan, J. Opatrny, and S. Shende. Expected sum and maximum of displacement of random sensors for coverage of a domain. In Proceedings of the 25th ACM Symposium on Parallelism in Algorithms and Architectures, pages 73–82. ACM, 2013.
4. NIST Digital Library of Mathematical Functions. http://dlmf.nist.gov/8.17.