A Byzantine Attack Defender: the Conditional Frequency Check

Xiaofan He†, Huaiyu Dai†, and Peng Ning‡
† Department of ECE, North Carolina State University, USA. Email: {xhe6,hdai}@ncsu.edu
‡ Department of CSC, North Carolina State University, USA. Email: [email protected]
Abstract—Collaborative spectrum sensing is vulnerable to the Byzantine attack. Existing reputation-based countermeasures become ineffective when malicious users dominate the network. Moreover, few existing methods fully exploit the Markov property of the spectrum states to detect sensors' statistical misbehavior. In this paper, a new malicious user detection method based on two proposed Conditional Frequency Check (CFC) statistics is developed under a Markovian spectrum model. With the assistance of one trusted sensor, the proposed method achieves high malicious user detection accuracy in the presence of an arbitrary percentage of malicious users, and thus significantly improves collaborative spectrum sensing performance.
I. INTRODUCTION
Various collaborative spectrum sensing schemes have been
proposed to overcome the unreliability of single user spectrum
sensing [1]. Along with all the benefits, collaborative spectrum
sensing also introduces security vulnerabilities [2], among which
the Byzantine attack [3] (a.k.a. spectrum sensing data falsification (SSDF) attack [4]) is the focus of this paper.
Many existing defenses against Byzantine attacks are reputation based, e.g., [5–8]. In this type of method, lower reputations are assigned to sensors that deviate from the global decision, so as to mitigate the negative effects of the malicious sensors. However, the underlying assumption is that the global decision is correct, which may not be true when malicious sensors dominate the network. In fact, it has been shown in [3], [9] that when the Byzantine attackers in the network exceed a certain fraction, such reputation-based methods become completely ineffective.¹ Non-reputation-based approaches have also been proposed, such as [10–12]. However, these methods still rely on the correctness of the global decision and hence only investigate scenarios where a small fraction of users are malicious. When the majority are not trustworthy, approaches independent of the global decision are more suitable. Such works include the prior-probability aided method proposed in [13] and the user-centric misbehavior detection presented in [14].
In practice, there is usually memory in the evolution of the spectrum state, and the spectrum occupancy is more accurately modeled by a Markov model. Most existing methods either consider the i.i.d. spectrum state model for simplicity
(e.g., [10, 12, 14]), or focus their analysis on a single time slot and ignore the correlation between the spectrum states at consecutive time slots (e.g., [3, 5–8, 11]). In [13], the Markov property of the spectrum is incorporated into the malicious user detection algorithm; however, it is generally difficult to obtain the required prior knowledge of the true spectrum in practice.

This work was supported in part by the National Science Foundation under Grants CCF-0830462, ECCS-1002258, and CNS-1016260.
¹ When all sensors have the same spectrum sensing capability, reputation-based methods cannot mitigate the effect of Byzantine attacks if more than 50% of the sensors are malicious [3].
In this paper, a global-decision-independent method, the Conditional Frequency Check (CFC), is proposed based on a Markovian spectrum model to combat Byzantine attacks. In particular, two CFC statistics, which exploit the second-order property of the Markov model, are constructed. The corresponding analysis proves that these two CFC statistics, together with an auxiliary Hamming distance check, are capable of detecting any sensor that misbehaves. In addition, two consistent histogram estimators based on the history of sensors' reports are developed for these two CFC statistics. With the aid of one trusted sensor, the proposed method can detect any malicious sensor with high accuracy regardless of the proportion of malicious sensors in the group, without requiring any prior knowledge of the spectrum and sensing models.
The rest of this paper is organized as follows. Section
II formulates the problem. The proposed malicious sensor
detection method and the corresponding theoretical analysis
are presented in Section III. Some supporting simulation
results are presented in Section IV, and Section V concludes
the paper.
II. PROBLEM FORMULATION
In this paper, the following scenario is considered: 1) The true spectrum has two states, i.e., 0 (idle) and 1 (occupied), and follows a homogeneous Markov model with state transition matrix A = [aij]2×2 (i, j ∈ {0, 1}), where aij ≜ Pr(st+1 = j | st = i) and st denotes the true spectrum state at time t. The stationary state distribution is denoted by π = [π0, π1], which satisfies πA = π. In addition, it is assumed that the Markov chain of spectrum states is in equilibrium. 2) One trusted honest sensor exists and is known to the fusion center. 3) All sensors, including malicious ones, have the same spectrum sensing capability, i.e., identical detection probabilities Pd and false alarm probabilities Pfa.² 4) An honest sensor sends its local sensing result directly to the fusion center. 5) A malicious sensor, however, tampers with its local inference before reporting to the fusion center. In particular, she flips a local inference from 0 to 1 and from 1 to 0 with probabilities ϕ01 and ϕ10, respectively. The flipping probabilities need not be the same for different malicious sensors. From the fusion center's viewpoint, the equivalent detection and false alarm probabilities of a malicious sensor with flipping probabilities ϕ ≜ [ϕ01, ϕ10] are given by

Pd^(M) = (1 − ϕ10) Pd + ϕ01 (1 − Pd),    (1)

Pfa^(M) = (1 − ϕ10) Pfa + ϕ01 (1 − Pfa).    (2)

² This is a common assumption in the literature (e.g., [3], [10]). Defense against more intelligent and powerful attackers remains future work.
If a malicious sensor attacks, i.e., {ϕ01, ϕ10} ≠ {0, 0}, her statistical behavior will deviate from that of an honest sensor. The objective of this paper is to detect the malicious sensors by observing their statistical deviations.
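For concreteness, the following minimal Python sketch simulates the report model described above under the stated assumptions. The helper names (simulate_spectrum, sensor_reports), the use of NumPy, and the random-number handling are illustrative choices, not part of the paper.

```python
import numpy as np

def simulate_spectrum(T, A, rng):
    """Length-T trajectory of the two-state Markov spectrum, started in equilibrium."""
    a01, a10 = A[0][1], A[1][0]
    pi1 = a01 / (a01 + a10)                    # stationary probability of state 1
    s = np.empty(T, dtype=int)
    s[0] = rng.random() < pi1
    for t in range(1, T):
        s[t] = rng.random() < A[s[t - 1]][1]   # move to state 1 w.p. a_{s,1}
    return s

def sensor_reports(s, Pd, Pfa, phi01=0.0, phi10=0.0, rng=None):
    """Report sequence of one sensor observing the spectrum trajectory s.

    Local sensing reports 1 w.p. Pd when the spectrum is occupied and w.p. Pfa when
    it is idle; a malicious sensor then flips 0 -> 1 w.p. phi01 and 1 -> 0 w.p. phi10,
    which yields the equivalent probabilities in Eqs. (1)-(2).
    """
    rng = np.random.default_rng() if rng is None else rng
    local = np.where(s == 1, rng.random(len(s)) < Pd, rng.random(len(s)) < Pfa)
    u = rng.random(len(s))
    return np.where(local, u >= phi10, u < phi01).astype(int)
```

An honest sensor corresponds to phi01 = phi10 = 0; by construction, a malicious sensor's reports are statistically equivalent to honest sensing performed with the modified probabilities in (1) and (2).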
III. THE PROPOSED METHOD

The proposed malicious sensor detection method consists of two phases: 1) the conditional frequency check (CFC), and 2) an auxiliary Hamming distance check (HDC).

A. Conditional Frequency Check

According to the preceding model, a malicious sensor has two degrees of freedom, i.e., the two parameters ϕ01 and ϕ10, in launching an attack. The conventional frequency check (FC), which detects malicious sensors by computing their frequencies of reporting 1 [10], enforces only one constraint on the attacker's behavior, as indicated in Eq. (6) below. This is insufficient to prevent the malicious sensor from attacking. However, when the true spectrum states are Markovian, the proposed CFC can enforce two constraints by exploiting the correlation between consecutive spectrum states, and consequently identify any flipping attack easily. In particular, the CFC consists of the two statistics defined below.

Definition 1: The two conditional frequency check statistics of a sensor are defined as Ψ1 ≜ Pr(rt = 1 | rt−1 = 1) and Ψ0 ≜ Pr(rt = 0 | rt−1 = 0), respectively, where rt denotes the sensor's report at time t.

According to the definitions, these two statistics are related to the model parameters as

Ψ1 = [π0 a00 Pfa² + (π0 a01 + π1 a10) Pd Pfa + π1 a11 Pd²] / (π0 Pfa + π1 Pd),    (3)

Ψ0 = [π0 a00 (1 − Pfa)² + (π0 a01 + π1 a10)(1 − Pd)(1 − Pfa) + π1 a11 (1 − Pd)²] / (π0 (1 − Pfa) + π1 (1 − Pd)).    (4)

In the CFC, the fusion center evaluates Ψ1 and Ψ0 for every sensor and compares the resulting values with those of the trusted sensor. If the values are sufficiently different, the corresponding sensor is identified as malicious. In the following, the effectiveness of this statistical check is demonstrated through two analytical results, followed by a practical approach to estimating these two statistics that eliminates the requirement of any prior knowledge about the sensing and spectrum models.
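Since (3) and (4) are in closed form, they can be evaluated directly once A, Pd, and Pfa are known. The sketch below is one possible implementation; the helper name cfc_statistics is illustrative.

```python
def cfc_statistics(A, Pd, Pfa):
    """Closed-form CFC statistics (Psi1, Psi0) from Eqs. (3) and (4)."""
    a00, a01 = A[0][0], A[0][1]
    a10, a11 = A[1][0], A[1][1]
    pi1 = a01 / (a01 + a10)          # stationary probability of state 1
    pi0 = 1.0 - pi1
    psi1 = (pi0 * a00 * Pfa**2 + (pi0 * a01 + pi1 * a10) * Pd * Pfa
            + pi1 * a11 * Pd**2) / (pi0 * Pfa + pi1 * Pd)
    psi0 = (pi0 * a00 * (1 - Pfa)**2
            + (pi0 * a01 + pi1 * a10) * (1 - Pd) * (1 - Pfa)
            + pi1 * a11 * (1 - Pd)**2) / (pi0 * (1 - Pfa) + pi1 * (1 - Pd))
    return psi1, psi0
```

For example, cfc_statistics([[0.8, 0.2], [0.2, 0.8]], 0.9, 0.1) returns the CFC statistics of an honest sensor under the parameter values used later in Fig. 1.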
Proposition 1: For the Markov spectrum model considered in this paper, any sensor that survives the CFC can pass the FC.

Proof: A malicious sensor can pass the FC as long as Pr(rt^(M) = 1) = Pr(rt^(tr) = 1), where rt^(M) (rt^(tr)) denotes the malicious (trusted) sensor's report at time t. However, she needs to achieve Ψ0^(M) = Ψ0^(tr) and Ψ1^(M) = Ψ1^(tr) to survive the CFC.

Note that Pr(rt^(tr) = i) = Pr(rt−1^(tr) = i) (i ∈ {0, 1}) when the true spectrum states are in equilibrium, and Pr(rt^(tr) = 1) = Ψ1^(tr) Pr(rt−1^(tr) = 1) + (1 − Ψ0^(tr)) Pr(rt−1^(tr) = 0). Consequently, for any sensor that survives the CFC, we have

Pr(rt^(M) = 1) = (1 − Ψ0^(M)) / (2 − Ψ1^(M) − Ψ0^(M)) = (1 − Ψ0^(tr)) / (2 − Ψ1^(tr) − Ψ0^(tr)) = Pr(rt^(tr) = 1),    (5)

which implies that this sensor can also pass the FC.
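As a quick sanity check, the stationarity identity used in the proof, Pr(rt = 1) = (1 − Ψ0)/(2 − Ψ1 − Ψ0) = π0 Pfa + π1 Pd for an honest sensor, can be verified numerically. The snippet below continues the previous sketch and reuses cfc_statistics; the test parameters are arbitrary.

```python
# Numerical check of the stationarity identity behind (5) for an honest sensor:
# (1 - Psi0) / (2 - Psi1 - Psi0) should equal pi0*Pfa + pi1*Pd.
A, Pd, Pfa = [[0.8, 0.2], [0.6, 0.4]], 0.9, 0.1      # arbitrary test parameters
psi1, psi0 = cfc_statistics(A, Pd, Pfa)
pi1 = A[0][1] / (A[0][1] + A[1][0])
pi0 = 1.0 - pi1
assert abs((1 - psi0) / (2 - psi1 - psi0) - (pi0 * Pfa + pi1 * Pd)) < 1e-9
```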
Proposition 2: If (a10 Pfa + a01 Pd)/(a10 + a01) ≠ 1/2, a malicious sensor can never pass the CFC if she attacks, i.e., {ϕ01, ϕ10} ≠ {0, 0}. If (a10 Pfa + a01 Pd)/(a10 + a01) = 1/2, an active malicious sensor can pass the CFC only if she sets {ϕ01, ϕ10} to {1, 1}.

Proof: According to Proposition 1, passing the FC is a necessary condition for a malicious sensor to pass the CFC. Thus, ϕ must satisfy π0 Pfa^(M) + π1 Pd^(M) = Pr(rt^(M) = 1) = Pr(rt^(tr) = 1) = π0 Pfa + π1 Pd. Considering (1) and (2), this implies the following linear constraint on ϕ01 and ϕ10:

ϕ01 (π0 (1 − Pfa) + π1 (1 − Pd)) = ϕ10 (π0 Pfa + π1 Pd).    (6)

When (6) holds, define g1(ϕ10) ≜ (π0 Pfa + π1 Pd)·(Ψ1^(M) − Ψ1^(tr)). After some algebra, it can be shown that

g1(ϕ10) = ϕ10² κ1² [π0 π1² a00 − (π1 a10 + π0 a01) π1 π0 + π1 π0² a11] + ϕ10 κ1 [−2 π1 π0 a00 Pfa + (π1 a10 + π0 a01)(Pfa π0 − Pd π1) + 2 π1 π0 a11 Pd],    (7)

where κ1 = (Pfa − Pd) / (π0 (1 − Pfa) + π1 (1 − Pd)).

Note that the malicious sensor can pass the CFC only if she can find a ϕ* = [ϕ01*, ϕ10*] that satisfies both g1(ϕ10*) = 0 (i.e., Ψ1^(M) = Ψ1^(tr)) and (6). Denote by ϕ10* the non-zero root of g1(ϕ10) = 0, which can be found as

ϕ10* = −ξ2 / (κ1 ξ1),    (8)

where ξ1 = π0 π1² a00 − (π1 a10 + π0 a01) π1 π0 + π1 π0² a11 and ξ2 = −2 π1 π0 a00 Pfa + (π1 a10 + π0 a01)(π0 Pfa − π1 Pd) + 2 π1 π0 a11 Pd. According to (6) and (8), ϕ01* is given as

ϕ01* = −ξ2 / (κ0 ξ1),    (9)

where κ0 = (Pfa − Pd) / (π0 Pfa + π1 Pd).

Considering the relation πA = π, (8) and (9) can be simplified to

ϕ10* = 2 − 2 (a10 Pfa + a01 Pd) / (a10 + a01),    (10)

ϕ01* = 2 (a10 Pfa + a01 Pd) / (a10 + a01).    (11)

As a direct consequence of (10) and (11), ϕ10* + ϕ01* = 2 must hold if the malicious sensor is to pass the CFC. On the other hand, 0 ≤ ϕ01*, ϕ10* ≤ 1 by definition. These two conditions imply that {ϕ01*, ϕ10*} exists only if 2 (a10 Pfa + a01 Pd)/(a10 + a01) = 1, and the corresponding {ϕ01*, ϕ10*} then equals {1, 1}. Otherwise, there is no valid non-zero solution of both g1(ϕ10) = 0 and (6); that is, the malicious sensor cannot pass the CFC if she attacks.
Define the error function e(ϕ) ≜ ||Ψ^(tr) − Ψ^(M)||², where Ψ^(tr) ≜ [Ψ1^(tr), Ψ0^(tr)] and Ψ^(M) ≜ [Ψ1^(M), Ψ0^(M)] are the CFC statistics of the trusted and the malicious sensor, respectively. A typical figure of e(ϕ) when the condition (a10 Pfa + a01 Pd)/(a10 + a01) = 1/2 holds is shown in Fig. 1. As can be seen, {1, 1} is the only blind spot of the CFC. In contrast, the conventional FC only enforces the linear constraint (6) on the attacker, thus forming a blind line as indicated in Fig. 1.

Fig. 1. Typical graph of e(ϕ) = ||Ψ^(tr) − Ψ^(M)||² over (ϕ10, ϕ01) when the condition (a10 Pfa + a01 Pd)/(a10 + a01) = 1/2 holds (Pd = 0.9, Pfa = 0.1, a01 = 0.2, a10 = 0.2); the blind line corresponding to the linear constraint enforced by the frequency check is also marked.
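The blind spot can also be illustrated numerically. The sketch below reuses cfc_statistics from the earlier sketch; blind_spot and equivalent_probs are illustrative helpers that evaluate (10)-(11) and (1)-(2). It checks that in the symmetric case of Fig. 1 the only feasible attack {1, 1} reproduces the trusted sensor's CFC statistics, whereas in the second simulation case of Section IV the candidate solution falls outside [0, 1]².

```python
def blind_spot(A, Pd, Pfa):
    """Candidate flipping probabilities (phi01*, phi10*) from Eqs. (10)-(11)."""
    a01, a10 = A[0][1], A[1][0]
    c = (a10 * Pfa + a01 * Pd) / (a10 + a01)
    return 2 * c, 2 - 2 * c                       # (phi01*, phi10*)

def equivalent_probs(Pd, Pfa, phi01, phi10):
    """Equivalent detection / false-alarm probabilities of a flipping attacker, Eqs. (1)-(2)."""
    return (1 - phi10) * Pd + phi01 * (1 - Pd), (1 - phi10) * Pfa + phi01 * (1 - Pfa)

# Symmetric case of Fig. 1: the condition equals 1/2, the only feasible solution is {1, 1},
# and there the attacker's CFC statistics coincide with the trusted sensor's.
A, Pd, Pfa = [[0.8, 0.2], [0.2, 0.8]], 0.9, 0.1
phi01s, phi10s = blind_spot(A, Pd, Pfa)
assert abs(phi01s - 1.0) < 1e-9 and abs(phi10s - 1.0) < 1e-9
PdM, PfaM = equivalent_probs(Pd, Pfa, phi01s, phi10s)
assert all(abs(x - y) < 1e-9
           for x, y in zip(cfc_statistics(A, Pd, Pfa), cfc_statistics(A, PdM, PfaM)))

# Second simulation case of Section IV: the condition is not 1/2 and phi10* > 1,
# so no feasible non-zero attack passes the CFC.
_, phi10s = blind_spot([[0.8, 0.2], [0.4, 0.6]], Pd, Pfa)
assert phi10s > 1.0
```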
Definition 2: For any sensor, two histogram estimators for Ψ1 and Ψ0 are defined as

Ψ̂1 ≜ ( Σ_{t=1}^{T−1} δ_{rt+1,1} δ_{rt,1} ) / ( Σ_{t=1}^{T−1} δ_{rt,1} ),    (12)

Ψ̂0 ≜ ( Σ_{t=1}^{T−1} δ_{rt+1,0} δ_{rt,0} ) / ( Σ_{t=1}^{T−1} δ_{rt,0} ),    (13)

respectively, where δ_{i,j} = 1 iff i = j, and T is the detection window length.

Proposition 3: The two estimators Ψ̂1 and Ψ̂0 converge to Ψ1 and Ψ0, respectively, as T → ∞.

Proof: The proof is given in the Appendix.

Remark 1: According to Proposition 3, the CFC statistics of all honest sensors (including the trusted one) converge to the same value, i.e., Ψ^(tr). On the other hand, the CFC statistics of any malicious sensor converge to some value Ψ^(M) (depending on its ϕ), which is different from Ψ^(tr) according to Proposition 2. Therefore, any sensor whose CFC statistics differ from those of the trusted sensor is malicious.

In practice, the values of the two CFC statistics of any two honest sensors may differ due to the finite detection window length T. For this reason, a sensor is identified as malicious only when the difference between its CFC statistics and those of the trusted sensor is larger than a pre-specified threshold βCFC. The proposed CFC procedure with threshold βCFC is summarized in Algorithm 1.

Algorithm 1 The CFC procedure
  Compute Ψ̂1^(tr) and Ψ̂0^(tr) for the trusted sensor according to (12) and (13).
  for sensor i do
    Compute Ψ̂1^(i) and Ψ̂0^(i) according to (12) and (13).
    if ||Ψ̂^(tr) − Ψ̂^(i)||² > βCFC then
      Classify sensor i as malicious.
    end if
  end for
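One possible implementation of the estimators (12)-(13) and of Algorithm 1 is sketched below, assuming the reports are 0/1 sequences and using the squared Euclidean distance for the comparison, in line with the error function e(·) defined earlier; the helper names cfc_estimates and cfc_check are illustrative.

```python
import numpy as np

def cfc_estimates(r):
    """Histogram estimators (Psi1_hat, Psi0_hat) of Eqs. (12)-(13) for a 0/1 report sequence r."""
    r = np.asarray(r)
    prev, nxt = r[:-1], r[1:]
    # The max(..., 1) guards against a sequence that never visits a state;
    # this corner case is not discussed in the paper.
    psi1_hat = np.sum((prev == 1) & (nxt == 1)) / max(np.sum(prev == 1), 1)
    psi0_hat = np.sum((prev == 0) & (nxt == 0)) / max(np.sum(prev == 0), 1)
    return psi1_hat, psi0_hat

def cfc_check(reports, trusted_idx, beta_cfc):
    """Algorithm 1: flag sensors whose CFC estimates deviate from the trusted sensor's.

    reports : list of 0/1 report sequences, one per sensor.
    Returns the set of sensor indices classified as malicious by the CFC.
    """
    ref = np.array(cfc_estimates(reports[trusted_idx]))
    flagged = set()
    for i in range(len(reports)):
        if i == trusted_idx:
            continue
        est = np.array(cfc_estimates(reports[i]))
        if np.sum((ref - est) ** 2) > beta_cfc:   # squared Euclidean distance, as in e(.)
            flagged.add(i)
    return flagged
```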
B. The Hamming Distance Check

As shown in Fig. 1, the CFC fails to detect a malicious sensor that uses ϕ = {1, 1} when (a10 Pfa + a01 Pd)/(a10 + a01) = 1/2. This may happen when a10 = a01 and Pd + Pfa = 1. However, in this case, a large normalized Hamming distance between the report sequences of a malicious sensor i and the trusted sensor, defined as the fraction of time slots in which the two reports differ, dh(i, tr) ≜ (1/T) Σ_{t=1}^{T} (1 − δ_{rt^(i), rt^(tr)}), is to be expected because of the high local inference flipping probability at the malicious sensor. Based on this observation, sensor i is identified as malicious if dh(i, tr) is greater than a pre-specified threshold βHDC.
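A matching sketch of the auxiliary HDC is given below, with dh(i, tr) computed as the fraction of time slots in which sensor i's report differs from the trusted sensor's; the helper name hamming_check is illustrative.

```python
import numpy as np

def hamming_check(reports, trusted_idx, beta_hdc):
    """Auxiliary HDC: flag sensors whose reports disagree too often with the trusted sensor's."""
    ref = np.asarray(reports[trusted_idx])
    flagged = set()
    for i in range(len(reports)):
        if i == trusted_idx:
            continue
        d_h = np.mean(np.asarray(reports[i]) != ref)   # normalized Hamming distance
        if d_h > beta_hdc:
            flagged.add(i)
    return flagged
```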
IV. SIMULATIONS

Two different cases are simulated. In both cases Pd = 0.9 and Pfa = 0.1; in the first case A = [0.8, 0.2; 0.2, 0.8], and in the second case A = [0.8, 0.2; 0.4, 0.6]. Thus, the condition (a10 Pfa + a01 Pd)/(a10 + a01) = 1/2 is satisfied in the first case but not in the second. Every malicious sensor randomly selects its own {ϕ01, ϕ10} according to a uniform distribution over (0, 1]². The thresholds are set as βCFC = 0.2 and βHDC = 0.3. There are nH = 8 honest sensors and nM = 13 malicious sensors, i.e., the malicious sensors dominate the network. The detection window length is T = 100 time slots. At the fusion center, the majority voting rule is used.
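Putting the pieces together, the following sketch runs one realization of the first simulation case, reusing the helpers from the earlier sketches (simulate_spectrum, sensor_reports, cfc_check, hamming_check). The seed, the choice of sensor 0 as the trusted sensor, and drawing the flipping probabilities with uniform(0, 1) rather than over (0, 1] are illustrative simplifications.

```python
import numpy as np

rng = np.random.default_rng(0)
A, Pd, Pfa, T = [[0.8, 0.2], [0.2, 0.8]], 0.9, 0.1, 100
nH, nM = 8, 13
beta_cfc, beta_hdc = 0.2, 0.3

s = simulate_spectrum(T, A, rng)       # one spectrum trajectory shared by all sensors
phis = [(0.0, 0.0)] * nH + [tuple(rng.uniform(0.0, 1.0, 2)) for _ in range(nM)]
reports = [sensor_reports(s, Pd, Pfa, p01, p10, rng) for (p01, p10) in phis]

# Sensor 0 (honest) plays the role of the trusted sensor known to the fusion center.
flagged = cfc_check(reports, 0, beta_cfc) | hamming_check(reports, 0, beta_hdc)
kept = [i for i in range(len(reports)) if i not in flagged]

# Majority voting over the sensors that survive both checks.
decisions = (np.mean([reports[i] for i in kept], axis=0) >= 0.5).astype(int)
print("flagged sensors:", sorted(flagged), " sensing error:", np.mean(decisions != s))
```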
Simulation results of a typical run of the first case are shown in Figs. 2–4. In particular, by comparing Fig. 2 and Fig. 3, it can be seen that two malicious sensors whose flipping probabilities ϕ01 and ϕ10 are close to 1 successfully pass the CFC. However, these two malicious sensors fail to pass the subsequent HDC. Also, comparing Fig. 3 and Fig. 4 shows that one malicious user survives both the CFC and the HDC. Further examination reveals that the flipping probabilities of this malicious user are low: ϕ01 ≈ 0 and ϕ10 ≈ 0.1. Although this malicious sensor is not detected, its negative influence on the spectrum sensing result at the fusion center is negligible.

Fig. 2. Malicious sensor detection result using the CFC (sensors plotted in the (Ψ1, Ψ0) plane; legend: honest, malicious, trusted).

Fig. 3. Malicious sensor detection result using the CFC and HDC (annotations mark the malicious sensors with flipping probabilities close to 1 and the single misclassified sensor).

Fig. 4. True sensor types.

Table I summarizes the simulation results over 100 Monte Carlo runs for both cases. The proposed method achieves nearly perfect sensing results in both cases, i.e., Pd = 0.9956 and Pfa = 0.0006 in the first case, and Pd = 0.9958 and Pfa = 0.0008 in the second case, which is significantly better than the sensing performance of both the single trusted sensor and of all sensors combined without malicious sensor detection. Besides, the proposed algorithm also provides high malicious sensor detection accuracy (η > 95%) in both cases.

TABLE I
AVERAGE PERFORMANCE COMPARISON OVER 100 RUNS

                     No detection   Trusted only   Proposed
Pd^FC  (case one)       0.9448         0.8982       0.9956
Pfa^FC (case one)       0.0562         0.0990       0.0006
η      (case one)         –              –          95.09%
Pd^FC  (case two)       0.9457         0.9015       0.9958
Pfa^FC (case two)       0.0550         0.0994       0.0008
η      (case two)         –              –          95.03%

V. CONCLUSIONS

A new method consisting of two CFC statistics and an auxiliary HDC procedure has been proposed in this paper for malicious user detection under a Markov spectrum model. By using the two consistent histogram estimators of the CFC statistics, the proposed method does not require any prior knowledge of the spectrum and sensing models for malicious sensor detection. Both theoretical analysis and simulation results show that the proposed method, with the assistance of a trusted sensor, achieves high malicious user detection accuracy, and thus significantly improves collaborative spectrum sensing performance. The proposed method does not rely on the global decision and is therefore effective even when malicious sensors dominate the network.
APPENDIX A
PROOF OF PROPOSITION 3

Proof: It can be seen that Ψ̂1 = (1/n1) Σ_{i=1}^{n1} X_{ti}, in which X_{ti} is defined as

X_{ti} = 1 if r_{ti+1} = 1 (given r_{ti} = 1), and X_{ti} = 0 if r_{ti+1} = 0 (given r_{ti} = 1),    (14)

where ti is the time slot of the i-th reported 1 of the sensor. To prove the convergence of Ψ̂1, we need to prove: 1) E(Ψ̂1) = Ψ1, which is simple to show by noticing that E(Xt) = Pr(rt+1 = 1 | rt = 1) = Ψ1; and 2) lim_{T→∞} Var(Ψ̂1) = 0.

In general, the Xt's are not independent due to the correlation between consecutive true spectrum states in the Markov model. Thus, the central limit theorem cannot be applied. However, we will show that the second fact is true by first proving that the correlation between Xi and Xj (i > j) vanishes as (i − j) approaches infinity. That is,

lim_{(i−j)→∞} E(Xi Xj) = E(Xi) E(Xj).    (15)

Note that

E(Xi Xj) = Pr(ri+1 = 1, rj+1 = 1 | ri = 1, rj = 1)
         = Pr(rj+1 = 1 | rj = 1) Pr(ri+1 = 1 | ri = 1, rj = 1)
         = Pr(rj+1 = 1 | rj = 1) [Pr(si+1 = 1 | ri = 1, rj = 1) Pd + Pr(si+1 = 0 | ri = 1, rj = 1) Pfa]

and

E(Xj) E(Xi) = Pr(rj+1 = 1 | rj = 1) Pr(ri+1 = 1 | ri = 1)
            = Pr(rj+1 = 1 | rj = 1) [Pr(si+1 = 1 | ri = 1) Pd + Pr(si+1 = 0 | ri = 1) Pfa].

Comparing the two preceding equations, it can be seen that, to prove (15), it is sufficient to prove lim_{(i−j)→∞} Pr(si+1 = 1 | ri = 1, rj = 1) = Pr(si+1 = 1 | ri = 1).

Note that Pr(si+1 = 1 | ri = 1) is given as

Pr(si+1 = 1 | ri = 1) = [Pd Pr(si+1 = 1, si = 1) + Pfa Pr(si+1 = 1, si = 0)] / [Pd Pr(si = 1) + Pfa Pr(si = 0)]
                      = (π1 Pd a11 + π0 Pfa a01) / (π1 Pd + π0 Pfa),    (16)

and Pr(si+1 = 1 | ri = 1, rj = 1) is given as

Pr(si+1 = 1 | ri = 1, rj = 1)
  = [Pd² Pr(si+1 = 1, si = 1, sj = 1) + Pd Pfa Pr(si+1 = 1, si = 0, sj = 1) + Pd Pfa Pr(si+1 = 1, si = 1, sj = 0) + Pfa² Pr(si+1 = 1, si = 0, sj = 0)]
    / [Pd² Pr(si = 1, sj = 1) + Pd Pfa Pr(si = 0, sj = 1) + Pd Pfa Pr(si = 1, sj = 0) + Pfa² Pr(si = 0, sj = 0)]
  = [π1 (Pd² p^(11)_{i−j} a11 + Pd Pfa (1 − p^(11)_{i−j}) a01) + π0 (Pfa² p^(00)_{i−j} a01 + Pd Pfa (1 − p^(00)_{i−j}) a11)]
    / [π1 (Pd² p^(11)_{i−j} + Pd Pfa (1 − p^(11)_{i−j})) + π0 (Pfa² p^(00)_{i−j} + Pd Pfa (1 − p^(00)_{i−j}))],    (17)

where p^(11)_n ≜ Pr(sn+j = 1 | sj = 1) and p^(00)_n ≜ Pr(sn+j = 0 | sj = 0). According to the definition, the following recursive relation holds for p^(11)_n:

p^(11)_n = Pr(sj+n = 1 | sj = 1)
         = Pr(sj+n = 1, sj+n−1 = 1 | sj = 1) + Pr(sj+n = 1, sj+n−1 = 0 | sj = 1)
         = a11 p^(11)_{n−1} + a01 (1 − p^(11)_{n−1}).    (18)

Consequently, p^(11)_∞ = a01 / (1 − a11 + a01). Similarly, we have p^(00)_∞ = a10 / (1 − a00 + a10). Substituting these two expressions into (17), it can be verified that Pr(si+1 = 1 | ri = 1, rj = 1) → (π1 Pd a11 + π0 Pfa a01) / (π1 Pd + π0 Pfa) = Pr(si+1 = 1 | ri = 1) as i − j approaches infinity. Therefore (15) holds.

Now, we use (15) to prove that lim_{T→∞} Var(Ψ̂1) = 0. For any positive δ, there exists Kδ such that |Cov(Xi, Xj)| < δ/2 when |i − j| > Kδ, due to (15). Also, given Kδ, there exists Nδ such that 4Kδ < δNδ. Then, for any n1 > Nδ, we have

Var(Ψ̂1) = (1/n1²) Σ_i Σ_j Cov(Xi, Xj) ≤ (1/n1²) n1 [1·2Kδ + (δ/2)(n1 − 2Kδ)] < (2 − δ) Kδ/Nδ + δ/2 < δ.

That is, lim_{n1→∞} Var(Ψ̂1) = 0. On the other hand, for any finite Nδ, n1 > Nδ with probability 1 when T approaches infinity, which implies lim_{T→∞} Var(Ψ̂1) = 0. Therefore, Ψ̂1 converges to Ψ1. Following the same approach, it can be shown that Ψ̂0 converges to Ψ0.
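As an empirical complement to this proof, and reusing the helpers from the earlier sketches, the estimation error of the histogram estimators can be observed to shrink as the detection window length grows:

```python
import numpy as np

rng = np.random.default_rng(1)
A, Pd, Pfa = [[0.8, 0.2], [0.4, 0.6]], 0.9, 0.1
psi1, psi0 = cfc_statistics(A, Pd, Pfa)              # closed-form values from (3)-(4)
for T in (100, 1000, 10000, 100000):
    r = sensor_reports(simulate_spectrum(T, A, rng), Pd, Pfa, rng=rng)
    p1, p0 = cfc_estimates(r)
    print(T, abs(p1 - psi1), abs(p0 - psi0))         # the errors should shrink as T grows
```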
REFERENCES
[1] I. F. Akyildiz, B. F. Lo, and R. Balakrishnan, “Cooperative spectrum
sensing in cognitive radio networks: A survey,” Physical Communication
(Elsevier) Journal, vol. 4, no. 1, pp. 40–62, Mar. 2011.
[2] G. Baldini, T. Sturman, A. Biswas, R. Leschhorn, G. Gódor, and M.
Street, “Security aspects in software defined radio and cognitive radio
networks: A survey and a way ahead,” IEEE Commun. Surveys Tuts.,
no. 99, pp. 1–25, Apr. 2011.
[3] A. S. Rawat, P. Anand, H. Chen, and P. K. Varshney, “Collaborative
spectrum sensing in the presence of Byzantine attacks in cognitive radio
networks,” IEEE Trans. Signal Process., vol. 59, no. 2, pp. 774–786, Feb.
2011.
[4] R. Chen, J. M. Park, Y. T. Hou, and J. H. Reed, “Toward secure distributed
spectrum sensing in cognitive radio networks,” IEEE Commun. Mag.,
vol. 46, no. 4, pp. 50–55, Apr. 2008.
[5] R. Chen, J. M. Park, and K. Bian, “Robust distributed spectrum sensing
in cognitive radio networks," Proc. INFOCOM, Phoenix, AZ, May 2008.
[6] P. Kaligineedi, M. Khabbazian, and V. K. Bhargava, “Malicious user
detection in a cognitive radio cooperative sensing system,” IEEE Trans.
Wireless Commun., vol. 9, no. 8, pp. 2488–2497, Jun. 2010.
[7] W. Wang, H. Li, Y. Sun, and Z. Han, "Securing collaborative spectrum sensing against untrustworthy secondary users in cognitive radio networks," EURASIP Journal on Advances in Signal Processing, vol. 2010, Oct. 2010.
[8] K. Zeng, P. Pawełczak, and D. Cabric, "Reputation-based cooperative
spectrum sensing with trusted nodes assistance,” IEEE Commun. Lett.,
vol. 14, no. 3, pp. 226–228, Mar. 2010.
[9] S. Marano, V. Matta, L. Tong, “Distributed detection in the presence of
Byzantine attacks,” IEEE Trans. Signal Process., vol. 57, no. 1, pp. 16–29,
Jan. 2009.
[10] H. Li, and Z. Han, “Catch me if you can: An abnormality detection
approach for collaborative spectrum sensing in cognitive radio networks,”
IEEE Trans. Wireless Commun., vol. 9, no. 11, pp. 3554–3565, Nov. 2010.
[11] F. Adelantado, and C. Verikoukis, “A non-parametric statistical approach
for malicious users detection in cognitive wireless ad-hoc networks,”
Proc. ICC, Kyoto, Japan, Jul. 2011.
[12] A. Vempaty, K. Agrawal, H. Chen, and P. Varshney, “Adaptive learning
of Byzantines’ behavior in cooperative spectrum sensing,” Proc. WCNC,
Quintana Roo, Mexico, May 2011.
[13] D. Zhao, X. Ma, and X. Zhou, “Prior probability-aided secure cooperative spectrum sensing,” Proc. WiCOM, Wuhan, China, Oct. 2011.
[14] S. Li, H. Zhu, B. Yang, C. Chen, and X. Guan, "Believe yourself: A user-centric misbehavior detection scheme for secure collaborative spectrum sensing," Proc. ICC, Kyoto, Japan, Jul. 2011.