Robust and secure spectrum sensing in cognitive radio networks

Recommended Citation
Chen, Changlong, "Robust and secure spectrum sensing in cognitive radio networks" (2013). Theses and Dissertations. 44.
http://utdr.utoledo.edu/theses-dissertations/44
A Dissertation
entitled
Robust and Secure Spectrum Sensing in Cognitive Radio Networks
by
Changlong Chen
Submitted to the Graduate Faculty as partial fulfillment of the requirements for the
Doctor of Philosophy Degree in Engineering
Dr. Min Song, Committee Chair
Dr. Mansoor Alam, Committee Member
Dr. Vijay Devabhaktuni, Committee Member
Dr. Mohammed Niamat, Committee Member
Dr. Hong Wang, Committee Member
Dr. Patricia R. Komuniecki, Dean
College of Graduate Studies
The University of Toledo
December 2013
Copyright 2013, Changlong Chen
This document is copyrighted material. Under copyright law, no parts of this
document may be reproduced without the expressed permission of the author.
An Abstract of
Robust and Secure Spectrum Sensing in Cognitive Radio Networks
by
Changlong Chen
Submitted to the Graduate Faculty as partial fulfillment of the requirements for the
Doctor of Philosophy Degree in Engineering
The University of Toledo
December 2013
With wireless devices and applications booming, the problem of inefficient utilization of the precious radio spectrum has arisen. Cognitive radio is a key technology to improve spectrum utilization. A major challenge in cognitive radio networks is spectrum sensing, which detects whether a spectrum band is being used by a primary user (PU). Spectrum sensing plays a critical role in cognitive radio networks. However, spectrum sensing is vulnerable to security attacks from malicious users, and detecting malicious users is a crucial problem for cognitive radio networks. First, channel shadowing and fading result in spatial variability and uncertainty of the PU signal, and hence the sensing reports of geographically separated secondary users usually differ. This makes it easy for malicious users to hide dishonest sensing reports within the natural variation of the sensing reports. Second, due to the open and easily reconfigurable nature of cognitive radio, cognitive radios are more prone to compromise and, once compromised, to more diverse misbehavior. This makes malicious user detection more difficult than finding faulty or misconfigured users, whose effects on the cognitive radio network are more evident and easier to predict.
We propose a decentralized scheme to detect malicious users in cooperative spectrum sensing. The scheme utilizes the spatial correlation of received signal strengths among secondary users in close proximity. We also propose an alternative mean to make our scheme more robust in malicious user detection. Utilizing the alternative mean filters a portion of outliers (extreme sensing results), bringing the mean closer to the true value of the sensing results from benign secondary users and hence increasing detection accuracy. We have also proposed a neighborhood majority voting approach for the secondary users to decide whether a specific user is malicious.
Cooperative spectrum sensing is vulnerable to the spectrum sensing data falsification attack. Specifically, a malicious user can send a falsified sensing report to mislead other (benign) secondary users into making an incorrect decision on the PU activity. Therefore, detecting the spectrum sensing data falsification attack, or identifying the malicious sensing reports, is extremely important for robust cooperative spectrum sensing. This dissertation proposes a distributed density based detection scheme to counter the spectrum sensing data falsification attack. The density based detection scheme can effectively exclude the malicious sensing reports of spectrum sensing data falsification attackers, so that a benign secondary user can effectively detect the PU activity in distributed cooperative spectrum sensing. Moreover, the density based detection scheme can also exclude abnormal sensing reports from ill-functioning secondary users.
Furthermore, we propose another, more advanced, distributed conjugate prior based detection scheme to defend against the spectrum sensing data falsification attack. Conjugate prior based detection can effectively exclude abnormal sensing reports from both spectrum sensing data falsification attackers and ill-functioning secondary users. With this scheme, a benign secondary user can effectively detect the PU activity in distributed cooperative spectrum sensing.
On the other hand, the denial of service attack is one of the most serious threats to cognitive radio networks. By launching a denial of service attack over communication channels, an attacker can severely degrade network performance. The channel jamming attack is a denial of service attack that is simple to launch and difficult to counter. The jamming attack is a security threat in which the attacker interferes with a set of communication channels by injecting a continuous jamming signal or non-continuous short jamming pulses. As a result, the communication channels either cannot be accessed or the signal to noise ratio in these channels is heavily degraded. We model the jamming and anti-jamming process as a Markov decision process. With this approach, secondary users are able to avoid jamming attacks launched by external attackers and therefore maximize their payoff function. We first use a policy iteration method to solve the problem. However, this approach is computationally intensive. To decrease the computational complexity, the Q-function is used as an alternative method. Furthermore, we propose an algorithm to solve the Q-function.
In this dissertation, we propose a malicious user detection scheme, a density based SSDF detection scheme, a conjugate prior based SSDF detection scheme, and an anti-jamming algorithm to achieve robust and secure cooperative spectrum sensing in cognitive radio networks. Performance analysis and simulation results show that our proposed schemes achieve very good performance in detecting malicious users, excluding abnormal sensing reports, and defending against jamming attacks, thus improving spectrum sensing performance in cognitive radio networks.
This work is dedicated to my parents, Caixiang Ye and Wensheng Chen,
and my wife, Qiuhong Jia.
Acknowledgments
I would never have been able to finish my dissertation without the guidance, support, and help of numerous people.
First, I would like to express my deepest gratitude to my advisor, Dr. Min Song, for his excellent guidance, support, understanding, and patience during my Ph.D. study. With his tremendous help, I learned how to conduct research deeply and productively. He also sets a good career example for me through his hard work, persistence, and strong sense of responsibility.
I would like to thank Dr. Chunsheng Xin at Old Dominion University. Dr. Xin served as my co-advisor during my Ph.D. study at Old Dominion University. He provided me with countless technical discussions on my research, and I was often inspired by his in-depth and helpful comments. I have also learned much from his breadth of knowledge and his modest manner as an outstanding researcher.
I would like to thank Dr. Alam, Dr. Devabhaktuni, Dr. Niamat, and Dr. Wang for serving as my committee members and constantly guiding my research over the past years.
I would like to thank all my friends in the US and China. They are always willing to give me a hand when I need help, and to share wonderful moments with me as well.
I would like to thank my parents, my parents-in-law, and my whole family in China. They have always supported and encouraged me from behind the scenes over the years.
Last but not least, I would like to thank my wife, Qiuhong, for her unconditional love and support. She has stood by me through the good times and the bad.
Contents
Abstract
Acknowledgments
Contents
List of Figures
1 Introduction
1.1 Problem Statement
1.2 Contributions
1.3 Outline
2 Background and Related Work
2.1 Cognitive Radio Networks
2.2 Cooperative Spectrum Sensing in Cognitive Radio Networks
2.3 Security in Cognitive Radio Networks
3 A Robust Malicious User Detection Scheme in Cooperative Spectrum Sensing
3.1 System Model
3.2 Robust Malicious User Detection Scheme
3.2.1 Spatial Correlation Test
3.2.2 Robust Alternative Mean
3.2.3 Neighborhood Majority Vote
3.3 Simulation Results
3.4 Summary
4 Density based and Conjugate Prior based Schemes to Countermeasure Spectrum Sensing Data Falsification (SSDF) Attacks in Cognitive Radio Networks
4.1 System Model
4.2 Density based SSDF Detection (DBSD) Scheme and Simulation Results
4.2.1 Density based SSDF Detection (DBSD) Scheme
4.2.2 Simulation Results
4.3 Conjugate Prior based Detection (CoPD) Scheme and Simulation Results
4.3.1 Conjugate Prior based Detection (CoPD) Scheme
4.3.2 Simulation Results
4.4 Summary
5 A Game-Theoretical Anti-Jamming Scheme for Cognitive Radio Networks
5.1 System Model
5.1.1 Channel Model
5.1.2 Secondary User Model
5.1.3 Attacker Model
5.2 A Game-Theoretical Anti-jamming Scheme
5.2.1 Game Formulation
5.2.2 Policy Iteration Scheme
5.2.3 Complexity Analysis
5.2.4 Q-Function Scheme
5.3 Simulation Results
5.4 Summary
6 Conclusion and Future Research
6.1 Conclusion
6.2 Future Research
References
List of Figures
3-1 False positive rate vs. ϵ with three schemes.
3-2 False negative rate vs. ϵ with three schemes.
3-3 False positive rate vs. ϵ for Algorithm 1 with different percentages of malicious users.
3-4 False negative rate vs. ϵ for Algorithm 1 with different percentages of malicious users.
3-5 False positive rate vs. ϵ for Algorithm 1 with different network sizes.
3-6 False negative rate vs. ϵ for Algorithm 1 with different network sizes.
4-1 PU detection success probability versus β, with 15% malicious users for DBSD.
4-2 PU detection success probability versus β, with 20% malicious users for DBSD.
4-3 PU detection success probability versus β, with 25% malicious users for DBSD.
4-4 PU detection success probability versus β, with N = 40 for DBSD.
4-5 PU detection success probability versus β, with N = 60 for DBSD.
4-6 PU detection success probability versus β, with N = 80 for DBSD.
4-7 PU detection success probability versus the percentage of malicious users, with N = 40 for DBSD.
4-8 PU detection success probability versus the percentage of malicious users, with N = 60 for DBSD.
4-9 PU detection success probability versus the percentage of malicious users, with N = 80 for DBSD.
4-10 PU detection success probability versus the percentage of malicious users, with β = 0.1 for DBSD.
4-11 PU detection success probability versus the number of SUs (N), with β = 0.1 for DBSD.
4-12 PU detection success probability versus the abnormality factor θ, with β = 0.025 for DBSD.
4-13 PU detection success probability versus the abnormality factor θ, with β = 0.05 for DBSD.
4-14 PU detection success probability versus the abnormality factor θ, with β = 0.1 for DBSD.
4-15 PU detection success probability versus the abnormality factor θ, with N = 40 for DBSD.
4-16 PU detection success probability versus the abnormality factor θ, with N = 60 for DBSD.
4-17 PU detection success probability versus the abnormality factor θ, with N = 80 for DBSD.
4-18 PU detection success probability versus β, with 15% malicious users for CoPD and DBSD.
4-19 PU detection success probability versus β, with 20% malicious users for CoPD and DBSD.
4-20 PU detection success probability versus β, with 25% malicious users for CoPD and DBSD.
4-21 PU detection success probability versus β, with N = 40 for CoPD and DBSD.
4-22 PU detection success probability versus β, with N = 60 for CoPD and DBSD.
4-23 PU detection success probability versus β, with N = 80 for CoPD and DBSD.
4-24 PU detection success probability versus the percentage of malicious users, with N = 40 for CoPD and DBSD.
4-25 PU detection success probability versus the percentage of malicious users, with N = 60 for CoPD and DBSD.
4-26 PU detection success probability versus the percentage of malicious users, with N = 80 for CoPD and DBSD.
4-27 PU detection success probability versus the percentage of malicious users, with β = 0.1 for CoPD and DBSD.
4-28 PU detection success probability versus the number of SUs (N), with β = 0.1 for CoPD and DBSD.
4-29 PU detection success probability versus the abnormality factor θ, with β = 0.025 for CoPD and DBSD.
4-30 PU detection success probability versus the abnormality factor θ, with β = 0.05 for CoPD and DBSD.
4-31 PU detection success probability versus the abnormality factor θ, with β = 0.1 for CoPD and DBSD.
4-32 PU detection success probability versus the abnormality factor θ, with N = 60 for CoPD and DBSD.
4-33 PU detection success probability versus the abnormality factor θ, with N = 60 for CoPD and DBSD.
4-34 PU detection success probability versus the abnormality factor θ, with N = 60 for CoPD and DBSD.
5-1 Jamming probability over time under static attack.
5-2 Jamming probability over time under random attack.
5-3 Jamming probability over time under intelligent attack.
5-4 Jamming probability over time under static attack.
5-5 Jamming probability over time under random attack.
5-6 Jamming probability over time under intelligent attack.
Chapter 1
Introduction
Cognitive radio networks (CRNs) have been considered a new paradigm for future network architectures [1]. Over the past decade, CRNs have attracted numerous researchers across a large variety of topics. Due to the unique features of cognitive radio technology, CRNs raise new challenges. Furthermore, with the emergence of cognitive radio technology, many fundamental problems that were well studied for traditional wireless networks are being revisited. In general, the topics can be categorized by the problems addressed [2]. With the participation of secondary users (SUs), which are allowed to use available spectrum bands when primary users (PUs) are idle, a novel problem appears: how to detect "spectrum holes" accurately and utilize them efficiently. This is referred to as dynamic spectrum access. Many existing works on spectrum sensing aim to improve sensing accuracy [3, 4, 5, 6]. Besides this novel problem, fundamental problems such as scheduling schemes [7], MAC protocol design [8, 9, 10, 11], capacity analysis [12, 13], and delay performance [14, 15] have been extensively studied in traditional wireless networks. However, the emergence of cognitive radio technology makes these problems more complex and brings them back into the focus of research. Correspondingly, the related research in CRNs is still under development.
The work presented in this dissertation attempts to fill some of these gaps. It consists of a malicious user detection scheme, two novel data falsification detection schemes, and an anti-jamming scheme.
1.1 Problem Statement
The work proposed in this dissertation, as the title suggests, focuses on secure and robust cooperative spectrum sensing in CRNs. Because PUs and SUs coexist in CRNs, SUs can only access the spectrum when PUs are absent; that is, SUs need to sense the spectrum environment to find an available spectrum band. Moreover, during packet transmission, SUs need to keep sensing their channel to detect whether a PU signal appears. Therefore, spectrum sensing plays an important role in accurately deciding whether the spectrum is occupied by PUs [16]. Local spectrum sensing by a single SU is often inaccurate, as the channel often experiences fading and shadowing effects. Therefore, cooperative spectrum sensing, which exploits the cooperation among multiple SUs, has been proposed to achieve reliable spectrum sensing [17].
However, cooperative spectrum sensing is vulnerable to security attacks from malicious users [18]. For example, to achieve unfair usage of a spectrum band, a greedy user can generate a false PU signal to launch the primary user emulation (PUE) attack [19]. Also, malicious users can manipulate sensing reports in order to disrupt other SUs' decisions on the PU activity. This type of attack is commonly known as the spectrum sensing data falsification (SSDF) attack [20]. On the other hand, the denial of service (DoS) attack is one of the most serious threats to CRNs [21]. By launching a DoS attack over communication channels, an attacker can severely degrade network performance. The channel jamming attack is a DoS attack that is simple to launch and difficult to counter. The jamming attack is a security threat in which the attacker interferes with a set of communication channels by injecting a continuous jamming signal or non-continuous short jamming pulses. As a result, the communication channels either cannot be accessed or the signal to noise ratio (SNR) in these channels is heavily degraded.
Performance is a significant aspect of evaluating wireless networks. It is well known that the false positive rate, false negative rate, and success rate are important performance metrics in traditional networks. However, their analysis is relatively new in CRNs and still in its infancy. This dissertation primarily focuses on these three metrics in CRNs.
The problems addressed in this dissertation can thus be summarized as follows.
• A malicious user detection scheme is needed to improve spectrum sensing performance.
• It is important to improve the accuracy of detecting the PU's activity under the spectrum sensing data falsification attack; thus, proactive defense schemes are necessary.
• An anti-jamming scheme is essential for SUs to counter jamming attacks.
1.2 Contributions
The main contributions of this dissertation are listed as follows. The first contribution is a novel scheme to detect malicious SUs in cooperative spectrum sensing in a distributed manner. The scheme exploits the spatial correlation of received signal strengths among secondary users in close proximity. Moran's I from [22] is used to characterize the correlation between different SUs. Moran's I is sensitive to global correlation; however, we are more interested in the correlation between each pair of SUs. Therefore, we modify Moran's I so that it can accurately reflect this pairwise correlation. We also propose an alternative mean to make our scheme more robust in malicious user detection. Utilizing the alternative mean filters a portion of outliers (extreme sensing results), bringing the mean closer to the true value of the sensing results from benign SUs and hence increasing detection accuracy. Each SU conducts the spatial correlation test and calculates the alternative mean. The final decision is reached through a novel neighborhood majority voting scheme. Our detection scheme requires no a priori knowledge about benign or malicious users. This property is desirable since the strategy of malicious users may be unknown and their behavior may change dynamically.
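The alternative mean can be illustrated with a simple trimmed-mean sketch. The exact filtering rule in the dissertation is not reproduced here; dropping a fixed fraction of the most extreme reports at both ends is one common robust variant, and the fraction `trim_fraction` and the sample values below are illustrative assumptions:

```python
def alternative_mean(reports, trim_fraction=0.2):
    """Robust 'alternative mean' sketch: sort the sensing reports and
    drop the most extreme trim_fraction of them at each end before
    averaging, so outliers injected by malicious users have little
    effect on the result."""
    ordered = sorted(reports)
    k = int(len(ordered) * trim_fraction)
    kept = ordered[k:len(ordered) - k] if k > 0 else ordered
    return sum(kept) / len(kept)

# Eight honest reports near 10 plus two falsified high reports.
reports = [10.1, 9.8, 10.3, 10.0, 9.9, 10.2, 10.1, 9.7, 30.0, 28.0]
plain = sum(reports) / len(reports)   # pulled above 13 by the outliers
robust = alternative_mean(reports)    # stays close to 10
print(plain, robust)
```

Here the plain mean is dragged above 13 by the two falsified reports, while the trimmed variant stays near the benign value of 10, which is the property the spatial correlation test relies on.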
The second contribution is a distributed density based SSDF detection (DBSD) scheme to counter the SSDF attack. To achieve robust spectrum sensing, we focus on excluding abnormal sensing reports rather than detecting malicious users. The scheme treats the sensing reports as samples of a random variable, and then estimates the probability density of the random variable using a technique known as the kernel density estimator. Each sensing report is then tested for normality; once a sensing report is deemed abnormal, it is excluded from decision making on the PU activity. DBSD excludes all abnormal sensing reports, including those from both malicious users and ill-functioning SUs, which improves the success probability of detecting the PU activity. We have developed an approach to effectively test the normality of sensing reports.
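A minimal sketch of the density based idea, assuming a Gaussian kernel with a hand-picked bandwidth and threshold (the dissertation's actual kernel, bandwidth selection, and normality test are developed in Chapter 4):

```python
import math

def kde(x, samples, bandwidth=1.0):
    """Gaussian kernel density estimate at point x from the pooled
    sensing reports."""
    norm = len(samples) * bandwidth * math.sqrt(2.0 * math.pi)
    return sum(math.exp(-0.5 * ((x - s) / bandwidth) ** 2)
               for s in samples) / norm

def filter_reports(reports, beta=0.1, bandwidth=1.0):
    """Keep only the reports whose estimated density exceeds the
    threshold beta; low-density reports are treated as abnormal."""
    return [r for r in reports if kde(r, reports, bandwidth) > beta]

# Five clustered (benign) reports and one falsified report at 25.0.
reports = [10.0, 10.2, 9.9, 10.1, 9.8, 25.0]
print(filter_reports(reports))  # the falsified report is excluded
```

The falsified report sits in a low-density region of the pooled samples, so its estimated density falls below the threshold and it is dropped before the PU decision is made.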
The third contribution is a distributed conjugate prior based detection (CoPD) scheme to counter the SSDF attack. CoPD can effectively exclude abnormal sensing reports from both SSDF attackers and ill-functioning secondary users. After collecting sensing reports from nearby secondary users, CoPD reconstructs the probability density of the random variable using a technique known as the conjugate prior. We then test the abnormality of each sensing report using a confidence interval derived from the probability density function. If the test result is abnormal, the sensing report is deemed to come from an attacker or an ill-functioning SU and is discarded. With this scheme, a benign secondary user can effectively detect the PU activity in distributed cooperative spectrum sensing.
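As a concrete, simplified illustration, assume the benign reports follow a Normal distribution with known variance, for which the Normal distribution is the conjugate prior of the mean. The prior parameters and the interval width below are illustrative assumptions, not the dissertation's exact choices:

```python
import math

def posterior_normal(reports, sigma2, mu0=0.0, tau2=100.0):
    """Conjugate update: a Normal likelihood with known variance sigma2
    and a Normal prior N(mu0, tau2) on the mean yield a Normal
    posterior; return its mean and variance."""
    n = len(reports)
    xbar = sum(reports) / n
    post_var = 1.0 / (1.0 / tau2 + n / sigma2)
    post_mean = post_var * (mu0 / tau2 + n * xbar / sigma2)
    return post_mean, post_var

def is_abnormal(report, post_mean, sigma2, z=2.58):
    """Flag a report outside a ~99% interval around the posterior mean
    (interval width driven by the report noise sigma2)."""
    return abs(report - post_mean) > z * math.sqrt(sigma2)

reports = [10.0, 10.2, 9.9, 10.1, 9.8]   # benign reports, std ~0.2
mean, _ = posterior_normal(reports, sigma2=0.04)
print(is_abnormal(25.0, mean, 0.04))     # falsified report: True
print(is_abnormal(10.1, mean, 0.04))     # benign report: False
```

The posterior mean lands near 10, so the falsified report at 25.0 falls far outside the confidence interval and is discarded, while benign reports pass the test.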
The fourth contribution focuses on the jamming attack in cognitive radio networks, for which we propose a countermeasure scheme. Compared with previous work, the anti-jamming scheme we develop has low computational complexity while achieving very good throughput. To avoid jamming, SUs proactively hop among accessible channels. We formulate the jamming-and-hopping process as a Markov decision process and propose a policy iteration scheme, together with a complexity analysis of the policy iteration scheme. With this scheme, secondary users are able to avoid jamming attacks launched by external attackers and therefore maximize the payoff function. Although this scheme is effective, it may be computationally prohibitive. To reduce the computational complexity, we propose a game-theoretical anti-jamming (GTAS) scheme for SUs. Our proposed scheme achieves a high payoff. In addition, our scheme has a low probability of being jammed by an attacker.
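A minimal sketch of the Q-function idea under strong simplifying assumptions: the state is just the current channel, the action is the channel to hop to, the reward is 1 when the chosen channel is not jammed, and each channel is jammed independently with a fixed probability. The dissertation's actual game formulation, state space, and payoff function are developed in Chapter 5:

```python
import random

def q_learning_hop(num_channels, jammed_prob, steps=5000,
                   alpha=0.1, gamma=0.9, epsilon=0.1, seed=0):
    """Tabular Q-learning sketch for anti-jamming channel hopping.
    Q[s][a] estimates the discounted payoff of hopping from channel s
    to channel a; jammed_prob[c] is the chance channel c is jammed."""
    rng = random.Random(seed)
    Q = [[0.0] * num_channels for _ in range(num_channels)]
    state = 0
    for _ in range(steps):
        if rng.random() < epsilon:                    # explore
            action = rng.randrange(num_channels)
        else:                                         # exploit
            action = max(range(num_channels), key=lambda a: Q[state][a])
        reward = 0.0 if rng.random() < jammed_prob[action] else 1.0
        target = reward + gamma * max(Q[action])      # Bellman backup
        Q[state][action] += alpha * (target - Q[state][action])
        state = action
    return Q

# Channel 1 is rarely jammed, so the learned policy should prefer it.
Q = q_learning_hop(3, jammed_prob=[0.9, 0.1, 0.8])
print([max(range(3), key=lambda a: Q[s][a]) for s in range(3)])
```

Q-learning avoids the full policy iteration sweep over the state space: each step updates only one Q-entry, which is the computational saving the alternative Q-function method aims at.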
1.3 Outline
The remainder of this dissertation follows the traditional format. Chapter 2 presents the background of cognitive radio networks, cooperative spectrum sensing, and current work on security in cognitive radio networks. Chapter 3 presents our work on the detection of malicious secondary users in cooperative spectrum sensing. First, the spatial correlation among nearby secondary users is explored to design a spatial correlation test. Next, a robust alternative mean is computed and applied to the spatial correlation test. Finally, a neighborhood majority voting rule is used to reach the final decision. Chapter 4 presents our work on excluding abnormal sensing results in cooperative spectrum sensing. Two schemes, density based SSDF detection (DBSD) and conjugate prior based detection (CoPD), are proposed in this chapter. Both schemes treat the sensing reports as samples of a random variable and then estimate the probability density of the random variable. Each sensing report is then tested for normality; once a sensing report is deemed abnormal, it is excluded from decision making on the PU activity. Chapter 5 presents our work on countering jamming attacks in CRNs, where a game-theoretical anti-jamming (GTAS) scheme for secondary users is proposed. Conclusions and future research are summarized in Chapter 6.
Chapter 2
Background and Related Work
This chapter discusses the background of CRNs, cooperative spectrum sensing, and current research on security in cognitive radio networks. Due to space limits, only the works that are closely related to this dissertation are examined.
This chapter is organized as follows. The state-of-the-art technologies for cognitive radio networks are first briefly introduced in Section 2.1. Cooperative spectrum sensing in cognitive radio networks is covered in Section 2.2. Finally, security issues in cognitive radio networks are reviewed in Section 2.3.
2.1 Cognitive Radio Networks
Cognitive radio (CR) is a key technology to improve spectrum utilization [23]. It has attracted numerous researchers across a large variety of topics. Today's static spectrum access (SSA) policy grants a fixed spectrum band to each licensed user for exclusive access [24]. With the rapid proliferation of wireless services, SSA is exhausting the radio spectrum and leaves little spectrum for future demands, a problem known as spectrum scarcity. On the other hand, a large number of licensed spectrum bands are considerably under-utilized in both the time and spatial domains. According to a report from the Federal Communications Commission (FCC), the utilization of the assigned spectrum ranges from 15% to 85% [25].
These issues have motivated the development of the dynamic spectrum access (DSA) policy, which allows SUs to dynamically detect idle licensed bands and temporarily access them. In other words, SUs are allowed to utilize an idle licensed spectrum band provided that they withdraw from the band when PUs start using it.
One key technology for implementing DSA is cognitive radio (CR), which is capable of sensing the spectrum environment it operates in and accordingly changing its transmission and reception parameters for efficient communication, while avoiding interference to PUs [26]. Two major categories of cognitive radio networks are infrastructure based (centralized) and non-infrastructure based (decentralized, or ad hoc) CRNs [27]. Throughout this dissertation, we consider non-infrastructure based (decentralized) CRNs.
2.2 Cooperative Spectrum Sensing in Cognitive Radio Networks
A major challenge in cognitive radio networks is spectrum sensing, which is to detect whether a spectrum band is being used by primary users [28]. Spectrum sensing is quite a challenging problem, because a CR cannot directly measure the spectrum band between a PU transmitter and a PU receiver. In fact, a CR cannot even measure whether a PU receiver, e.g., a TV terminal, exists. Therefore, a CR usually has to make its decision based on its local measurement of a spectrum band, which is referred to as a channel hereafter. This type of detection is referred to as local spectrum sensing. Energy detection is a widely used method for spectrum sensing, in which the SU measures the energy from the PU transmitter to determine the presence of the PU signal [29].
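The energy detector can be sketched in a few lines. The threshold here is hand-picked for illustration; in practice it would be derived from a target false-alarm probability:

```python
import random

def energy_detect(samples, threshold):
    """Energy detection: declare the PU present when the average
    energy of the received samples exceeds the threshold."""
    statistic = sum(s * s for s in samples) / len(samples)
    return statistic > threshold

# Synthetic illustration: unit-variance noise vs. signal plus noise.
rng = random.Random(1)
noise = [rng.gauss(0.0, 1.0) for _ in range(1000)]
signal = [rng.gauss(0.0, 1.0) + 1.5 for _ in range(1000)]

print(energy_detect(noise, threshold=1.5))   # PU absent: False
print(energy_detect(signal, threshold=1.5))  # PU present: True
```

With noise-only samples the test statistic concentrates near the noise power (about 1.0 here), while a PU signal raises it well above the threshold.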
Spectrum sensing in CRNs can be generally classified into two categories: local sensing and cooperative sensing [30]. In local spectrum sensing, each SU independently makes a decision on channel availability based on the information it collects. It then attempts to access a selected channel if there are idle channels; otherwise, it keeps sensing. As discussed earlier, a widely used technique for local sensing is energy detection, which has low computational complexity and is easy to implement [31, 32, 33]. However, local spectrum sensing by a single SU is often inaccurate, as the channel often experiences fading and shadowing effects. Therefore, cooperative spectrum sensing has been proposed to overcome this problem [34, 35, 36, 37].
In cooperative sensing, each SU independently performs local spectrum sensing but makes a decision (idle or occupied) for a channel based on the information from all SUs. Based on how cooperating SUs share their sensing reports in the network, cooperative spectrum sensing can be conducted in two modes: centralized or distributed [38]. In centralized cooperative spectrum sensing, a fusion center collects sensing reports from all the SUs, makes a final decision on the PU activity, and disseminates the decision to all SUs. In contrast, distributed cooperative spectrum sensing does not rely on a fusion center for decision making. Each SU shares its own sensing report with other SUs, combines its report with the received ones, and decides whether the PU is active by using a local criterion. Since only local sensing reports are exchanged, distributed cooperative spectrum sensing is energy-efficient and scalable, and it is therefore more suitable for cognitive radio networks [39].
2.3 Security in Cognitive Radio Networks
Distributed cooperative spectrum sensing, however, is vulnerable to security attacks from malicious users, and detecting malicious users is a crucial problem for
cognitive radio networks.
Many centralized approaches have been proposed in the literature to detect malicious users
and achieve robust spectrum sensing. A hierarchical detection
scheme was proposed in [40]: secondary users are grouped into cells, and multiple low-level
cells are grouped into a high-level cell. Each user computes the average value
of its received reports; the average is compared to a threshold and flagged as a
low-outlier or high-outlier to determine whether the PU signal is active. A scheme
for secure cooperative spectrum sensing was proposed in [41]. This scheme assumes a
somewhat simplified attack strategy, i.e., attackers launch only “always yes” or “always no” attacks. The authors in [42] proposed a cooperative technique to detect the PU
while suppressing malicious users. Local decisions, rather than detected energy values, are
fused for the global decision. A weighted combination is used at the fusion center, and the weighting
coefficients are updated recursively. The complexity of the scheme is low because
the calculation of the mean and standard deviation is avoided. In [43], the authors used
shadow-fading correlation-based filters to minimize the effect of abnormal sensing reports in detecting digital TV PUs. The authors in [44] proposed three schemes to
detect malicious users based on outlier detection techniques. These schemes require
some knowledge of the malicious users, e.g., the maximum number of malicious users.
In [45], an onion-peeling approach was proposed to defend against multiple compromised SUs, using a maliciousness suspicious level for each user. In [46], the authors
proposed a double-sided abnormality detection scheme for collaborative spectrum sensing. D. Chen et al. [47] studied the performance of three voting schemes: average,
median, and majority voting. It was shown that as the number of users participating
in the voting becomes large, the error probability of majority voting approaches zero.
Some studies utilized the majority voting scheme to identify malicious nodes. The
authors in [48] proposed a simple majority vote scheme to identify malicious sensors.
The scheme uses 1/0 decisions: if more than half of the votes consider a sensor
malicious, that sensor is deemed an outlier. This scheme, however,
does not work in some scenarios, especially when the number of participating sensors
is small. In [18], T. Li et al. proposed a group voting scheme to detect compromised
sensor nodes. Each node is assigned a time-based weight, and the weights, combined
with the data transmission quality, are used for voting.
Recently, the design of distributed SSDF countermeasure schemes for cognitive
radio networks has received considerable attention. In [49], we proposed a decentralized scheme to detect malicious users that launch the SSDF attack in cooperative
spectrum sensing. The scheme utilizes the spatial correlation of received signal strengths
among SUs in close proximity and is based on a robust outlier-detection technique. A
neighborhood majority voting approach is used for SUs to decide whether a specific user
is malicious. To the best of our knowledge, [50] is the only paper that handles the collaborative SSDF attack in a distributed fashion. In [50], each pair of SUs shares a
symmetric key. The sensing report includes a time stamp, the user ID, and the received
signal strength. Before combining other sensing reports, an SU performs a validation to exclude the sensing reports sent by attackers. In that paper, the attackers
are considered outside attackers that are not authenticated by or associated with
the network. Therefore, even traditional pre-shared key mechanisms can easily identify
such outside attackers. The problem of the collaborative SSDF attack has also been
addressed in the literature. In [51], the authors proposed a centralized reputation-based
method to limit the error rate in identifying attackers. A rigorous mathematical analysis of detection performance is carried out using the Kullback-Leibler
divergence (KLD). However, a major weakness of the proposed scheme is that it misdetects a large number of benign users as attackers. In [20], the authors proposed
a centralized adaptive reputation-based clustering algorithm to defend against both
independent and collaborative SSDF attacks. However, the authors did not specify
how the clusters would be updated, how to set the new reputation for each updated
cluster, how to set the reputation threshold, etc.
Also, if a whole cluster is deemed malicious, the false positive rate (the portion of
benign SUs treated as malicious ones) is expected to be quite high.
The DoS attack is one of the most serious threats in cognitive radio networks. The
jamming attack is a DoS attack that is simple to launch and difficult to
counter. The state-of-the-art work on anti-jamming schemes for cognitive
radio networks and traditional wireless systems is summarized as follows. In [52],
Li and Han applied game theory and Markov decision process theory to study the
jamming and anti-jamming problem in cognitive radio networks. For the one-stage case,
jamming and anti-jamming are modeled as a zero-sum game, and the Nash equilibrium strategy is calculated. For the multi-stage case, the problem is modeled as
a stochastic game, analyzed within the framework of partially observable Markov decision processes. However, several assumptions of that paper are
strong. For example, the channel availability is assumed perfectly known to both
benign and malicious users, the attacker is assumed always rational, i.e., always
following the best strategy, and the attacker is assumed to know the benign user's
strategy. In [53], the authors also used game theory to formulate channel hopping. The problem is formulated as a zero-sum game. At each time slot, the
SUs observe the spectrum availability, the channel quality, and the attacker's strategy. Using the minimax-Q learning method, an SU can learn the optimal policy and
maximize its throughput. In [54], Wang et al. proposed an anti-jamming protocol
in which the sender and the receiver hop to the same set of channels for communication
with high probability. The network is time-slotted. In each time slot, the sender
chooses a set of channels with high weights to sense, transmits on the detected idle
channels, and waits for an ACK. The receiver also chooses a set of channels on which to receive
packets based on the channel weights. The weight of a channel is adjusted based
on the channel reward, i.e., whether the sender successfully receives an ACK. In [55],
the authors study the robustness of IEEE 802.11 rate adaptation algorithms (RAA)
against jamming attacks. The vulnerabilities of RAA and the weaknesses inherent to the
IEEE 802.11 MAC and link layer are investigated, e.g., the packet rate
is overt and the rate selection mechanism is predictable. Algorithms that determine
optimal jamming strategies against RAA for a given jamming budget are proposed.
However, this work is not directly applicable to cognitive radio networks, since information
about secondary users, such as the packet rate, is unknown to the jammer. In [56], the authors
investigate the problem of control channel jamming launched by malicious users in
wireless communication systems. The authors solve the problem based on coding
theory. The goal of this research is to provide the ability to deliver control messages
successfully to all users at least once during a bounded period of time. The authors
use coding to ensure that for each user there is one key that is unique at a
specific time. Therefore, the delivery of control information is guaranteed.
Chapter 3

A Robust Malicious User Detection Scheme in Cooperative Spectrum Sensing
In this chapter, we propose a decentralized scheme to detect malicious users in
cooperative spectrum sensing. The scheme utilizes the spatial correlation of received
signal strengths among secondary users in close proximity and is based on a robust
outlier-detection technique. We also propose a neighborhood majority voting
approach for the secondary users to decide whether a specific user is malicious.
This chapter is organized as follows. Section 3.1 describes the system model.
Section 3.2 presents our detection scheme. Section 3.3 provides the performance
evaluations. Finally, Section 3.4 summarizes the chapter.
3.1 System Model
We consider a time-slotted network of one PU and N SUs performing cooperative spectrum
sensing. The location of the PU is known to all SUs, and each SU can obtain its own
accurate location information. Only one PU is considered in this study; however, our
scheme can be extended to the scenario with multiple PUs.
All SUs use energy detection for local spectrum sensing, and the sensing results
at different SUs are independent. Although a hard decision
can decrease the communication overhead, [57] showed that soft-decision combining of
sensing results achieves better sensing performance than hard-decision combining. Therefore,
each SU sends out the raw results from its local spectrum sensing. The received signal
strength at SU i in a given time slot can be represented as

P_r(i) = P_t · d(i)^{−2} · G(i) · R(i)    (3.1)

where d(i) is the distance from the PU to SU i, G(i) is the log-normal shadowing from
the PU to SU i, and R(i) is the Rayleigh fading from the PU to SU i. It is
reasonable to assume that the channel bandwidth is much larger than the coherence
bandwidth; therefore, the effect of Rayleigh fading is negligible.
The network detects malicious users in a decentralized fashion. In every time
slot, each SU monitors the sensing results from its one-hop neighbor SUs. After
the malicious users are identified, their sensing results are
excluded from spectrum sensing. We assume that there exists a secure end-to-end
connection between SUs, i.e., the sensing reports cannot be modified by malicious
users.
We assume there are M malicious users in the network, with M ≪ N. The objective
of the malicious users is to mislead benign SUs into making an incorrect decision on the
PU's channel usage activity. To achieve this goal, the malicious users launch the spectrum
sensing data falsification (SSDF) attack, i.e., the attackers craft their sensing reports
to introduce significant errors. Therefore, the sensing results from
malicious users do not exhibit the spatial correlation behavior of other users. We
assume that there is no cooperation among malicious users and that the benign users have
no prior information about the malicious users.
3.2 Robust Malicious User Detection Scheme
Our malicious user detection scheme consists of three phases. First, the spatial
correlation among nearby SUs is exploited to design a spatial correlation test. Second, a
robust alternative mean is computed and applied to the spatial correlation
test. Finally, a neighborhood majority voting rule is used to reach the final decision.
3.2.1 Spatial Correlation Test
In a shadow-fading environment, sensing results from SUs in close proximity are
highly correlated. We first investigate the spatial correlation component in the received signal strength and then propose a spatial correlation test.
The received signal strength P_r(i) at SU i from the PU's transmission is expressed
by Eq. (3.1). The multipath loss R(i) is assumed negligible. The d(i)
is SU i's distance to the PU, and G(i) is the shadow fading component. Since
the locations of the PU and the SUs are known, the distance d(i) from the PU to
SU i can be calculated. Moreover, the PU transmission power P_t is a constant
that does not affect the correlation test. Therefore, the shadowing component can
be calculated by

G(i) = P_r(i) · d(i)² / P_t    (3.2)
To study the dependencies of the received signal strengths of SUs at different positions, Moran's I is used to measure the spatial correlation. Moran's I
evaluates both the SU locations and the values of the SU sensing reports simultaneously. Given a
set of SUs and their associated sensing reports, it evaluates whether these SUs' sensing
reports are clustered, dispersed, or random. Moran's I is expressed below:

I = (n / w) · [ Σ_i Σ_j w_ij (G(i) − µ)(G(j) − µ) ] / [ Σ_i (G(i) − µ)² ]    (3.3)
where n is the number of SUs in the neighborhood of SU i, µ is the mean of the shadow
fading components of all SUs in the neighborhood of SU i, w_ij is a matrix of weights,
and w is the sum of all weights. The weight is made inversely proportional to the
distance; specifically, the weight equals the reciprocal of the distance between two
SUs, i.e.,

w_ij = 1 / d(i, j)    (3.4)
The Moran’s I examines whether all variables exhibit correlation. However, this
study focuses on whether two variables are correlated, i.e., only one pair is tested
each time. In this special case, n = 2 and w = wii + wjj + wij . Since wii = wjj = 0,
the Moran’s I is changed to
I(i, j) =
2(G(i) − µ)(G(j) − µ)
(G(i) − µ)2 + (G(j) − µ)2
(3.5)
The value of Moran's I lies in the range between −1 and 1, where −1 indicates
strong negative correlation, 1 indicates strong positive correlation,
and 0 indicates a lack of spatial correlation. In cognitive radio networks, two received
signal strengths would not have negative correlation, so Moran's I should always be
non-negative. Therefore, the absolute value of I is used, i.e.,

Ī(i, j) = |I(i, j)|    (3.6)
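The pairwise statistic of Eqs. (3.5)–(3.6) is straightforward to compute. The sketch below assumes the two shadowing components and the neighborhood mean are already available:

```python
def pairwise_morans_i(g_i, g_j, mu):
    """Pairwise Moran's I of Eq. (3.5) for shadowing components g_i and
    g_j, given the neighborhood mean mu; its absolute value is the
    statistic I_bar of Eq. (3.6)."""
    num = 2.0 * (g_i - mu) * (g_j - mu)
    den = (g_i - mu) ** 2 + (g_j - mu) ** 2
    if den == 0.0:
        return 0.0  # both reports equal the mean: no information
    return abs(num / den)

# Deviations of equal size on either side of the mean give |I| = 1.
print(pairwise_morans_i(3.0, 5.0, mu=4.0))  # 1.0
```
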
Each SU is checked to determine whether it exhibits a proper correlation
behavior with the other SUs in its close proximity. Based on the above derivations,
the test for deciding whether two SUs are spatially correlated is

v(i, j) = −1 if Ī(i, j) < ϵ;  v(i, j) = 1 if Ī(i, j) ≥ ϵ    (3.7)

where v(i, j) = −1 means that SU i and SU j are not correlated, v(i, j) = 1
means that SU i and SU j are correlated, and ϵ is the threshold. Below we derive
ϵ.
The distribution of Moran’s I is unknown. However, we convert the distribution
of Moran’s I to standard deviation. The expected value and variance of Moran’s I are
given in [58]. However, we need to modify it so that it is consistent with the modified
Moran’s I we used in our research. Therefore the expected value of modified Moran’s
I is
¯ =
E(I)
−1
n̂ − 1
(3.8)
where n̂ = n(1 − 2α) is the number of SUs after trimming, which we will discuss in
next subsection.
The variance of Moran’s I is
¯ =
V ar(I)
n̂A4 − A3 A5
∑ ∑
(n̂ − 1)(n̂ − 2)(n̂ − 3)( i j wij )2
in which
A1 = 2
∑∑
i
A2 =
∑
(2
i
j
∑
j
18
(wij )2
wij )2
(3.9)
∑
n̂ i (G(i) − µ̂(α, α))4
A3 = ∑
( i (G(i) − µ̂(α, α))2 )2
A4 = (n̂2 − 3n̂ + 3)A1 − n̂A2 + 3(
∑∑
i
A5 = A1 − 2n̂A1 + 6(
∑∑
i
wij )2
j
wij )2
j
Based on the above derivations, the standardized Moran's I satisfies

(Ī − E(Ī)) / √Var(Ī) ∼ N(0, 1)    (3.10)

Therefore, we can set ϵ at different chosen significance levels based on Eq. (3.10).
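As an illustration, the threshold can be placed at a normal quantile mapped back to the Ī scale. The moment values in the usage line are hypothetical placeholders standing in for the results of Eqs. (3.8)–(3.9):

```python
from statistics import NormalDist

def correlation_threshold(expected_i, var_i, significance=0.05):
    """Choose the threshold epsilon for the test in Eq. (3.7). By
    Eq. (3.10), (I_bar - E(I_bar)) / sqrt(Var(I_bar)) is approximately
    standard normal, so epsilon is placed at the `significance` quantile
    of that normal, mapped back to the I_bar scale."""
    z = NormalDist().inv_cdf(significance)  # about -1.645 for 5%
    return expected_i + z * var_i ** 0.5

# Hypothetical moment values standing in for Eqs. (3.8)-(3.9).
eps = correlation_threshold(expected_i=0.5, var_i=0.01, significance=0.05)
```
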
The mean of the shadow fading components is a key factor in Ī(i, j). However, this
value is sensitive to the presence of misconfigured SUs and malicious users, which
can have a detrimental impact on the spatial correlation test. Therefore, a robust
alternative mean can improve the accuracy of the test. Next, we discuss how to
generate a robust mean for the hypothesis test.
3.2.2 Robust Alternative Mean
The mean µ can be computed simply by averaging the sensing reports. However,
this mean is not robust and can easily be manipulated by malicious users. Therefore,
a robust alternative to the sample mean must be developed to filter out abnormal sensing
reports. The alternative mean is less influenced by malicious users and closer to the
true mean of the sensing reports from benign users.
In this chapter, we use the α-trimmed mean [59] as the alternative mean. The
α-trimmed mean, µ̂(α, α), is a robust estimate of the sample mean. The shadowing
components G₁, . . . , G_n are first sorted; then an α proportion of the smallest
shadowing components and an α proportion of the largest shadowing components are
omitted in the calculation of the mean:

µ̂(α, α) = (G_{r+1} + G_{r+2} + · · · + G_{n−r−1} + G_{n−r}) / (n(1 − 2α))    (3.11)

where G_i is the shadowing component for SU i, n is the total number of SUs in the
set, and r = ⌊αn⌋ is the number of SUs trimmed from each end. The µ̂(α, α) is then
substituted for µ in Eq. (3.5) to compute Moran's I, and v(i, j) is calculated by Eqs.
(3.5)–(3.7).
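A minimal sketch of the α-trimmed mean of Eq. (3.11); here the denominator is the number of retained reports, which equals n(1 − 2α) whenever αn is an integer:

```python
import math

def alpha_trimmed_mean(values, alpha):
    """Alpha-trimmed mean of Eq. (3.11): sort the shadowing components,
    drop the r = floor(alpha * n) smallest and the r largest, and average
    the rest. The number of retained reports equals n(1 - 2*alpha) when
    alpha * n is an integer, matching the denominator of Eq. (3.11)."""
    g = sorted(values)
    n = len(g)
    r = math.floor(alpha * n)
    kept = g[r:n - r]
    return sum(kept) / len(kept)

# An injected outlier (100) is trimmed away and does not bias the mean.
print(alpha_trimmed_mean([1.0, 2.0, 3.0, 4.0, 100.0], alpha=0.2))  # 3.0
```
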
3.2.3 Neighborhood Majority Vote
After each SU has collected the sensing reports from its immediate neighbors, the
SU calculates the α-trimmed mean µ̂(α, α) and conducts the spatial correlation test using
Eqs. (3.5)–(3.7) to compute v(i, j), replacing µ with µ̂(α, α) in Eq. (3.5). The
v(i, j) is the suspicious vote by SU i for SU j, where v(i, j) = −1 indicates that
SU i votes SU j as a malicious user. The v(i, j) is then disseminated to the neighboring
users. For instance, SU i calculates v(i, j) for all 1 ≤ j ≤ n using µ̂(α, α), and then
disseminates the vector v(i, j) (1 ≤ j ≤ n) to its neighbors.
After all SUs send and receive the suspicious votes v(i, j), the SUs
use a neighborhood majority vote to reach the final decision on which users are malicious. In the simple voting scheme of [48], SU i simply counts the votes from
all SUs: if more than half of the users vote SU j as a malicious user, then SU j is
considered a malicious user.
However, simply counting the votes sometimes does not work well, especially when
the number of users participating in the voting is small. Due to this limitation, a novel
voting approach that uses Ī(j, i) as a trust factor is proposed in this chapter. The
Ī(j, i) is calculated by Eq. (3.6) and is combined with the votes from each SU to reach a
final decision.

Algorithm 1: Malicious Users Detection
Input: ϵ and α
Output: list of malicious users
for each secondary user i = 1 to N do
    collect the other SUs' sensing reports
    calculate G using Eq. (3.2) and sort G
    calculate µ̂(α, α) using Eq. (3.11)
    conduct the spatial correlation test using Eq. (3.7)
    for each secondary user j = 1 to N (j ≠ i) do
        majority voting using Eq. (3.12)
        if V < 0 then
            user j is malicious
        else
            user j is benign
        end if
    end for
end for
After combining the trust factors and the votes regarding user j from the other SUs, a
final decision can be calculated by

V(i, j) = Σ_{i=1}^{n} Ī(j, i) · v(i, j)    (3.12)

where V(i, j) is the voting result for SU j from SU i's perspective, Ī(j, i) is the
Moran's I between SU j and SU i, and v(i, j) is the vote from SU i regarding SU j. If
the final result V(i, j) is less than 0, SU j is considered a malicious user by SU i.
Algorithm 1 describes our proposed malicious user detection scheme.
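The weighted vote of Eq. (3.12) can be sketched as follows; the trust factors and votes in the usage line are illustrative values:

```python
def weighted_vote(trust_factors, votes):
    """Combine suspicious votes on a user j as in Eq. (3.12):
    V = sum_i I_bar(j, i) * v(i, j), where trust_factors[i] holds
    I_bar(j, i) in [0, 1] and votes[i] is +1 (correlated) or -1
    (uncorrelated). A negative total flags user j as malicious."""
    total = sum(t * v for t, v in zip(trust_factors, votes))
    return total < 0

# Two weakly trusted "benign" votes are outweighed by one strongly
# trusted "malicious" vote: 0.2 + 0.1 - 0.9 < 0.
print(weighted_vote([0.2, 0.1, 0.9], [1, 1, -1]))  # True
```
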
3.3 Simulation Results
In this section, the performance of the proposed scheme is assessed through
MATLAB-based simulations. The simulation environment is a 5000 m × 5000 m area
in which one PU is located at the center and N SUs are deployed at random. In the
simulations, we let N = 20, 40, or 60. Two performance metrics are used: the
false positive rate and the false negative rate. The false positive rate is the proportion of
benign users deemed malicious, and the false negative rate is the proportion
of malicious users that escape detection. Simulation results are obtained from 10,000
rounds of detection.
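The two metrics can be computed per detection round as follows; the user sets in the usage example are hypothetical:

```python
def detection_rates(flagged, malicious, n_users):
    """False positive rate: the fraction of benign users wrongly flagged.
    False negative rate: the fraction of malicious users that escape
    detection. `flagged` and `malicious` are sets of user indices in
    range(n_users)."""
    benign = set(range(n_users)) - malicious
    fp_rate = len(flagged & benign) / len(benign)
    fn_rate = len(malicious - flagged) / len(malicious)
    return fp_rate, fn_rate

# Hypothetical round: users 7 and 9 are malicious; the detector flags 1 and 7.
fp, fn = detection_rates(flagged={1, 7}, malicious={7, 9}, n_users=20)
# fp = 1/18 (benign user 1 wrongly flagged), fn = 1/2 (attacker 9 missed)
```
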
The first set of simulations investigates the performance of three schemes: 1) a
scheme that does not trim a portion of the sensing reports and uses simple majority voting
[48], marked as “unrobust” in Figs. 3-1 and 3-2; 2) a scheme that trims a
portion of the sensing reports and uses simple majority voting; and 3) Algorithm 1.
The first two schemes do not use Moran's I as the trust factor when combining the
votes. The number of SUs is 20. The α, the portion of sensing reports to
be trimmed, is set to 10%. In total, 10% of the SUs are malicious and 10% are
misconfigured, whose behavior resembles that of malicious users. Fig. 3-1 illustrates the false
positive rates of the three schemes versus the threshold ϵ, while Fig. 3-2 shows the false
negative rates versus ϵ.
As shown in Figs. 3-1 and 3-2, Algorithm 1 achieves the best performance. The
reason is that Algorithm 1 trims a portion of the abnormal sensing reports to
calculate µ; therefore the spatial correlation test is more accurate, which enhances
the robustness of malicious user detection. Moreover, the majority voting scheme
using Moran's I as the trust factor also improves performance.
The second set of simulations investigates the false positive rate and false negative
rate under different percentages of malicious users. The number of SUs is 20; the
percentage of malicious users is set to 10%, 15%, and 20%, respectively; and α
is set to 10%. The percentage of misconfigured SUs is 10%. Fig. 3-3 illustrates the
false positive rate versus ϵ, while Fig. 3-4 shows the false negative rate versus ϵ.
Figure 3-1: False positive rate vs. ϵ with three schemes (Algorithm 1, simple voting, unrobust).

As shown in Figs. 3-3 and 3-4, the network with 10% malicious users achieves the
best performance. Two reasons account for this phenomenon. The first reason is
straightforward: the percentage of malicious users is the lowest among the three networks.
However, the second reason is more important: the percentage of malicious users
equals the trimmed percentage, which means that most of the abnormal reports are trimmed.
The third set of simulations investigates the false positive rate and false negative
rate under different network sizes. The number of SUs is set to 20, 40, and 60,
respectively. The percentage of malicious users is 10%, and α is set to 10%. The
percentage of misconfigured SUs is 10%. Fig. 3-5 illustrates the false positive rate
versus ϵ, while Fig. 3-6 shows the false negative rate versus ϵ.
As shown in Figs. 3-5 and 3-6, the false positive rate and false negative rate are
nearly the same for the three network sizes, i.e., the network size does not affect
the performance of our scheme. This set of simulations demonstrates that our scheme
achieves good scalability.
Figure 3-2: False negative rate vs. ϵ with three schemes (Algorithm 1, simple voting, unrobust).
3.4 Summary
In this chapter, we have analyzed the unique features of cognitive radio networks
and presented the challenges in designing malicious user detection schemes. We
have presented a decentralized scheme for detecting malicious users in cognitive radio
networks. The scheme exploits the spatial correlation of received signal strengths among
secondary users in close proximity. Whether a secondary user is malicious
is decided by the other secondary users through a majority vote. The false positive
rate and false negative rate are the two metrics used in designing the algorithm. Performance
evaluations demonstrate that our algorithm can detect malicious users with low false
positive and false negative rates.
Figure 3-3: False positive rate vs. ϵ for Algorithm 1 with different percentages of malicious users (10%, 15%, 20%).
Figure 3-4: False negative rate vs. ϵ for Algorithm 1 with different percentages of malicious users (10%, 15%, 20%).
Figure 3-5: False positive rate vs. ϵ for Algorithm 1 with different network sizes (20, 40, 60 SUs).
Figure 3-6: False negative rate vs. ϵ for Algorithm 1 with different network sizes (20, 40, 60 SUs).
Chapter 4

Density based and Conjugate Prior based Schemes to Countermeasure Spectrum Sensing Data Falsification (SSDF) Attacks in Cognitive Radio Networks
In this chapter, we propose two distributed schemes to countermeasure the SSDF
attack in cooperative spectrum sensing: one called density based SSDF detection
(DBSD) and the other called conjugate prior based SSDF detection (CoPD). To
achieve robust spectrum sensing, we focus on excluding abnormal sensing reports
rather than detecting malicious users. Both schemes treat the sensing reports as
samples of a random variable and then estimate the probability density of the random variable, using a kernel density estimator and a normal inverse chi-squared prior,
respectively. Each sensing report is then tested for normality. Once a sensing
report is deemed abnormal, it is excluded from the decision
making on the PU activity. We first present DBSD and the corresponding simulation results in Section 4.2. We then present CoPD and compare the performance
of DBSD and CoPD in Section 4.3.
This chapter is organized as follows. Section 4.1 describes the system model. Section 4.2 presents the density based SSDF detection scheme and the corresponding simulation results. Section 4.3 presents the conjugate prior based SSDF detection scheme and
the performance evaluation of both DBSD and CoPD schemes. Finally, Section 4.4
summarizes the chapter.
4.1 System Model
We consider a time-slotted cognitive radio network where PUs, benign SUs, and
attackers (malicious users) coexist. There are N SUs in total, which collaborate in
distributed spectrum sensing. Without loss of generality, a single PU is considered in
this study; nevertheless, our scheme can be extended to address multiple PUs.
All SUs use energy detection for local spectrum sensing, and the sensing reports
at different SUs are assumed independent. Although a hard
decision can decrease the communication overhead, [57] showed that soft-decision
combining of sensing reports achieves better sensing performance than hard-decision combining.
Therefore, the raw results from local spectrum sensing are exchanged among all SUs.
The received signal strength, P_i, at SU i can be expressed as follows [50]:

P_i = P_t − (10α log₁₀(d_i / d₀) + G_i + M_i)  (dB)    (4.1)

where P_t is the transmission power of the PU, α is the path loss exponent, d_i is the
distance from the PU to SU i, d₀ is the reference distance, G_i is the power loss due
to log-normal shadowing, and M_i is the multipath fading from the PU to SU
i. We assume d₀ = 1 meter in this chapter. The location of the PU is assumed
known to all SUs, and each SU also knows its own location. As a general
practice, the power loss due to log-normal shadowing, G_i, is modeled as
a Gaussian random variable with mean 0 and standard deviation σ, which has an
empirical value depending on the surroundings. It is reasonable to assume that the
channel bandwidth is much larger than the coherence bandwidth; therefore, the effect
of multipath fading M_i is negligible.
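A sketch of the propagation model of Eq. (4.1); the transmit power, path loss exponent, and the σ = 4 dB shadowing deviation are illustrative values, not parameters taken from the simulations:

```python
import math
import random

def received_power_db(pt_db, alpha, d, d0=1.0, sigma=4.0):
    """Received signal strength of Eq. (4.1) in dB: log-distance path
    loss plus log-normal shadowing G ~ N(0, sigma^2). Multipath fading
    is neglected, as assumed in the text. sigma = 4 dB is a hypothetical
    value; in practice it is empirical and environment-dependent."""
    g = random.gauss(0.0, sigma)
    return pt_db - (10.0 * alpha * math.log10(d / d0) + g)

# Five synthetic sensing reports for an SU 500 m from the PU.
random.seed(0)
samples = [received_power_db(pt_db=30.0, alpha=3.0, d=500.0) for _ in range(5)]
```
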
To make a decision on the PU activity, each SU collects sensing reports from
its neighbor SUs, uses the proposed DBSD scheme to exclude abnormal reports,
calculates the average value of the remaining sensing reports, and compares
this value to a PU detection threshold. We assume that there is a reliable and secure
end-to-end connection between SUs, i.e., the communication is error-free and cannot
be tampered with by attackers. This process repeats in each time slot at each node.
It is important to note that a benign SU's objective is to exclude abnormal sensing
reports rather than to identify specific attackers.
In this chapter, we assume that there are M inside attackers, i.e., malicious SUs,
in the network, since outside attackers can be effectively excluded from the network
by authentication mechanisms. We assume that M is relatively small compared
with N, so that the sensing reports from attackers do not dominate those
of benign SUs. The objective of the SSDF attackers is to mislead benign
SUs into making an incorrect decision on the PU activity. To achieve this goal, the
attackers manipulate their sensing reports. Specifically, when
the PU is active, attackers send out sensing reports with small PU signal energy;
in contrast, when the PU is inactive, the attackers send out sensing reports with
high PU signal energy. To avoid being detected by the network, the attackers can
adapt their attack strategies based on updated information about benign SUs' sensing
reports and collude with other attackers. It is worth noting that ill-functioned SUs
may generate incorrect sensing reports due to software or hardware failure. These
sensing reports are harmful to spectrum sensing and hence should also be excluded.
Therefore, we do not differentiate the sensing reports of attackers from those
of ill-functioned SUs.
In the case of a collaborative SSDF attack, the attackers first listen to the benign SUs'
information, exchange their own sensing information, and decide their sensing reports
collaboratively. The collaborative attacking strategy we consider here is termed
Going Against MAjority (GAMA). Each attacker shares its sensing information and,
in collaboration with the other attackers, decides against the majority sensing result. For
example, if two attackers sense the channel as busy and one attacker senses the
channel as idle, all three attackers report the channel as idle to the other SUs. The
idea behind this attacking strategy is that the sensing reports of the majority of attackers should
reflect the actual channel status. Therefore, when the attackers collaborate, they craft
their sensing reports to go against the true channel state, which may help them
manipulate other SUs into making a wrong decision.
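The GAMA strategy described above can be sketched as:

```python
def gama_reports(attacker_decisions):
    """Going Against MAjority (GAMA) sketch: the attackers pool their
    local busy/idle decisions and all report the opposite of their own
    majority, on the premise that the majority view reflects the true
    channel state."""
    busy = sum(1 for d in attacker_decisions if d == "busy")
    majority = "busy" if busy > len(attacker_decisions) / 2 else "idle"
    lie = "idle" if majority == "busy" else "busy"
    return [lie] * len(attacker_decisions)

# The example from the text: two attackers sense busy, one senses idle,
# so all three report idle.
print(gama_reports(["busy", "busy", "idle"]))  # ['idle', 'idle', 'idle']
```
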
4.2 Density based SSDF Detection (DBSD) Scheme and Simulation Results
In this section, we present a distributed scheme called density based SSDF detection (DBSD) to countermeasure the SSDF attack in cognitive radio networks. In
contrast to existing SSDF defense schemes whose aim is to identify malicious
users, we focus on excluding abnormal sensing reports. DBSD excludes all abnormal sensing reports, including those from both malicious users and
ill-functioned SUs, which improves the probability of successfully detecting the PU activity.
Therefore, DBSD can achieve robust spectrum sensing. In addition, we have
developed an approach to effectively test the normality of sensing reports.
4.2.1 Density based SSDF Detection (DBSD) Scheme
In this subsection, we describe our SSDF countermeasure scheme, DBSD. With
DBSD, after an SU has received the sensing reports from other SUs, these
reports are treated as random samples of the PU signal received at those
SUs, which can be seen as a random variable as indicated in Eq. (4.1). To develop
a general and robust approach to countermeasure SSDF attacks, we do not assume
any knowledge of the probability density of this random variable. Instead, we use
a technique called kernel density estimation to estimate the probability density of
the received PU signal based on the random samples (sensing reports). Then we
test the abnormality of each sensing report using a confidence interval derived from
the probability density function. If the test result is abnormal, the sensing report is
deemed to come from an attacker or an ill-functioned SU and is discarded. Next we discuss how
to estimate the probability density based on sensing reports and how to construct the
confidence interval.
We use the kernel density estimator [60] to estimate the probability density. Consider an SU that has n neighbors in its direct communication range and has received n sensing reports from them. Given n different sensing samples x_1, . . . , x_n, the kernel density estimator, denoted as q(x), is given as follows:
q(x) = \frac{1}{n} \sum_{i=1}^{n} \frac{1}{h^m} K\left(\frac{x - x_i}{h}\right)    (4.2)
where K(·) is a kernel function and h is the bandwidth. In this chapter, we consider a cognitive radio network in a 2-dimensional plane; therefore, we have m = 2.
We use the PU signal energy detected by an SU as the kernel function, i.e., we let K(·) = P_i. As described in Section 4.1, the power loss due to shadowing fading can be modeled as a Gaussian random variable, i.e., G_i ∼ N(0, σ²). Therefore, the PU signal energy detected by an SU can be modeled by a Gaussian distribution, i.e., P_i ∼ N(µ_i, σ²). Hence we have
K(y) = P_i(y) = \frac{1}{\sigma\sqrt{2\pi}}\, e^{-\frac{(y-\mu_i)^2}{2\sigma^2}},    (4.3)

where \mu_i = P_t - 10\alpha \log_{10}(d_i).
For ease of description, we let

y_i = \frac{x - x_i}{h}.

Then Eq. (4.2) can be rewritten as

q(x) = \frac{1}{n} \sum_{i=1}^{n} \frac{1}{h^2 \sigma\sqrt{2\pi}}\, e^{-\frac{(y_i-\mu_i)^2}{2\sigma^2}}.    (4.4)
Since we have used the Gaussian density function as the kernel function in Eq. (4.3), the optimal choice of the bandwidth h is given as follows:

h = \left( \frac{4\hat{\sigma}^5}{3n} \right)^{\frac{1}{5}},    (4.5)

where n is the number of samples and \hat{\sigma} is the standard deviation of the sensing samples x_1, . . . , x_n.
The PU signal energy detected at an SU depends on the distance from the SU to the PU. The mean of the PU signal energy detected at SU i is

\mu_i = P_t - 10\alpha \log_{10}(d_i).    (4.6)

Therefore, the mean of the probability distribution represented by the kernel density estimator in Eq. (4.4), denoted as µ, can be calculated as
Algorithm 2: Density Based SSDF Detection at an SU
1: Input: β
2: Output: A list of normal sensing reports in set X
3: Collect neighbor SUs' sensing reports, x1, . . . , xn
4: Compute the standard deviation σ̂ of samples x1, . . . , xn
5: Calculate the bandwidth h using Eq. (4.5)
6: Calculate µ using Eq. (4.7)
7: Let X = {x1, . . . , xn}
8: for j = 1 to n do
9:   Test sensing report xj using Eq. (4.8)
10:  if test result is abnormal then
11:    X = X \ {xj}   {sensing report xj (from SU j) is excluded}
12:  end if
13: end for
\mu = \frac{1}{n h^2} \sum_{i=1}^{n} \mu_i.    (4.7)
As discussed earlier, the power loss due to shadowing fading can be modeled as a Gaussian random variable. Therefore, the PU signal energy detected by an SU follows a Gaussian distribution. In other words, the underlying probability density that we are trying to estimate in Eq. (4.4) follows a Gaussian distribution with mean µ and standard deviation σ. As such, from µ and σ, we can construct a 100(1 − β)% confidence interval [µ − z_{β/2}σ, µ + z_{β/2}σ], where z_{β/2} is the (1 − β/2) quantile of the standard Gaussian distribution, i.e., Pr(Z ≤ z_{β/2}) = 1 − β/2, where Z is a standard Gaussian random variable. With this confidence interval, we can test the abnormality of a sensing report as follows.
T(x_i) = \begin{cases} \text{normal}, & \text{if } x_i \in \left[\mu - z_{\beta/2}\sigma,\; \mu + z_{\beta/2}\sigma\right] \\ \text{abnormal}, & \text{otherwise} \end{cases}    (4.8)
Finally, we describe our density based SSDF detection scheme in Algorithm 2.
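As a concrete illustration of Eqs. (4.5) and (4.8), the two computational steps at the heart of Algorithm 2 can be sketched in Python as follows. This is a minimal sketch with our own function names, not the dissertation's implementation; µ is assumed to have been obtained from Eq. (4.7) using the per-SU means of Eq. (4.6).

```python
import math

def silverman_bandwidth(samples):
    """Rule-of-thumb bandwidth h of Eq. (4.5) for a Gaussian kernel."""
    n = len(samples)
    mean = sum(samples) / n
    # sigma-hat: standard deviation of the sensing samples x_1..x_n
    sigma_hat = math.sqrt(sum((x - mean) ** 2 for x in samples) / n)
    return (4.0 * sigma_hat ** 5 / (3.0 * n)) ** 0.2

def dbsd_filter(reports, mu, sigma, z):
    """Eq. (4.8): keep only reports inside [mu - z*sigma, mu + z*sigma]."""
    lo, hi = mu - z * sigma, mu + z * sigma
    return [x for x in reports if lo <= x <= hi]
```

For example, with µ = 10, σ = 1, and z_{β/2} = 1.96 (β = 0.05), a report of 30 falls outside [8.04, 11.96] and is discarded.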
4.2.2 Simulation Results
In this subsection, we evaluate the performance of the proposed DBSD scheme through simulations. The cognitive radio network is assumed to be a circular area with a radius of 1000 meters. One PU is located at the center and N SUs are deployed at random locations. In the simulations, the path loss exponent α is assumed to be 2, and the PU transmission power Pt is assumed to be 20. The standard deviation of the power loss due to shadowing fading, σ, is assumed to be 1. The results for other values of σ have similar trends and are omitted due to the space limit. If SU i is a benign SU, its sensing report is generated as a Gaussian random variable with mean µi from Eq. (4.6) and standard deviation σ. If SU i is a malicious user, its sensing report is generated using an enlarged mean θµi, where θ > 1 is called the abnormality factor. The abnormality factor θ is set to 1.1 in the simulations unless otherwise noted. In the simulations, we assume that the PU is active. The results for detecting that the PU is not active are similar and omitted due to the space limit. The simulation results are obtained from 10000 rounds of simulations using different seeds. We use the success probability of detecting the PU's activity as the performance metric.
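As an illustration, the report-generation model described above can be coded as follows; this is our own minimal sketch of the stated model (Pt = 20, α = 2, σ = 1, θ = 1.1), not the simulator used for the figures.

```python
import math
import random

def generate_report(d_i, malicious, pt=20.0, alpha=2.0, sigma=1.0,
                    theta=1.1, rng=random):
    """One sensing report for an SU at distance d_i from the PU.

    A benign SU reports a Gaussian with mean mu_i from Eq. (4.6);
    a malicious SU enlarges the mean by the abnormality factor theta.
    """
    mu_i = pt - 10.0 * alpha * math.log10(d_i)  # Eq. (4.6)
    mean = theta * mu_i if malicious else mu_i
    return rng.gauss(mean, sigma)
```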
Figs. 4-1, 4-2, and 4-3 illustrate the success probability of DBSD in detecting the PU's activity versus β (the corresponding confidence interval is 100(1 − β)%), with a total of 40, 60, and 80 SUs, respectively. In this experiment, 15%, 20%, and 25% of the SUs are simulated as malicious users launching the SSDF attack. We can see that when β increases, i.e., when the confidence interval narrows, the PU detection success probability increases. This is because a narrower confidence interval classifies more sensing reports as abnormal, and hence the abnormal sensing reports are more likely to be excluded. Therefore the decision making on the PU activity is less impacted by the sensing reports from malicious users. In particular, when β ≥ 0.075, i.e., when we use a 92.5% or narrower confidence interval, the PU detection success probability is close to 1.
Next we examine the PU detection success probability with a fixed number of SUs (N = 40, N = 60, and N = 80, respectively) but different percentages of malicious users. The results are plotted in Figs. 4-4, 4-5, and 4-6. We can see that the PU detection success probability has a similar trend as in Fig. 4-1.
Figs. 4-7, 4-8, 4-9, and 4-10 illustrate the PU detection success probability as a function of the percentage of malicious users, with N = 40, N = 60, N = 80, and β = 0.1, respectively. The PU detection success probability decreases only slowly as the percentage of malicious users increases. This indicates that DBSD is a robust scheme that is resilient to an increasing number of malicious users.
The PU detection success probability versus the number of SUs (N) is plotted in Fig. 4-11. We can see that with a larger number of SUs, the PU detection success probability moderately improves. On the other hand, DBSD still performs well even when the number of SUs is small.
Finally, we examine the PU detection success probability versus the abnormality factor θ that malicious users use to generate abnormal sensing reports. The results are plotted in Figs. 4-12 to 4-17. We can see that DBSD is very effective in countering the SSDF attack, as indicated by the high PU detection success probability as the abnormality factor θ increases. For instance, when θ = 2, the PU detection success probability is very close to 1. In fact, even when θ is smaller, the PU detection success probability remains very high.
Figure 4-1: PU detection success probability versus β, with 15% malicious users for DBSD. [Curves: N = 40, 60, 80.]
Figure 4-2: PU detection success probability versus β, with 20% malicious users for DBSD. [Curves: N = 40, 60, 80.]
Figure 4-3: PU detection success probability versus β, with 25% malicious users for DBSD. [Curves: N = 40, 60, 80.]
Figure 4-4: PU detection success probability versus β, with N = 40 for DBSD. [Curves: 15%, 20%, 25% malicious users.]
Figure 4-5: PU detection success probability versus β, with N = 60 for DBSD. [Curves: 15%, 20%, 25% malicious users.]
Figure 4-6: PU detection success probability versus β, with N = 80 for DBSD. [Curves: 15%, 20%, 25% malicious users.]
Figure 4-7: PU detection success probability versus the percentage of malicious users, with N = 40 for DBSD. [Curves: β = 0.025, 0.05, 0.1.]
Figure 4-8: PU detection success probability versus the percentage of malicious users, with N = 60 for DBSD. [Curves: β = 0.025, 0.05, 0.1.]
Figure 4-9: PU detection success probability versus the percentage of malicious users, with N = 80 for DBSD. [Curves: β = 0.025, 0.05, 0.1.]
Figure 4-10: PU detection success probability versus the percentage of malicious users, with β = 0.1 for DBSD. [Curves: N = 40, 60, 80.]
Figure 4-11: PU detection success probability versus the number of SUs (N), with β = 0.1 for DBSD. [Curves: 15%, 20%, 25% malicious users.]
Figure 4-12: PU detection success probability versus the abnormality factor θ, with β = 0.025 for DBSD. [Curves: N = 40, 60, 80.]
Figure 4-13: PU detection success probability versus the abnormality factor θ, with β = 0.05 for DBSD. [Curves: N = 40, 60, 80.]
Figure 4-14: PU detection success probability versus the abnormality factor θ, with β = 0.1 for DBSD. [Curves: N = 40, 60, 80.]
Figure 4-15: PU detection success probability versus the abnormality factor θ, with N = 40 for DBSD. [Curves: β = 0.025, 0.05, 0.1.]
Figure 4-16: PU detection success probability versus the abnormality factor θ, with N = 60 for DBSD. [Curves: β = 0.025, 0.05, 0.1.]
Figure 4-17: PU detection success probability versus the abnormality factor θ, with N = 80 for DBSD. [Curves: β = 0.025, 0.05, 0.1.]
4.3 Conjugate Prior based Detection (CoPD) Scheme and Simulation Results
In this section, we present another distributed scheme called conjugate prior based SSDF detection (CoPD) to defend against the SSDF attack in cognitive radio networks. Similar to DBSD, we focus on excluding abnormal sensing reports to improve spectrum sensing performance rather than on detecting malicious users. The scheme treats the sensing reports as samples of a random variable, and then reconstructs the probability density of the random variable using a technique known as a conjugate prior. Once a sensing report is deemed abnormal, it is excluded from decision making on the PU activity. Therefore, CoPD can achieve robust spectrum sensing under the SSDF attack.

Both CoPD and DBSD tackle the SSDF attack using the same system models in cooperative spectrum sensing. DBSD and CoPD treat the sensing reports as samples of a random variable and use different techniques to reconstruct the probability density function of the random variable. After the probability density function is reconstructed, each sensing report from other SUs is tested for normality based on a confidence interval. Therefore, these two SSDF defense schemes are directly comparable. In the simulations, we compare the spectrum sensing performance of DBSD and CoPD.
4.3.1 Conjugate Prior based Detection (CoPD) Scheme
In this subsection, we describe our conjugate prior based SSDF countermeasure scheme. Similar to DBSD, all the received sensing reports are treated as random samples of the PU signal received at those SUs. Also, we do not assume any knowledge of the probability density of this random variable. Instead of kernel density estimation, we use the normal inverse chi-squared prior to reconstruct the probability density of the received PU signal. After reconstructing the probability density function of the received signals, we again construct a confidence interval derived from the probability density function to test the abnormality of each sensing report. If the test result is abnormal, the sensing report is discarded; otherwise, it is used for spectrum sensing. Next we discuss how to reconstruct the probability density function using the normal inverse chi-squared prior.
The likelihood for the Gaussian distribution based on n sensing reports is:

p(Y \mid \mu, \sigma^2) = \prod_{i=1}^{n} p(y_i \mid \mu, \sigma^2)
= \prod_{i=1}^{n} \left(\frac{1}{2\pi\sigma^2}\right)^{\frac{1}{2}} \exp\left\{-\frac{1}{2\sigma^2}(y_i - \mu)^2\right\}
= (2\pi\sigma^2)^{-\frac{n}{2}} \exp\left\{-\frac{1}{2\sigma^2}\sum_{i=1}^{n}(y_i - \mu)^2\right\},    (4.9)
where Y = [y_1, . . . , y_n] is the vector of sensing reports we collect. The empirical mean and variance are defined as

\bar{y} = \frac{1}{n}\sum_{i=1}^{n} y_i, \quad \text{and} \quad s^2 = \frac{1}{n}\sum_{i=1}^{n} (y_i - \bar{y})^2.
Therefore, we can rewrite the term in the exponent of Eq. (4.9) as follows:

\sum_{i=1}^{n}(y_i - \mu)^2 = \sum_{i=1}^{n}\left[(y_i - \bar{y}) - (\mu - \bar{y})\right]^2
= \sum_{i=1}^{n}(y_i - \bar{y})^2 + \sum_{i=1}^{n}(\bar{y} - \mu)^2 - 2\sum_{i=1}^{n}(y_i - \bar{y})(\mu - \bar{y})
= ns^2 + n(\bar{y} - \mu)^2,

since

\sum_{i=1}^{n}(y_i - \bar{y})(\mu - \bar{y}) = (\mu - \bar{y})\left(\sum_{i=1}^{n} y_i - n\bar{y}\right) = (\mu - \bar{y})(n\bar{y} - n\bar{y}) = 0.
Hence the likelihood in Eq. (4.9) becomes

p(Y \mid \mu, \sigma^2) = \frac{1}{(2\pi)^{\frac{n}{2}}} (\sigma^2)^{-\frac{n}{2}} \exp\left(-\frac{1}{2\sigma^2}\left[ns^2 + n(\bar{y} - \mu)^2\right]\right)
\propto \left(\frac{1}{\sigma^2}\right)^{\frac{n}{2}} \exp\left(-\frac{n}{2\sigma^2}(\bar{y} - \mu)^2\right) \exp\left(-\frac{ns^2}{2\sigma^2}\right).
We need to estimate µ and σ based on all the sensing reports Y. Assuming the Gaussian and inverse chi-squared prior with hyperparameters µ0, κ0, ν0, and σ0, the posterior can be written as

p(\mu, \sigma^2 \mid Y) \propto N\left(\mu \mid \mu_0, \frac{\sigma^2}{\kappa_0}\right) \chi^{-2}(\sigma^2 \mid \nu_0, \sigma_0^2)\, p(Y \mid \mu, \sigma^2)
= \left[\sigma^{-1}(\sigma^2)^{-\left(\frac{\nu_0}{2}+1\right)} \exp\left(-\frac{1}{2\sigma^2}\left[\nu_0\sigma_0^2 + \kappa_0(\mu_0-\mu)^2\right]\right)\right] \times \left[(\sigma^2)^{-\frac{n}{2}} \exp\left(-\frac{1}{2\sigma^2}\left[ns^2 + n(\bar{y}-\mu)^2\right]\right)\right]    (4.10)
\propto \sigma^{-3}(\sigma^2)^{-\frac{\nu_n}{2}} \exp\left(-\frac{1}{2\sigma^2}\left[\nu_n\sigma_n^2 + \kappa_n(\mu_n-\mu)^2\right]\right)
= N\left(\mu \mid \mu_n, \frac{\sigma^2}{\kappa_n}\right) \chi^{-2}(\sigma^2 \mid \nu_n, \sigma_n^2).    (4.11)
We denote S_0 = \nu_0\sigma_0^2 and S_n = \nu_n\sigma_n^2. Grouping the terms inside the exponential part of Eq. (4.10), we get

S_0 + \kappa_0(\mu_0-\mu)^2 + ns^2 + n(\bar{y}-\mu)^2 = (S_0 + \kappa_0\mu_0^2 + ns^2 + n\bar{y}^2) + \mu^2(\kappa_0+n) - 2(\kappa_0\mu_0 + n\bar{y})\mu.
Through coefficient matching for µ and σ² in Eqs. (4.10) and (4.11), we have

\nu_n = \nu_0 + n,    (4.12)
\kappa_n = \kappa_0 + n,    (4.13)
\kappa_n\mu_n = \kappa_0\mu_0 + n\bar{y}, \text{ and}    (4.14)
S_n + \kappa_n\mu_n^2 = S_0 + \kappa_0\mu_0^2 + ns^2 + n\bar{y}^2.    (4.15)
From Eqs. (4.12)–(4.15), we can get

\kappa_n = \kappa_0 + n,
\mu_n = \frac{\kappa_0\mu_0 + n\bar{y}}{\kappa_n} = \frac{\kappa_0\mu_0 + n\bar{y}}{\kappa_0 + n},
\nu_n = \nu_0 + n, \text{ and}
\sigma_n^2 = \frac{1}{\nu_n}\left(\nu_0\sigma_0^2 + \sum_i (y_i - \bar{y})^2 + \frac{n\kappa_0}{\kappa_0+n}(\mu_0 - \bar{y})^2\right) = \frac{1}{\nu_0+n}\left(\nu_0\sigma_0^2 + \sum_i (y_i - \bar{y})^2 + \frac{n\kappa_0}{\kappa_0+n}(\mu_0 - \bar{y})^2\right).
Finally, the parameters µ and σ of the random variable representing the PU signal are

\mu = \mu_n,    (4.16)
\sigma^2 = \frac{\nu_n}{\nu_n - 2}\,\sigma_n^2.    (4.17)
From µ and σ, we can construct a 100(1 − β)% confidence interval [µ − z_{β/2}σ, µ + z_{β/2}σ], where z_{β/2} is the (1 − β/2) quantile of the standard Gaussian distribution, i.e., Pr(Z ≤ z_{β/2}) = 1 − β/2, where Z is a standard Gaussian random variable. With this confidence interval, we can test the abnormality of a sensing report as follows.
T(y_i) = \begin{cases} \text{normal}, & \text{if } y_i \in \left[\mu - z_{\beta/2}\sigma,\; \mu + z_{\beta/2}\sigma\right] \\ \text{abnormal}, & \text{otherwise} \end{cases}    (4.18)

Finally, we describe the CoPD detection scheme in Algorithm 3.
Algorithm 3: Conjugate Prior based SSDF Detection (CoPD) at an SU
1: Input: Sensing report set Y and β
2: Output: A list of normal sensing reports in set Y
3: Collect neighbor SUs' sensing reports, y1, . . . , yn
4: Compute the µ and σ of samples y1, . . . , yn using Eq. (4.16) and Eq. (4.17)
5: Let Y = {y1, . . . , yn}
6: for j = 1 to n do
7:   Test sensing report yj using Eq. (4.18)
8:   if test result is abnormal then
9:     Y = Y \ {yj}   {sensing report yj (from SU j) is excluded}
10:  end if
11: end for
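The posterior update of Eqs. (4.12)–(4.17) and the test of Eq. (4.18) can be sketched in Python as follows. The function names and the hyperparameter values in the example are ours; the sketch assumes νn > 2 so that Eq. (4.17) is well defined.

```python
import math

def copd_posterior(reports, mu0, kappa0, nu0, sigma0_sq):
    """Normal inverse chi-squared posterior update, Eqs. (4.12)-(4.17).

    Returns (mu, sigma) for the random variable representing the PU signal.
    """
    n = len(reports)
    ybar = sum(reports) / n
    ss = sum((y - ybar) ** 2 for y in reports)
    nu_n = nu0 + n                                       # Eq. (4.12)
    kappa_n = kappa0 + n                                 # Eq. (4.13)
    mu_n = (kappa0 * mu0 + n * ybar) / kappa_n           # Eq. (4.14)
    sigma_n_sq = (nu0 * sigma0_sq + ss
                  + n * kappa0 / (kappa0 + n) * (mu0 - ybar) ** 2) / nu_n
    mu = mu_n                                            # Eq. (4.16)
    sigma = math.sqrt(nu_n / (nu_n - 2.0) * sigma_n_sq)  # Eq. (4.17)
    return mu, sigma

def copd_filter(reports, mu, sigma, z):
    """Eq. (4.18): keep reports inside the 100(1 - beta)% interval."""
    return [y for y in reports if mu - z * sigma <= y <= mu + z * sigma]
```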
4.3.2 Simulation Results
In this subsection, we evaluate the performance of DBSD and CoPD through simulations. To compare the spectrum sensing performance of CoPD with that of DBSD, we use the same simulation environment as in Section 4.2.2: The cognitive radio network is assumed to be a circular area with a radius of 1000 meters. One PU is located at the center and N SUs are deployed at random locations. In the simulations, the path loss exponent α is assumed to be 2, and the PU transmission power Pt is assumed to be 20. The standard deviation of the power loss due to shadowing fading, σ, is assumed to be 1. The results for other values of σ have similar trends and are omitted due to the space limit. If SU i is a benign SU, its sensing report is generated as a Gaussian random variable with mean µi from Eq. (4.6) and standard deviation σ. If SU i is a malicious user, its sensing report is generated using an enlarged mean θµi, where θ > 1 is called the abnormality factor. The abnormality factor θ is set to 1.1 in the simulations unless otherwise noted. In the simulations, we assume that the PU is active. The results for detecting that the PU is not active are similar and omitted due to the space limit. The simulation results are obtained from 10000 rounds of simulations using different seeds. We use the success probability of detecting the PU's activity as the performance metric.
Figs. 4-18, 4-19, and 4-20 illustrate the success probability of CoPD in detecting the PU's activity versus β (the corresponding confidence interval is 100(1 − β)%), with a total of 40, 60, and 80 SUs, respectively. In this experiment, 15%, 20%, and 25% of the SUs are simulated as malicious users launching the SSDF attack. We can see that when β increases, i.e., when the confidence interval narrows, the PU detection success probability increases. This is because a narrower confidence interval classifies more sensing reports as abnormal, and hence the abnormal sensing reports are more likely to be excluded. Therefore the decision making on the PU activity is less impacted by the sensing reports from malicious users. In particular, when β ≥ 0.075, i.e., when we use a 92.5% or narrower confidence interval, the PU detection success probability is close to 1. From the figures, we can see that CoPD achieves better performance than DBSD.
Next we examine the PU detection success probability with a fixed number of SUs (N = 40, N = 60, and N = 80, respectively) but different percentages of malicious users. The results are plotted in Figs. 4-21, 4-22, and 4-23. We can see that the PU detection success probability has a similar trend as in Fig. 4-18. When β ≥ 0.075, the PU detection success probabilities for both CoPD and DBSD are close to 1. However, when β < 0.1, we can see that CoPD outperforms DBSD.
Figs. 4-24, 4-25, 4-26, and 4-27 illustrate the PU detection success probability as a function of the percentage of malicious users, with N = 40, N = 60, N = 80, and β = 0.1, respectively. The PU detection success probability decreases only slowly as the percentage of malicious users increases. This indicates that CoPD is a robust scheme that is resilient to an increasing number of malicious users.
The PU detection success probability versus the number of SUs (N) is plotted in Fig. 4-28. We can see that with a larger number of SUs, the PU detection success probability moderately improves. On the other hand, CoPD still achieves better performance
Figure 4-18: PU detection success probability versus β, with 15% malicious users for CoPD and DBSD. [Curves: DBSD and CoPD, each with N = 40 and N = 60.]
than DBSD even when the number of SUs is small.
Finally, we examine the PU detection success probability versus the abnormality factor θ that malicious users use to generate abnormal sensing reports. The results are plotted in Figs. 4-29 to 4-34. We can see that CoPD is more effective than DBSD in countering the SSDF attack, as indicated by the high PU detection success probability as the abnormality factor θ increases. For instance, when θ = 2, the PU detection success probability is very close to 1. In fact, even when θ is smaller, the PU detection success probability is still very high.
Figure 4-19: PU detection success probability versus β, with 20% malicious users for CoPD and DBSD. [Curves: DBSD and CoPD, each with N = 40 and N = 60.]
Figure 4-20: PU detection success probability versus β, with 25% malicious users for CoPD and DBSD. [Curves: DBSD and CoPD, each with N = 40 and N = 60.]
Figure 4-21: PU detection success probability versus β, with N = 40 for CoPD and DBSD. [Curves: DBSD and CoPD, each with 20% and 25% malicious users.]
Figure 4-22: PU detection success probability versus β, with N = 60 for CoPD and DBSD. [Curves: DBSD and CoPD, each with 20% and 25% malicious users.]
Figure 4-23: PU detection success probability versus β, with N = 80 for CoPD and DBSD. [Curves: DBSD and CoPD, each with 20% and 25% malicious users.]
Figure 4-24: PU detection success probability versus the percentage of malicious users, with N = 40 for CoPD and DBSD. [Curves: DBSD and CoPD, each with β = 0.025 and β = 0.05.]
Figure 4-25: PU detection success probability versus the percentage of malicious users, with N = 60 for CoPD and DBSD. [Curves: DBSD and CoPD, each with β = 0.025 and β = 0.05.]
Figure 4-26: PU detection success probability versus the percentage of malicious users, with N = 80 for CoPD and DBSD. [Curves: DBSD and CoPD, each with β = 0.025 and β = 0.05.]
Figure 4-27: PU detection success probability versus the percentage of malicious users, with β = 0.1 for CoPD and DBSD. [Curves: DBSD and CoPD, each with N = 40 and N = 60.]
Figure 4-28: PU detection success probability versus the number of SUs (N), with β = 0.1 for CoPD and DBSD. [Curves: DBSD and CoPD, each with 20% and 25% malicious users.]
Figure 4-29: PU detection success probability versus the abnormality factor θ, with β = 0.025 for CoPD and DBSD. [Curves: DBSD and CoPD, each with N = 40 and N = 60.]
Figure 4-30: PU detection success probability versus the abnormality factor θ, with β = 0.05 for CoPD and DBSD. [Curves: DBSD and CoPD, each with N = 40 and N = 60.]
Figure 4-31: PU detection success probability versus the abnormality factor θ, with β = 0.1 for CoPD and DBSD. [Curves: DBSD and CoPD, each with N = 40 and N = 60.]
Figure 4-32: PU detection success probability versus the abnormality factor θ, with N = 60 for CoPD and DBSD. [Curves: DBSD and CoPD, each with β = 0.025 and β = 0.05.]
Figure 4-33: PU detection success probability versus the abnormality factor θ, with N = 60 for CoPD and DBSD. [Curves: DBSD and CoPD, each with β = 0.025 and β = 0.05.]
Figure 4-34: PU detection success probability versus the abnormality factor θ, with N = 60 for CoPD and DBSD. [Curves: DBSD and CoPD, each with β = 0.025 and β = 0.05.]
4.4 Summary
In this chapter, we have proposed a density based SSDF detection (DBSD) scheme and a conjugate prior based SSDF detection (CoPD) scheme to achieve robust spectrum sensing in cognitive radio networks. Both CoPD and DBSD tackle the same security problem, i.e., the spectrum sensing data falsification attack, in cooperative spectrum sensing. Specifically, DBSD and CoPD exclude abnormal sensing reports in cooperative spectrum sensing to prevent malicious users from misleading other secondary users in detecting the PU activity. Simulation results indicate that using the proposed DBSD and CoPD schemes, secondary users can achieve very good performance in cooperative spectrum sensing. In addition, we have developed an approach to effectively test the normality of sensing reports. From the simulation results, we can see that when the abnormality factor is small, DBSD outperforms CoPD in PU detection rate. However, as the abnormality factor increases, CoPD achieves better spectrum sensing performance than DBSD.
Chapter 5
A Game-Theoretical Anti-Jamming Scheme for Cognitive Radio Networks
In this chapter, we focus on the jamming attack in cognitive radio networks and propose a scheme to counter this attack. We formulate the jamming-hopping process as a Markov Decision Process and propose a Policy Iteration scheme, together with an analysis of its computational complexity. Although this scheme is effective, it may be computationally prohibitive. To reduce the computational complexity, we propose a game-theoretical anti-jamming (GTAS) scheme for SUs. We demonstrate through extensive simulations that our proposed scheme can achieve a high payoff.
This chapter is organized as follows. Section 5.1 describes the system model. Section 5.2 proposes our anti-jamming scheme. Section 5.3 presents simulation results. Finally, Section 5.4 concludes the chapter.
5.1 System Model
5.1.1 Channel Model
In this chapter, we consider a cognitive radio network with a number of PUs, where each PU is on a different channel. The spectrum is divided into N channels, and the statistics of the channels are independent of each other. The channel activity is modeled as a discrete-time Markov process. Each channel has two states: 1) state 0 (idle), indicating that the PU is inactive on this channel; and 2) state 1 (busy), indicating that the PU is active on the channel. The transition probability from state 0 to state 1 is denoted as p01, while the transition probability from state 1 to state 0 is denoted as p10.

We assume that initially all N channels are idle. The probability that the PU is idle/busy on each channel at different time slots can be calculated recursively.
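This recursion can be sketched as follows; a minimal illustration under the stated assumption that the channel starts idle, with our own function name.

```python
def idle_probability(p01, p10, k):
    """Probability that a channel is idle at time slot k.

    The channel is a two-state discrete-time Markov chain with transition
    probabilities P(0->1) = p01 and P(1->0) = p10, starting idle at slot 0.
    """
    p_idle = 1.0
    for _ in range(k):
        # An idle channel stays idle w.p. 1 - p01; a busy one frees w.p. p10.
        p_idle = p_idle * (1.0 - p01) + (1.0 - p_idle) * p10
    return p_idle
```

For large k the recursion converges to the stationary idle probability p10/(p01 + p10).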
5.1.2 Secondary User Model
Let H denote the number of SUs in the network. We assume that all SUs operate in a time-slotted mode. To avoid jamming, SUs periodically change their operating channels. Each SU uses a small interval at the beginning of each time slot for spectrum sensing. During this sensing interval, each SU detects the presence of the PU in all of the N channels and chooses up to L accessible channels for communication based on our anti-jamming scheme. We assume that all SUs can detect whether the channels they have accessed are jammed by the attacker at the end of each time slot, and that every SU keeps a history of spectrum sensing and successful/jammed communication. Due to the space limit, we do not go into the details of how to detect that a channel is jammed, as this is itself a large topic; the interested reader is referred to [61].
In each time slot, SUs decide whether to transmit on each channel. For each SU, its activity on a channel is one of the following three possibilities: 1) a successful transmission; 2) an unsuccessful transmission, i.e., being jammed; or 3) no transmission. Since these three activities yield different payoffs, SUs need to choose an action in each channel that maximizes the system's expected payoff. This payoff is based on the history of the channel. A successful packet transmission pays U to the SUs for each channel, where U is the utility for a single channel, while jamming in a channel costs the SUs C. The payoff for no transmission is always zero. We assume that the utility and the cost of each channel are constant and the same for every channel.
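The per-channel payoff rule above can be encoded directly; the following is an illustrative sketch with our own names, where each accessed channel yields U on success, −C when jammed, and 0 when unused.

```python
def slot_payoff(outcomes, utility, cost):
    """Total SU payoff over the accessed channels in one time slot.

    outcomes: per-channel results, each 'success', 'jammed', or 'idle'.
    """
    table = {'success': utility, 'jammed': -cost, 'idle': 0.0}
    return sum(table[o] for o in outcomes)
```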
5.1.3 Attacker Model
Without loss of generality, we consider one external malicious attacker. This attacker is neither authenticated by nor associated with the cognitive radio network. The attacker can launch a jamming attack to degrade spectrum utilization. Due to the jamming attack, a channel either cannot be accessed, or the SNR on this channel is heavily deteriorated. There are other possible causes of reduced SNR; however, we focus on the jamming attack in this chapter. How to distinguish other causes of reduced SNR from the jamming attack will be one of our future directions. Both the SUs and the attacker are assumed to follow a time-slotted access scheme. The attacker's scheme is characterized as follows. First, it chooses a set of channels and performs spectrum sensing for a certain duration, called the spectrum sensing duration in this chapter. If it detects that the PU is active in a channel, or that no SU occupies the channel (no signal on the channel) after the spectrum sensing duration, the attacker hops to a new set of channels and performs sensing again, until it finds some channel(s) occupied by one or several SUs, and then launches the jamming attack on these channels. Note that while the attacker tries to find the channels of the SU(s) through spectrum sensing and hopping, the SUs are also dynamically hopping channels. Therefore, the attacker may not be able to find the channels occupied by the SU(s) within a certain time to launch the jamming attack. In each time slot, the attacker can jam up to J channels.
5.2 A Game-Theoretical Anti-jamming Scheme
In this section, we present the GTAS scheme for countering jamming attacks.
5.2.1 Game Formulation
We formulate the jamming and anti-jamming interaction between the attacker and the SUs as a game. There are two types of players in this game: the SUs and the attacker. The strategy set of this game is the set of channels for access, which is finite. Therefore, according to [62], this game has a Nash equilibrium. Moreover, in our model, the two-player game is a zero-sum game, since the gain of one player is a corresponding loss for the other player.
Let s denote the state of all channels in a time slot k, indicating whether each
channel is accessible by SUs. Let S denote the set of all possible states. Both SUs
and the attacker have the knowledge of the state information since they all sense the
channels at the beginning of each time slot. SUs have a finite action set A(s), which
depends on the state s of the channels. At time slot 0, each SU randomly chooses L
accessible channels for communication. After that, in each time slot, the SU selects
an action from A(s) based on the state s of the channels, e.g., stay on certain channels
or hop to other channels. At a given time slot, let M denote the number of channels occupied by PUs. Each SU selects L (L ≤ N − M) channels for communication from the N − M accessible channels. Therefore, there are C_{N−M}^{L} possible selections in total, and each selection corresponds to an action. Hence, A(s) = {a_1, . . . , a_i, . . . , a_{C_{N−M}^{L}}}, where a_i denotes the action for the i-th selection of L channels.
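To make the size of the action set concrete, the following sketch (ours, not from the dissertation) enumerates A(s) directly with `itertools.combinations`; the parameter values N = 10, M = 1, L = 2 and the channel indexing are illustrative assumptions.

```python
from itertools import combinations
from math import comb

def action_set(accessible_channels, L):
    """Enumerate every L-channel selection over the accessible channels.
    Each selection corresponds to one action a_i in A(s)."""
    return list(combinations(accessible_channels, L))

N, M, L = 10, 1, 2                       # illustrative values
accessible = list(range(N - M))          # indices of the N - M free channels
actions = action_set(accessible, L)
assert len(actions) == comb(N - M, L)    # C(N-M, L) actions in total
```

For N = 10, M = 1, L = 2 this yields C(9, 2) = 36 actions, which hints at how quickly the action set grows with L.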
At a given time slot, the SUs select an action a based on the state s of the channels, and at the next slot they select another action a′ based on the new state s′. The probability of transiting from action a in state s to action a′ in state s′ is denoted as Pr(s′, a′|s, a). An SU's transition probability depends on the state s at a time slot. For example, when L = 1, suppose an SU accesses channel l in time slot k. In time slot k + 1, if channel l is occupied by the PU, the SU must hop to one of the N − M accessible channels, so the transition probability is Pr(s′, a′|s, a) = 1/(N − M). On the other hand, if channel l is still accessible in slot k + 1, the SU stays on channel l with probability Pr(s′, a′|s, a) = pstay, and hops to any other accessible channel with probability Pr(s′, a′|s, a) = (1 − pstay) × 1/(N − M − 1).
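The hop rule above can be sketched as a small sampling routine; this is an illustrative fragment (the function name and structure are ours), assuming L = 1 and the uniform hopping defined in the text.

```python
import random

def next_channel(current, accessible, p_stay, pu_occupied):
    """One slot of the L = 1 hopping rule.
    - If the PU takes the current channel, hop uniformly over the
      N - M accessible channels (probability 1/(N - M) each).
    - Otherwise stay with probability p_stay, else hop uniformly over
      the remaining N - M - 1 accessible channels."""
    if pu_occupied:
        return random.choice(accessible)
    if random.random() < p_stay:
        return current
    others = [c for c in accessible if c != current]
    return random.choice(others)
```

For example, with p_stay = 0.37 each of the other accessible channels is reached with probability 0.63 / (N − M − 1).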
As described in Section 5.1, each SU selects L channels for communication. The gain for an SU in a time slot, denoted G(s, a_i) for state s and action a_i, is the utility earned from successful communication minus the cost incurred by jammed communication over the L channels. Hence we have G(s, a_i) = Σ_{l=1}^{L} (x_l(s, a) × U − y_l(s, a) × C), where x_l(s, a) and y_l(s, a) are switching functions that depend on the state s and the corresponding action a of the SUs: x_l(s, a) = 1 when communication on channel l is successful under action a at state s, and x_l(s, a) = 0 when it is jammed. Similarly, y_l(s, a) = 1 when communication on channel l is jammed under action a at state s, and y_l(s, a) = 0 when it is successful.
For example, consider a special case with only one SU in the network, which accesses only one channel during each time slot. The SU's communication is either successful or jammed. Therefore, G(s, a) = U if the communication is successful, and G(s, a) = −C otherwise.
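Since each channel contributes either +U (success) or −C (jammed), G(s, a) reduces to a simple sum of per-channel terms. A minimal sketch (the boolean flag encoding is our assumption):

```python
def gain(jammed_flags, U, C):
    """Per-slot gain G(s, a) = sum over the L chosen channels of
    x_l * U - y_l * C, where x_l = 1 on success and y_l = 1 when jammed.
    jammed_flags[l] is True iff channel l was jammed this slot."""
    return sum(-C if jammed else U for jammed in jammed_flags)

# One SU on one channel (L = 1): U on success, -C when jammed.
assert gain([False], U=1.0, C=0.5) == 1.0
assert gain([True], U=1.0, C=0.5) == -0.5
```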
5.2.2 Policy Iteration Scheme
We use a Markov Decision Process (MDP) to formulate the anti-jamming process.
An MDP has four components: 1) Finite state set S, 2) Finite action set A, 3)
Transition probability, and 4) Gain. Since the anti-jamming process discussed above
contains these four components, it can be formulated as an MDP. We can solve the
MDP to obtain the optimal strategy.
We define a stationary policy π_i : s → a_i, where π_i ∈ Π(s) = {π_1, π_2, . . . , π_{C_{N−M}^{L}}}. Π(s) corresponds to the action set A(s) at state s. For example, given 2 PUs, 4 channels, and 2 SUs, suppose that in the first time slot the first channel is accessed by an SU, the second channel is idle, the third channel is occupied by a PU, and the fourth channel is jammed by the attacker. Then the state s of the four channels is {SU access, Idle, PU occupy, Attacker jamming}, the action set is A(s) = {stay, hop}, and the policy set is Π(s) = {channel 1→stay, channel 1→hop, . . .}. Since the SU's communication is successful in this slot, G(s, a) = U.
Let Vπ(s) denote the value function for policy π, i.e., the expected total reward:

Vπ(s) = E[ Σ_{k=0}^{∞} G(s_k, a_k) | a_0 = π(s_0), s = s_0 ]
      = E[G(s, a) | s_0, a_0] + Σ_{s′} Pr(s′, a′|s, a) · Vπ(s′),     (5.1)

where G(s_k, a_k) is the gain of the SU for choosing action a_k at state s_k, a is a specific action at the current state s, G(s, a) is the immediate gain, and Σ_{s′} Pr(s′, a′|s, a) · Vπ(s′) is the expected future gain.
The optimal value function has an additional property, i.e., satisfying the Bellman
optimality equation. Hence we have
Algorithm 4: Policy Iteration Algorithm
  Randomly choose a policy π ∈ Π(s) at time slot 0
  Calculate Vπ(s) using Eq. (5.1)
  while time slot k ≠ final time slot do
    for all policies π′ ∈ Π(s′) do {traverse all actions}
      Calculate Vπ′(s′) using Eq. (5.1)
      if Vπ(s) < Vπ′(s′) then
        Vπ(s) = Vπ′(s′)
        π = π′
      end if
    end for
    Policy update: calculate π(s) with the updated π using Eq. (5.2)
  end while
V∗(s) = max_π { G(s, a) + Σ_{s′} Pr(s′, a′|s, a) · V∗(s′) },

π∗(s) = arg max_π { G(s, a) + Σ_{s′} Pr(s′, a′|s, a) · V∗(s′) }.     (5.2)
We propose a Policy Iteration algorithm, described in Algorithm 4, to solve the
Bellman optimality equation to obtain the optimal policy.
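The policy iteration of Algorithm 4 can be sketched for a generic finite MDP as follows. This is a minimal illustration, not the dissertation's implementation: a discount factor γ (absent from Eq. (5.1)) is added so that the infinite-horizon value solve converges, and the per-state transition matrices and gains are abstract inputs.

```python
import numpy as np

def policy_iteration(P, G, gamma=0.9, iters=50):
    """P[a] is the |S| x |S| transition matrix under action a,
    G[s, a] is the immediate gain. Alternates value determination
    and greedy policy improvement until the policy is stable."""
    n_states, n_actions = G.shape
    policy = np.zeros(n_states, dtype=int)
    V = np.zeros(n_states)
    for _ in range(iters):
        # Value determination: solve (I - gamma * P_pi) V = G_pi, O(|S|^3)
        P_pi = np.array([P[policy[s]][s] for s in range(n_states)])
        G_pi = G[np.arange(n_states), policy]
        V = np.linalg.solve(np.eye(n_states) - gamma * P_pi, G_pi)
        # Policy improvement: greedy Bellman backup, O(|A| |S|^2)
        Q = np.stack([G[:, a] + gamma * P[a] @ V for a in range(n_actions)],
                     axis=1)
        new_policy = Q.argmax(axis=1)
        if np.array_equal(new_policy, policy):
            break
        policy = new_policy
    return policy, V
```

The two phases inside the loop mirror the O(|S|³) value determination and O(|A||S|²) policy improvement costs analyzed in Section 5.2.3.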
5.2.3 Complexity Analysis
We briefly analyze the computational complexity of Algorithm 4. Inside the while loop, line 5 is the value determination phase and line 11 is the policy improvement phase. In each iteration, the value determination phase has complexity O(|S|^3) from solving the linear equations, while the policy improvement phase can be performed in O(|A||S|^2) steps. Therefore, the cost per iteration of Algorithm 4 is O(|A| · |S|^2 + |S|^3). Algorithm 4 runs for T time slots, so its overall computational complexity is O(T · (|A| · |S|^2 + |S|^3)).
Algorithm 5: Q-learning Algorithm
  Initialize: Q(s, a) = 0, G(s, a) = 0; randomly choose a start action
  for k = 0 to final time slot do
    Calculate the learning rate α
    for all actions a_i ∈ A(s) do
      Calculate Q(s, a_i) using Eq. (5.3)
      if Q(s, a_i) > Q(s, a) then
        Q(s, a) = Q(s, a_i)
        a = a_i
      end if
    end for
    The SU's action is a
  end for
5.2.4 Q-Function Scheme
The policy iteration algorithm may be computationally intensive. Hence, we propose a Q-learning algorithm as an alternative approach. We adopt the Q function from [63], which can be approximated as follows:

Q_{k+1}(s, a) = G(s, a) + Σ_{s′} Pr(s′, a′|s, a) × max_{a′} Q_k(s′, a′)
             = Q(s_k, a_k)(1 − α) + α × [G(s_{k+1}, a_{k+1}) + max Q(s_{k+1}, a_{k+1})],     (5.3)

where α ∈ [0, 1] is the learning rate. In this chapter, we use a linear learning rate defined as α = 1/(k + 1).
We describe how to calculate the Q function in Algorithm 5.
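One update of Eq. (5.3) in tabular form can be sketched as below. The second line of Eq. (5.3) is a learning-rate approximation of the first, and this fragment follows that second line; storing Q as a nested dict keyed by state then action is our representation, not the dissertation's.

```python
def q_update(Q, s, a, s_next, g_next, k):
    """One tabular update per the second line of Eq. (5.3), with the
    linear learning rate alpha = 1/(k+1) used in this chapter.
    Q[s][a] holds the current estimate; g_next is the observed gain."""
    alpha = 1.0 / (k + 1)
    best_next = max(Q[s_next].values()) if Q[s_next] else 0.0
    Q[s][a] = (1 - alpha) * Q[s][a] + alpha * (g_next + best_next)
    return Q[s][a]
```

For k = 0 the rate is α = 1, so the first update discards the old estimate entirely; as k grows, α = 1/(k + 1) shrinks and the estimate stabilizes.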
5.3 Simulation Results
In this section, we evaluate the performance of the proposed GTAS scheme through
simulations. We compare the performance of GTAS to two existing schemes under
three different types of jamming attacks. The first existing scheme is the random scheme, in which the SU has no knowledge of the history of each channel and randomly chooses one accessible channel at the beginning of each time slot; it considers neither the channel condition nor the attacker's strategy. The second existing scheme is the intelligent scheme, in which the SU keeps a history of the access/jamming states of all channels and chooses the channel that has been jammed the least. We consider three types of jamming attacks. The first is the static attack, in which the jammer chooses J accessible channels at the beginning of the network's life. The second is the random attack, in which the jammer chooses J accessible channels at the beginning of each time slot. Neither of these two attacks uses the state history of the channels. The third is the intelligent attack: the attacker keeps a history of the channel states and picks the channels that have the highest probabilities of being occupied by SUs. The performance metric we use is the jamming probability, i.e., the probability that the SU is jammed. The presented results are averages over dozens of simulation runs with different seeds.
We first consider a scenario with one PU in the network. We assume that the transition probabilities of each channel from busy to idle and from idle to busy, i.e., p10 and p01, are both 0.5. There are ten channels in the licensed band, i.e., N = 10, and one SU in the network, i.e., H = 1. During each time slot, the SU accesses one channel, i.e., L = 1. We set pstay = 0.37. We assume that the PU occupies its channel all the time, i.e., M = 1. The SU stays on a channel with probability pstay when the channel is accessible in the next time slot, while
Figure 5-1: Jamming probability along the time under static attack.
hops to each of the other accessible channels with Pr(s′, a′|s, a) = 0.63 × 1/(N − M). As assumed in Section 5.1, there is one attacker, and the attacker can jam up to three channels at a time, i.e., J = 3.
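The baseline behavior of the random scheme under the static attack can be reproduced with a small Monte Carlo harness. This sketch is ours, not the dissertation's simulator: the pinned PU channel, seed, and slot count are illustrative assumptions that mirror the stated scenario (N = 10, M = 1, L = 1, J = 3).

```python
import random

def jam_rate_random_scheme(slots=10000, N=10, J=3, seed=1):
    """Fraction of slots in which a memoryless SU lands on a channel
    held by a static attacker jamming J fixed channels."""
    rng = random.Random(seed)
    accessible = list(range(1, N))           # channel 0 is held by the PU
    jammed = set(rng.sample(accessible, J))  # static attack: fixed choice
    hits = 0
    for _ in range(slots):
        channel = rng.choice(accessible)     # random scheme: no memory
        hits += channel in jammed
    return hits / slots
```

With J = 3 jammed channels out of N − M = 9 accessible ones, the long-run jamming probability of the memoryless random scheme is about 3/9 ≈ 0.33, consistent with the roughly flat curve reported for the random scheme in Fig. 5-1.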
Fig. 5-1 plots the jamming probabilities of the random scheme, the intelligent scheme, and GTAS under the static attack. In the initial time slots, the three schemes have similar jamming probabilities. However, as time progresses, the jamming probabilities of both the intelligent scheme and GTAS decrease. In particular, the jamming probability of GTAS becomes negligible after time slot 8, while the jamming probability of the random scheme does not change much. This is because the random scheme does not take any feedback from previous actions into consideration.
The jamming probabilities of the three schemes under the random attack are plotted in Fig. 5-2. Similar to the case under the static attack, the jamming probabilities
of the three schemes are around the same at the beginning. As time elapses, both
Figure 5-2: Jamming probability along the time under random attack.
the intelligent scheme and GTAS have lower jamming probabilities. However, the
performance of GTAS is better than that of the intelligent scheme.
Fig. 5-3 illustrates the jamming probabilities of the three schemes under the
intelligent attack. Comparing the results in Fig. 5-3 with those in Figs. 5-1 and
5-2, we can see that the performance of the intelligent scheme under intelligent attack
is worse than the performance under the static attack or random attack. The reason
is that the attacker is also intelligent, i.e., it also learns from the accessing/jamming
history of each channel. Hence the performance of the intelligent scheme degrades.
However, the performance of GTAS is still very good even under the intelligent attack.
Next we consider a different scenario as follows. The transition probabilities of
each channel from busy to idle and from idle to busy are both 0.5. There are 15
channels in the licensed band, i.e., N = 15, and three SUs in the network, i.e., H = 3.
During each time slot, each SU accesses three channels, i.e., L = 3. pstay is still set to
Figure 5-3: Jamming probability along the time under intelligent attack.
0.37. There are two attackers in the network and each attacker can jam up to three
channels at a time, i.e., J = 3.
Fig. 5-4 plots the jamming probabilities of the three schemes. We can see that the jamming probabilities are higher than those in the first scenario shown in Fig. 5-1. For example, the jamming probability of the random scheme is around 0.385, while in Fig. 5-1 it is around 0.32. The reason is that there are more jammers in the network, so it is more difficult for the SUs to defend against the jamming attacks. However, GTAS still achieves the best performance. Figs. 5-5 and 5-6 illustrate the simulation results under the random attack and the intelligent attack; GTAS achieves better performance than the other two anti-jamming strategies under these two attacks as well.
Figure 5-4: Jamming probability along the time under static attack.
5.4 Summary
In this chapter, we have discussed the challenges in defending against jamming attacks in cognitive radio networks. During the anti-jamming process, the secondary users proactively hop among all accessible channels to evade jamming attacks. This jamming-hopping process is formulated as a Markov Decision Process. We have presented a game-theoretical anti-jamming scheme for secondary users. The probability of being jammed is chosen as the performance metric to assess our proposed algorithm. Performance evaluations demonstrate that our algorithm achieves a higher payoff than existing approaches and lowers the jamming probability.
Figure 5-5: Jamming probability along the time under random attack.
Figure 5-6: Jamming probability along the time under intelligent attack.
Chapter 6
Conclusion and Future Research
Cognitive radio is viewed as a disruptive technology innovation to improve spectrum efficiency. Spectrum sensing is a major challenge in cognitive radio networks, and it can be generally classified into two categories: local spectrum sensing and cooperative spectrum sensing. Local spectrum sensing by a single SU is often inaccurate, as the channel often experiences fading and shadowing effects. Therefore, cooperative spectrum sensing has been proposed to overcome this problem. However, cooperative spectrum sensing is vulnerable to security attacks from malicious users, so detecting malicious users is a crucial problem for cognitive radio networks. This dissertation focuses on improving cooperative spectrum sensing performance in cognitive radio networks under security attacks. The preceding chapters have studied several crucial aspects of secure and robust spectrum sensing in CRNs. This chapter summarizes the major contributions in Section 6.1 and sheds light on directions for further research in Section 6.2.
6.1 Conclusion
The main contributions presented in this dissertation are listed below:
• A novel distributed scheme to detect malicious users in cooperative spectrum sensing has been proposed. The proposed malicious user detection scheme consists of three phases. First, the scheme exploits the spatial correlation of received signal strengths among secondary users in close proximity to design a spatial correlation test. Moran's I is used to characterize the correlation between different SUs. Moran's I is sensitive to global correlation; however, we are more interested in the correlation between each pair of SUs, so we modify Moran's I to accurately reflect pairwise correlation. Then a robust alternative mean is computed and applied to the spatial correlation test. Using the alternative mean filters out a portion of the outliers (extreme sensing results), bringing the mean closer to the true value of the sensing results from benign SUs and hence increasing detection accuracy. Each SU conducts the spatial correlation test and calculates the alternative mean. Finally, a neighborhood majority voting rule is used to reach the final decision. No a priori knowledge about benign or malicious users is required for our detection scheme. This property is desirable since the strategy of malicious users may be unknown and their behavior may change dynamically.
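The global Moran's I underlying the spatial correlation test can be computed as follows. This is a textbook sketch of the unmodified statistic over received signal strengths x with spatial weights w, not the pairwise-modified version proposed in the dissertation.

```python
def morans_i(x, w):
    """Global Moran's I: I = (n / sum(w)) * sum_ij w[i][j] d_i d_j / sum_i d_i^2,
    where d_i = x_i - mean(x). Values near +1 indicate strong positive
    spatial correlation; values near -1 indicate dispersion."""
    n = len(x)
    mean = sum(x) / n
    dev = [v - mean for v in x]
    num = sum(w[i][j] * dev[i] * dev[j] for i in range(n) for j in range(n))
    den = sum(d * d for d in dev)
    w_sum = sum(w[i][j] for i in range(n) for j in range(n))
    return (n / w_sum) * (num / den)
```

As the bullet notes, this global form aggregates over all pairs at once, which is why a pairwise modification is needed to localize the test to individual SU pairs.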
• A distributed density-based SSDF detection (DBSD) scheme to counter the SSDF attack has been proposed. To achieve robust spectrum sensing, we focus on excluding abnormal sensing reports rather than detecting malicious users. The scheme treats the sensing reports as samples of a random variable and estimates the probability density of that variable using a technique known as the kernel density estimator. Each sensing report is then tested for normality. Once a sensing report is deemed abnormal, it is excluded from the decision making on the PU activity. DBSD excludes all abnormal sensing reports, including those from both malicious users and malfunctioning SUs, which improves the probability of successfully detecting the PU activity. We have developed an approach to effectively test the normality of sensing reports.
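A minimal illustration of the DBSD idea with a Gaussian kernel density estimator: a report lying in a low-density region of the estimated density is flagged as abnormal. The leave-one-out test, bandwidth h, and threshold are our assumptions for illustration, not the dissertation's tuned test.

```python
from math import exp, pi, sqrt

def kde(x, samples, h):
    """Gaussian kernel density estimate at x with bandwidth h."""
    n = len(samples)
    return sum(exp(-((x - s) / h) ** 2 / 2) for s in samples) / (n * h * sqrt(2 * pi))

def is_abnormal(report, reports, h=1.0, threshold=0.05):
    """Flag a report whose leave-one-out estimated density is too low."""
    others = [r for r in reports if r != report]
    return kde(report, others, h) < threshold

reports = [9.8, 10.1, 10.0, 9.9, 10.2, 30.0]  # last report looks falsified
assert is_abnormal(30.0, reports)
assert not is_abnormal(10.0, reports)
```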
• We have proposed a distributed conjugate prior based detection (CoPD) scheme to counter the SSDF attack. CoPD can effectively exclude abnormal sensing reports from both SSDF attackers and malfunctioning secondary users. After collecting sensing reports from nearby secondary users, CoPD reconstructs the probability density of the random variable using a technique known as the conjugate prior. We then test the abnormality of each sensing report using a confidence interval derived from the probability density function. If the test deems a report abnormal, the report is regarded as coming from an attacker or a malfunctioning SU and is discarded. With this scheme, a benign secondary user can effectively detect the PU activity in distributed cooperative spectrum sensing.
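The CoPD idea can be illustrated with the Normal-Normal conjugate pair (report noise variance assumed known): the posterior over the true received power is again Normal, and a report far outside the resulting predictive interval is discarded. All prior parameters and numeric values below are illustrative assumptions, not the dissertation's settings.

```python
from math import sqrt

def posterior(reports, mu0=0.0, tau0=100.0, sigma2=1.0):
    """Normal prior N(mu0, tau0) on the mean, known noise variance sigma2:
    the conjugate posterior is Normal with the precision-weighted mean."""
    n = len(reports)
    prec = 1.0 / tau0 + n / sigma2                 # posterior precision
    mu = (mu0 / tau0 + sum(reports) / sigma2) / prec
    return mu, 1.0 / prec                          # posterior mean, variance

def is_outlier(report, mu, var, sigma2=1.0, z=3.0):
    """Discard a report outside mu +/- z predictive standard deviations."""
    return abs(report - mu) > z * sqrt(var + sigma2)

mu, var = posterior([9.8, 10.1, 10.0, 9.9, 10.2])
assert is_outlier(30.0, mu, var)        # falsified report is discarded
assert not is_outlier(10.0, mu, var)    # benign report is kept
```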
• Jamming attacks are a severe threat in cognitive radio networks, and we have proposed a scheme to counter them. Compared with previous work, the anti-jamming scheme we developed has low computational complexity while achieving very good throughput. To avoid jamming, SUs proactively hop among accessible channels. We formulated the jamming-hopping process as a Markov Decision Process and proposed a Policy Iteration scheme, together with a complexity analysis of the scheme. With this scheme, secondary users are able to avoid jamming attacks launched by external attackers and thereby maximize the payoff function. Although this scheme is effective, it may be computationally prohibitive. To reduce the computational complexity, we proposed a game-theoretical anti-jamming (GTAS) scheme for SUs. Our proposed scheme achieves a high payoff and a low probability of being jammed by an attacker.
78
6.2 Future Research
In this section, we discuss future research. There are a number of ways to extend the current research; the following lists some possible directions.
• Currently, there are three dynamic spectrum access (DSA) models: 1) the interweave DSA model, in which an SU can transmit only on a spectrum band where the PU is not active, and has to jump onto different bands over time; 2) the underlay DSA model, in which an SU can transmit on a spectrum band whether or not the PU is active, but at a low power on each band to limit interference; and 3) the overlay DSA model, in which an SU can transmit on a spectrum band with large power even when the PU is active. The interweave DSA model is the one predominantly studied in the literature and the de facto standard for DSA; throughout this dissertation, we have also adopted it. However, both the underlay and overlay DSA models are promising alternatives to the interweave DSA model [64]. One possible direction is therefore to extend current spectrum sensing technology while taking the underlay and overlay DSA models into account. For example, allowing simultaneous spectrum access by PUs and SUs would also alleviate the impact of the PUE attack, which is a serious security problem for the interweave DSA model.
• Most of the current literature is limited to theoretical analysis, in which a couple of strong assumptions are usually taken for granted. For instance, one fundamental assumption, also used in this dissertation, is that the PU's signal pattern follows a Gaussian distribution. In fact, the PU's signal in an actual wireless environment is far more complicated. To be more practical, large-scale experimental implementations are needed to provide scientific benchmarks for the aforementioned issues. Both positive and negative results from implementations are valuable to feed insights back into the theoretical studies.
References
[1] J. Mitola III and G. Q. Maguire Jr, “Cognitive radio: making software radios
more personal,” IEEE Personal Communications, vol. 6, no. 4, pp. 13–18, 1999.
[2] D. Cabric, S. M. Mishra, and R. W. Brodersen, “Implementation issues in spectrum sensing for cognitive radios,” in Proc. 38th Asilomar conference on signals,
systems and computers, vol. 1, 2004, pp. 772–776.
[3] Y. Zhao, M. Song, and C. Xin, “A weighted cooperative spectrum sensing framework for infrastructure-based cognitive radio networks,” Computer Communications, vol. 34, no. 12, pp. 1510–1517, 2011.
[4] L. Ding, T. Melodia, S. N. Batalama, J. D. Matyjas, and M. J. Medley, “Cross-layer routing and dynamic spectrum allocation in cognitive radio ad hoc networks,” IEEE Transactions on Vehicular Technology, vol. 59, no. 4, pp. 1969–1979, 2010.
[5] Z. Quan, S. Cui, and A. H. Sayed, “Optimal linear cooperation for spectrum
sensing in cognitive radio networks,” IEEE Journal on Selected Topics in Signal
Processing, vol. 2, no. 1, pp. 28–40, 2008.
[6] G. Ganesan and Y. Li, “Agility improvement through cooperative diversity
in cognitive radio,” in Proc. IEEE Global Telecommunications Conference
(GLOBECOM), vol. 5, 2005, pp. 5–pp.
[7] S. Shetty, M. Song, C. Xin, and E. Park, “A learning-based multiuser opportunistic spectrum access approach in unslotted primary networks,” in Proc. IEEE International Conference on Computer Communications (INFOCOM), 2009, pp. 2966–2970.
[8] Q. Zhao, L. Tong, A. Swami, and Y. Chen, “Decentralized cognitive MAC for opportunistic spectrum access in ad hoc networks: A POMDP framework,” IEEE Journal on Selected Areas in Communications, vol. 25, no. 3, pp. 589–600, 2007.
[9] Y. Zhao, M. Song, and C. Xin, “FMAC: A fair MAC protocol for coexisting cognitive radio networks,” in Proc. IEEE International Conference on Computer Communications (INFOCOM), 2013.
[10] C. Cordeiro and K. Challapali, “C-MAC: A cognitive MAC protocol for multi-channel wireless networks,” in Proc. 2nd IEEE International Symposium on New Frontiers in Dynamic Spectrum Access Networks (DySPAN), 2007, pp. 147–157.
[11] S.-Y. Hung, Y.-C. Cheng, E.-K. Wu, and G.-H. Chen, “An opportunistic cognitive MAC protocol for coexistence with WLAN,” in Proc. IEEE International Conference on Communications (ICC), 2008, pp. 4059–4063.
[12] S. A. Jafar and S. Srinivasa, “Capacity limits of cognitive radio with distributed
and dynamic spectral activity,” IEEE Journal on Selected Areas in Communications, vol. 25, no. 3, pp. 529–537, 2007.
[13] S. Srinivasa and S. A. Jafar, “Cognitive radio networks: how much spectrum sharing is optimal?” in Proc. IEEE Global Telecommunications Conference (GLOBECOM), 2007, pp. 3149–3153.
[14] Y. Zhao, M. Song, and C. Xin, “Delay analysis for cognitive radio networks
supporting heterogeneous traffic,” in Proc. 8th Annual IEEE Communications
Society Conference on Sensor, Mesh and Ad Hoc Communications and Networks
(SECON), 2011, pp. 215–223.
[15] H.-P. Shiang and M. van der Schaar, “Queuing-based dynamic channel selection
for heterogeneous multimedia applications over cognitive radio networks,” IEEE
Transactions on Multimedia, vol. 10, no. 5, pp. 896–909, 2008.
[16] M. Gandetto and C. Regazzoni, “Spectrum sensing: A distributed approach
for cognitive terminals,” IEEE Journal on Selected Areas in Communications,
vol. 25, no. 3, pp. 546–557, 2007.
[17] G. Ganesan and Y. Li, “Cooperative spectrum sensing in cognitive radio networks,” in Proc. 1st IEEE Symposium on New Frontiers in Dynamic Spectrum
Access Networks (DySPAN), 2005, pp. 137–143.
[18] T. Li, M. Song, and M. Alam, “Compromised sensor nodes detection: A quantitative approach,” in Proc. 28th International Conference on Distributed Computing
Systems Workshops, 2008, pp. 352–357.
[19] R. Chen, J. Park, and J. Reed, “Defense against primary user emulation attacks
in cognitive radio networks,” IEEE Journal on Selected Areas in Communications, vol. 26, no. 1, pp. 25–37, 2008.
[20] C. S. Hyder, B. Grebur, and L. Xiao, “Defense against spectrum sensing data
falsification attacks in cognitive radio networks,” in Security and Privacy in
Communication Networks. Springer, 2012, pp. 154–171.
[21] W. Wang, S. Bhattacharjee, M. Chatterjee, and K. Kwiat, “Collaborative jamming and collaborative defense in cognitive radio networks,” Pervasive and Mobile Computing, 2012.
[22] J. Riihijarvi, P. Mahonen, M. Wellens, and M. Gordziel, “Characterization and
modelling of spectrum for dynamic spectrum access with spatial statistics and
random fields,” in Proc. IEEE 19th International Symposium on Personal, Indoor and Mobile Radio Communications (PIMRC), 2008, pp. 1–6.
[23] I. F. Akyildiz, W.-Y. Lee, M. C. Vuran, and S. Mohanty, “Next generation/dynamic spectrum access/cognitive radio wireless networks: a survey,”
Computer Networks, vol. 50, no. 13, pp. 2127–2159, 2006.
[24] Y. Xiao and F. Hu, Cognitive radio networks. CRC press, 2008.
[25] P. Kolodzy, “Spectrum policy task force report, FCC 02-155,” 2002.
[26] S. Haykin, “Cognitive radio: brain-empowered wireless communications,” IEEE
Journal on Selected Areas in Communications, vol. 23, no. 2, pp. 201–220, 2005.
[27] E. Peh and Y.-C. Liang, “Optimization for cooperative sensing in cognitive radio
networks,” in Proc. IEEE Wireless Communications and Networking Conference
(WCNC), 2007, pp. 27–32.
[28] W.-Y. Lee and I. F. Akyildiz, “Optimal spectrum sensing framework for cognitive
radio networks,” IEEE Transactions on Wireless Communications, vol. 7, no. 10,
pp. 3845–3857, 2008.
[29] A. Ghasemi and E. Sousa, “Collaborative spectrum sensing for opportunistic
access in fading environments,” in Proc. 1st IEEE Symposium on New Frontiers
in Dynamic Spectrum Access Networks (DySPAN), 2005, pp. 131–136.
[30] W. Zhang, R. K. Mallik, and K. Ben Letaief, “Cooperative spectrum sensing optimization in cognitive radio networks,” in Proc. IEEE International Conference
on Communications (ICC), 2008, pp. 3411–3415.
[31] F. F. Digham, M.-S. Alouini, and M. K. Simon, “On the energy detection of
unknown signals over fading channels,” IEEE Transactions on Communications,
vol. 55, no. 1, pp. 21–24, 2007.
[32] P. D. Sutton, K. E. Nolan, and L. E. Doyle, “Cyclostationary signatures in
practical cognitive radio applications,” IEEE Journal on Selected Areas in Communications, vol. 26, no. 1, pp. 13–24, 2008.
[33] V. Turunen, M. Kosunen, A. Huttunen, S. Kallioinen, P. Ikonen, A. Parssinen,
and J. Ryynanen, “Implementation of cyclostationary feature detector for cognitive radios,” in Proc. 4th International Conference on Cognitive Radio Oriented
Wireless Networks and Communications (CROWNCOM), 2009, pp. 1–4.
[34] W. Saad, Z. Han, M. Debbah, A. Hjorungnes, and T. Basar, “Coalitional games
for distributed collaborative spectrum sensing in cognitive radio networks,” in
Proc. IEEE International Conference on Computer Communications (INFOCOM), 2009, pp. 2114–2122.
[35] L. Luo, N. M. Neihart, S. Roy, and D. J. Allstot, “A two-stage sensing technique
for dynamic spectrum access,” IEEE Transactions on Wireless Communications,
vol. 8, no. 6, pp. 3028–3037, 2009.
[36] S. Fazeli-Dehkordy, K. N. Plataniotis, and S. Pasupathy, “Two-stage spectrum
detection in cognitive radio networks,” in Proc. IEEE International Conference
on Acoustics Speech and Signal Processing (ICASSP), 2010, pp. 3118–3121.
[37] Z. Tian and G. B. Giannakis, “A wavelet approach to wideband spectrum sensing
for cognitive radios,” in Proc. 1st International Conference on Cognitive Radio
Oriented Wireless Networks and Communications, 2006, pp. 1–5.
[38] I. Akyildiz, B. Lo, and R. Balakrishnan, “Cooperative spectrum sensing in cognitive radio networks: A survey,” Physical Communication, vol. 4, no. 1, pp.
40–62, 2011.
[39] J. Unnikrishnan and V. V. Veeravalli, “Cooperative sensing for primary detection
in cognitive radio,” IEEE Journal on Selected Areas in Signal Processing, vol. 2,
no. 1, pp. 18–27, 2008.
[40] O. Fatemieh, R. Chandra, and C. Gunter, “Secure collaborative sensing for crowd
sourcing spectrum data in white space networks,” in Proc. 4th IEEE Symposium
on New Frontiers in Dynamic Spectrum Access Networks (DySPAN), 2010, pp.
1–12.
[41] P. Kaligineedi, M. Khabbazian, and V. Bhargava, “Secure cooperative sensing
techniques for cognitive radio systems,” in Proc. IEEE International Conference
on Communications (ICC), 2008, pp. 3406–3410.
[42] T. Zhao and Y. Zhao, “A new cooperative detection technique with malicious
user suppression,” in Proc. IEEE International Conference on Communications
(ICC), 2009, pp. 1–5.
[43] A. Min, K. Shin, and X. Hu, “Secure cooperative sensing in IEEE 802.22 WRANs using shadow fading correlation,” IEEE Transactions on Mobile Computing, vol. 10, no. 10, pp. 1434–1447, 2011.
[44] P. Kaligineedi, M. Khabbazian, and V. Bhargava, “Malicious user detection in
a cognitive radio cooperative sensing system,” IEEE Transactions on Wireless
Communications, vol. 9, no. 8, pp. 2488–2497, 2010.
[45] W. Wang, H. Li, Y. Sun, and Z. Han, “CatchIt: detect malicious nodes in collaborative spectrum sensing,” in Proc. IEEE Global Telecommunications Conference
(GLOBECOM), 2009.
[46] H. Li and Z. Han, “Catching attacker(s) for collaborative spectrum sensing in cognitive radio systems: An abnormality detection approach,” in Proc. 4th IEEE Symposium on New Frontiers in Dynamic Spectrum Access Networks (DySPAN), 2010, pp. 1–12.
[47] D. Chen and X. Cheng, “An asymptotic analysis of some expert fusion methods,”
Pattern Recognition Letters, vol. 22, no. 8, pp. 901–904, 2001.
[48] F. Liu, X. Cheng, and D. Chen, “Insider attacker detection in wireless sensor
networks,” in Proc. 26th IEEE International Conference on Computer Communications (INFOCOM), 2007, pp. 1937–1945.
[49] C. Chen, M. Song, C. Xin, and M. Alam, “A robust malicious user detection
scheme in cooperative spectrum sensing,” in Proc. IEEE Global Telecommunications Conference (GLOBECOM), 2012.
[50] Q. Yan, M. Li, T. Jiang, W. Lou, and Y. Hou, “Vulnerability and protection for
distributed consensus-based spectrum sensing in cognitive radio networks,” in
Proc. 31st IEEE International Conference on Computer Communications (INFOCOM), 2012, pp. 900–908.
[51] A. S. Rawat, P. Anand, H. Chen, and P. K. Varshney, “Collaborative spectrum
sensing in the presence of byzantine attacks in cognitive radio networks,” IEEE
Transactions on Signal Processing, vol. 59, no. 2, pp. 774–786, 2011.
[52] H. Li and Z. Han, “Dogfight in spectrum: Jamming and anti-jamming in multichannel cognitive radio systems,” in Proc. IEEE Global Telecommunications
Conference (GLOBECOM), 2009, pp. 1–6.
[53] B. Wang, Y. Wu, K. Liu, and T. Clancy, “An anti-jamming stochastic game for
cognitive radio networks,” IEEE Journal on Selected Areas in Communications,
vol. 29, no. 4, pp. 877–889, 2011.
[54] Q. Wang, K. Ren, and P. Ning, “Anti-jamming communication in cognitive radio
networks with unknown channel statistics,” in Proc. IEEE International Conference on Network Protocols (ICNP), 2011, pp. 393–402.
[55] G. Noubir, R. Rajaraman, B. Sheng, and B. Thapa, “On the robustness of IEEE 802.11 rate adaptation algorithms against smart jamming,” in Proc. ACM Conference on Wireless Network Security, 2011, pp. 97–108.
[56] A. Chan, X. Liu, G. Noubir, and B. Thapa, “Broadcast control channel jamming:
Resilience and identification of traitors,” in Proc. IEEE International Symposium
on Information Theory (ISIT 2007), 2007, pp. 2496–2500.
[57] E. Visotsky, S. Kuffner, and R. Peterson, “On collaborative detection of TV
transmissions in support of dynamic spectrum sharing,” in Proc. 1st IEEE International Symposium on New Frontiers in Dynamic Spectrum Access Networks
(DySPAN), 2005, pp. 338–345.
[58] L. Anselin, “Local indicators of spatial association - LISA,” Geographical Analysis, vol. 27, no. 2, pp. 93–115, 1995.
[59] V. Barnett and T. Lewis, Outliers in statistical data. Wiley New York, 1994.
[60] L. Latecki, A. Lazarevic, and D. Pokrajac, “Outlier detection with kernel density
functions,” Machine Learning and Data Mining in Pattern Recognition, pp. 61–
75, 2007.
[61] W. Xu, W. Trappe, Y. Zhang, and T. Wood, “The feasibility of launching and
detecting jamming attacks in wireless networks,” in Proc. 6th ACM international
symposium on Mobile ad hoc networking and computing, 2005, pp. 46–57.
[62] J. Nash, “Non-cooperative games,” The Annals of Mathematics, vol. 54, no. 2,
pp. 286–295, 1951.
[63] M. Lagoudakis and R. Parr, “Value function approximation in zero-sum markov
games,” in Proc. Eighteenth conference on Uncertainty in artificial intelligence.
Morgan Kaufmann Publishers Inc., 2002, pp. 283–292.
[64] M. Song, C. Xin, Y. Zhao, and X. Cheng, “Dynamic spectrum access: from
cognitive radio to network radio,” IEEE Wireless Communications, vol. 19, no. 1,
pp. 23–29, 2012.