
Applications of Distributed Architectures in
Software Defined Radio and Dynamic Spectrum
Access
Tore Ulversøy
Thesis in partial fulfillment of the requirements for the degree of
Philosophiae Doctor
Kjeller, February 10, 2011
Web edition with additional hyperlinks: August 18, 2011
© Tore Ulversøy, 2011
Series of dissertations submitted to the
Faculty of Mathematics and Natural Sciences, University of Oslo
No. 1046
ISSN 1501-7710
All rights reserved. No part of this publication may be
reproduced or transmitted, in any form or by any means, without permission.
Abstract
This thesis is a collection of five papers preceded by an overview part which summarizes
them, puts the reported results into perspective and contains some further related results.
The papers relate to three research goals, all within emerging radio systems technology:
Software Defined Radio (SDR), Cognitive Radio (CR) and Dynamic Spectrum Access (DSA).
The dominant focus is on the intersection between radio technology and
distributed processing technology. The research goals aim to contribute to the evolution
from inflexible transceivers with pre-regulated frequency-use to flexible software-defined
transceivers capable of dynamically exploiting the available radio spectrum.
SDR is frequently viewed as an enabler for Cognitive Radio and Dynamic Spectrum
Access systems, in providing a technology basis for dynamically reconfigurable radio nodes.
However, although it has been an active research topic for nearly two decades, the adoption
of SDR into products has been much slower than anticipated, and many of the initial goals of
SDR have not been met. This situation motivated the first main research goal of analyzing
the conceptual challenges of SDR and the degree to which these still remain challenges.
The results are provided in the first paper, which gives a comprehensive overview of the
challenges and opportunities associated with SDR, along with projections and suggestions
for the way ahead in this field. The paper addresses architectures, waveform processing,
security, regulation and business models. It is the only Software Defined Radio
tutorial in the IEEE Communications Surveys and Tutorials online journal, and could serve
as a starting point for those entering this field of work.
Software Communications Architecture (SCA) is the dominant architecture standard for
SDR and defines a standardized environment for the SDR application components. While
promoting portability and reusability of code components, SCA also causes a processing
overhead. This motivates the second research goal, which investigates this processing overhead in a particular processing environment. The results are provided in the second paper.
Workload measurements are conducted, and two simple models for the workload overhead
are described. The paper shows that for small intercomponent packet sizes and frequent
packet transmissions the main overhead effects are related to the data transmission between components. The models and measurement results can be used to provide guidance
on issues such as application granularity, intercomponent communication frequency and intercomponent packet sizes. The paper makes a contribution to part of a research field where
there are few directly related publications.
In follow-up work to this paper, in Part I of this thesis, a walkthrough of various ways of
optimizing SCA-based environments is provided.
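The granularity and packet-size guidance described above can be illustrated with a small, hypothetical workload model. The constants and the function below are invented for illustration and are not the paper's measured models: each inter-component transfer is assumed to cost a fixed per-packet middleware overhead plus a per-byte copying cost, so that small, frequent packets make the fixed cost dominate.

```python
# Illustrative, hypothetical model of SCA inter-component transfer overhead.
# The constants are invented for illustration; they are not measured values
# from the paper. Each packet transfer costs a fixed middleware overhead
# (marshalling, invocation) plus a per-byte copying cost.

C_PACKET = 50e-6   # assumed fixed cost per packet transfer (seconds)
C_BYTE = 10e-9     # assumed incremental cost per payload byte (seconds)

def transfer_overhead(total_bytes, packet_size):
    """Total transfer overhead for moving `total_bytes` between two
    components using packets of `packet_size` bytes."""
    n_packets = -(-total_bytes // packet_size)   # ceiling division
    return n_packets * (C_PACKET + C_BYTE * packet_size)

total = 1_000_000   # one megabyte of sample data
for size in (64, 512, 4096, 32768):
    print(f"packet size {size:6d} B -> overhead "
          f"{transfer_overhead(total, size) * 1e3:8.3f} ms")
```

With these assumed constants the overhead falls by well over an order of magnitude from 64-byte to 32-kilobyte packets, reflecting qualitatively how small, frequent packets let the fixed per-packet cost dominate.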
The continuing growth in the demand for wireless services, such as for wireless mobile
Internet access, creates an increasing demand for spectrum. Many authors have pointed to
the fact that the radio spectrum can be utilized more efficiently if radio nodes are allowed to
access the spectrum in a dynamic manner, rather than just relying on static spectrum assignments. SDRs contribute to the evolution in this direction by providing flexible platforms and
a processing capacity for local processing of spectrum decision algorithms. The concepts
and algorithms for the spectrum decisions and the architectural context in which they are
made are equally challenging questions. This has motivated the third research goal, which is
to contribute to establishing practical ways for dynamically sharing the frequency spectrum
between radio nodes, and in particular to compare autonomous, distributed and centralized
spectrum sharing architectures. The results are provided in the third, fourth and fifth papers.
The third paper outlines a concept in which the spectrum decisions are made by a central
infrastructure. It outlines a hierarchical distributed system of Dynamic Frequency Brokers
(DFBs), that handles spectrum inquiries from radio clients and grants time-limited leases.
In simple terms, the paper proposes replacing parts of the manual regulatory processes by
a full hierarchy of brokers, with national or even higher-level authorities at the top. The
communication between radio nodes is proposed to be based on web services. A list of
security challenges is also included. The conceptual suggestions in the paper may be of use
for regulators or organizations interested in pursuing frequency broker systems.
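The broker interaction outlined above can be sketched as a minimal, hypothetical lease-granting class. The class name, message fields and the single broker level below are invented; the paper proposes a full hierarchy of brokers and web-service transport, both of which are omitted here.

```python
# Minimal, hypothetical sketch of a Dynamic Frequency Broker granting
# time-limited spectrum leases. All names and fields are invented for
# illustration; the paper's hierarchy and web-service layer are omitted.
import itertools
import time

class DynamicFrequencyBroker:
    def __init__(self, free_segments):
        self.free = set(free_segments)   # spectrum segments available to lease
        self.leases = {}                 # lease id -> (segment, expiry time)
        self._ids = itertools.count(1)

    def request_lease(self, segment, duration_s, now=None):
        """Grant a time-limited lease on `segment`, or return None (refusal)."""
        now = time.time() if now is None else now
        # Reclaim any leases that have expired by `now`.
        for lid, (seg, expiry) in list(self.leases.items()):
            if expiry <= now:
                del self.leases[lid]
                self.free.add(seg)
        if segment not in self.free:
            return None                  # refused: segment currently leased
        self.free.remove(segment)
        lid = next(self._ids)
        self.leases[lid] = (segment, now + duration_s)
        return {"lease_id": lid, "segment": segment, "expires": now + duration_s}

broker = DynamicFrequencyBroker(free_segments=[1, 2, 3])
print(broker.request_lease(2, duration_s=60.0, now=0.0))    # granted
print(broker.request_lease(2, duration_s=60.0, now=10.0))   # refused: in use
print(broker.request_lease(2, duration_s=60.0, now=120.0))  # granted: expired
```

The time-limited lease is the key design point: clients that lose contact with the broker automatically relinquish spectrum when their leases expire, so no explicit release message is required.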
In the fourth paper, centrally computed global optimum and autonomously computed
competitive optimum spectrum sharing are investigated and compared in a system consisting of simplex radio links. In the analysis it is assumed that the available radio spectrum
can be divided into a number of spectrum segments over which interference and noise are
assumed to have constant power spectrum density. It is further assumed that the link bit rates
in each segment can be calculated as the information capacity of a modified
signal-to-noise-and-interference ratio in the segment. Illustrative map-based deployments
of two interfering simplex links, sharing two spectrum segments, are made and analyzed in
detail. The paper provides illustrations of cases where the autonomously computed competitive optimum
can become significantly worse, and more unfair between links, than the global optimum
solution. The paper makes suggestions about how to improve this situation. The investigations and suggestions provide background information and advice for those interested in the
tradeoffs between centralized and autonomous solutions.
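A toy version of this comparison can be sketched as follows, with invented gains, powers and a coarse power-split grid (not the paper's map-based deployments): each of two simplex links divides a unit power budget over two segments, per-segment rates are Shannon capacities of the signal-to-noise-and-interference ratio, and a jointly computed global optimum is compared with the equilibrium of selfish best responses.

```python
# Toy two-link, two-segment spectrum sharing comparison. All channel gains,
# powers and the grid are invented for illustration; they are not the
# paper's deployments or numbers.
import itertools
from math import log2

G = [[1.0, 0.4],   # G[i][j]: channel gain from transmitter j to receiver i
     [0.5, 1.0]]
P_TOT, NOISE = 1.0, 0.1

def rates(p):
    """Per-link rates; p[i] is the power link i puts in segment 0
    (the remainder P_TOT - p[i] goes in segment 1)."""
    r = []
    for i in range(2):
        j = 1 - i
        own = (p[i], P_TOT - p[i])
        other = (p[j], P_TOT - p[j])
        r.append(sum(log2(1 + G[i][i] * own[s] / (NOISE + G[i][j] * other[s]))
                     for s in range(2)))
    return r

grid = [k / 20 for k in range(21)]           # candidate power splits

# Global optimum: jointly maximize the sum rate over both links.
p_glob = max(itertools.product(grid, grid), key=lambda p: sum(rates(p)))

# Competitive outcome: each link repeatedly best-responds to the other.
p = [P_TOT / 2, P_TOT / 2]
for _ in range(50):
    for i in range(2):
        p[i] = max(grid, key=lambda x: rates([x if k == i else p[k]
                                              for k in range(2)])[i])

print("global optimum sum rate :", round(sum(rates(p_glob)), 3))
print("competitive sum rate    :", round(sum(rates(p)), 3))
```

With these assumed cross-gains the global optimum separates the links into different segments, while the selfish equilibrium leaves both links spreading power over both segments and achieving a noticeably lower sum rate, the qualitative effect the paper illustrates.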
In the fifth paper a comparison is made between autonomous, distributed and central
(broker) architectures, in terms of spectrum decisions, computational complexity and communication of spectrum management data between nodes or between nodes and brokers. An
n-link interference model that includes the same basic assumptions as in the fourth paper
has been used, with a further assumption of ideal administrative communication between
the links. The simulations include a significantly higher number of links and segments. Two
algorithms for distributed spectrum decisions are proposed, both of which target maintaining a minimum rate for each radio link while maximizing the average rate over all links. It
is shown that for given minimum link data rates, better sum data rates can be achieved with
these distributed algorithms than with autonomous policies. The policies and algorithms can
be useful as candidates for implementation in Dynamic Spectrum Access systems.
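As a loose illustration of the kind of objective described, the following greedy sketch (a hypothetical centralized stand-in, not one of the paper's two distributed algorithms, and with invented SNR values) first gives exclusive segments to the lowest-rate link until every link meets a minimum rate, then spends leftover segments where they add the most rate.

```python
# Hypothetical greedy segment assignment in the spirit of the objective
# described above: meet a minimum rate per link, then maximize total rate.
# Not one of the paper's algorithms; all SNR values are invented.
from math import log2

def seg_rate(link, seg, snr):
    """Rate of `link` on `seg`: Shannon capacity of its per-segment SNR."""
    return log2(1 + snr[link][seg])

def assign(snr, r_min):
    n_links, n_segs = len(snr), len(snr[0])
    rate = [0.0] * n_links
    owner = [None] * n_segs
    free = list(range(n_segs))
    # Phase 1: the lowest-rate link picks its best free segment,
    # repeated until every link meets r_min (or segments run out).
    while free and min(rate) < r_min:
        i = min(range(n_links), key=lambda k: rate[k])
        s = max(free, key=lambda s: seg_rate(i, s, snr))
        free.remove(s)
        owner[s] = i
        rate[i] += seg_rate(i, s, snr)
    # Phase 2: leftover segments go to whichever link gains the most,
    # which maximizes the rate added to the sum (and hence the average).
    for s in free:
        i = max(range(n_links), key=lambda k: seg_rate(k, s, snr))
        owner[s] = i
        rate[i] += seg_rate(i, s, snr)
    return owner, rate

snr = [[9.0, 1.0, 3.0, 0.5],    # invented per-link, per-segment SNRs
       [0.8, 7.0, 1.0, 4.0]]
owner, rate = assign(snr, r_min=2.0)
print("segment owners:", owner, " link rates:", [round(r, 2) for r in rate])
```

The two phases mirror the stated target: the first enforces the minimum rate per link, the second maximizes the average rate over all links with the remaining spectrum.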
A Software Defined Radio may have available computing power that is attractive to
use for purposes other than the execution of waveforms. As an example of this, in
the overview part, the integration of a Peer-to-Peer (P2P) agent for spectrum brokering has
been sketched out. In this P2P spectrum broker concept, the centralized broker is reformulated into a distributed P2P functionality. By the use of a P2P command interface, spectrum
requests are handled jointly by the relevant peers in an overlay network, rather than by a
DFB.
In summing up the different architectures, it is concluded that they all have advantages
and disadvantages. The distributed-interaction architecture, supplemented by spectrum sensing and
aggregated persistent information in the form of centralized or distributed databases, is seen
as a good compromise. There are advantages in having flexible DSA agents that may coordinate with DFBs or databases where appropriate, but may fall back to distributed coordination
and autonomous decisions when coordination channels are unavailable.
In conclusion, this dissertation has addressed key research issues in the evolution from
fixed-functionality, fixed-frequency-use radio transceivers to flexible, software-defined,
dynamic-frequency-use ones. The first paper makes suggestions for and clarifies the remaining
challenges to progress SDR as the preferred radio transceiver implementation technology.
The second paper and the related discussion in Part I of this dissertation provide guidelines
for granularity and performance optimization of SDR applications in the dominant
standardized SDR architecture, the Software Communications Architecture. The third, fourth
and fifth papers provide proposals for architectures and/or algorithms to enable the SDR
transceivers to dynamically exploit available radio spectrum.
Preface
This dissertation is submitted in partial fulfillment of the requirements for the degree of
Philosophiae Doctor (Ph.D.) at the University of Oslo (UiO). My main supervisor has been
Professor and Director of Research Torleiv Maseng at UiO and Forsvarets Forskningsinstitutt (FFI) respectively, and my co-supervisor has been Professor Frank Eliassen at UiO. The
studies have been carried out full-time between March 2006 and June 2009, and part-time
between July 2009 and October 2010, at FFI and at the University Graduate Center at Kjeller
(UniK).
The studies have been funded by FFI.
The subject areas of the dissertation, Software Defined Radio and Dynamic Spectrum
Access, were both proposed by Torleiv Maseng. Both he and FFI have given me extensive freedom to select specific research topics both within and beyond these areas.
Acknowledgements
First and foremost I would like to thank my supervisors, Professor Torleiv Maseng and
Professor Frank Eliassen. Professor Maseng has enthusiastically followed up the progress
of the work several times per week, contributed with and discussed proposals, and has also
made his wide network of contacts available. When setbacks to the work have occurred, his
motto of ’persistence and sincerity’ has had a convincing effect on me as well, and
made me continue forward. Professor Eliassen has responded quickly when I have emailed
questions, and has also generously offered work proposals.
I also wish to express my gratitude to FFI, represented by Torleiv Maseng, for funding
the studies.
I also want to thank everyone who has given me input and assistance during my work. In
particular I would like to thank Christian Serra, Marc Adrat, Audun Jøsang, Tor Gjertsen and
Asgeir Nysæter for providing input and advice for Paper A, Synnøve Eifring for helping me
with language-related issues of the same paper, and Stewart Clark for helping me with
language-related issues of the same paper and also of Part I. I would like to thank Lars Bråten and
Walther Åsen for proof-reading and inputs, Juan Pablo Zamora Zapata for sharing some of
his insights on software architecture and Unni Næss for artwork assistance. Further, thanks
to the co-authors of one or more of the other papers: Torleiv Maseng, Jon Olavsson Neset,
Jørn Kårstad and Toan Hoang. I also want to thank my colleague Elin for having volunteered
to be a test audience for conference presentations. Thanks also to Professor Pål Orten for
providing valuable advice during the final stages of the work.
I am grateful to the administration at UniK for providing a good environment for Ph.D.
studies.
Finally I would like to thank my fiancée Svanhild for morally supporting my studies, for
not complaining about weekend work and for helping me with the proof-reading of articles.
List of Publications
Papers Included in Part II
Paper A Tore Ulversøy, ”Software Defined Radio: Challenges and Opportunities,” IEEE
Communications Surveys & Tutorials, Vol. 12, No. 4, 2010
Online journal: http://www.comsoc.org/livepubs/surveys/index.html
Also available from: http://ieeexplore.ieee.org
Paper B Tore Ulversøy and Jon Olavsson Neset, ”On Workload in an SCA-based System,
with Varying Component and Data Packet Sizes,” NATO RTO Symposium on Military
Communications, Prague, Apr. 21-22, 2008.
Available from: http://www.rta.nato.int/Pubs/RDP.asp?RDP=RTO-MP-IST-083
Paper C Torleiv Maseng and Tore Ulversøy, ”Dynamic Frequency Broker and Cognitive
Radio,” The IET Seminar on Cognitive Radio and Software Defined Radios: Technologies and Techniques, The IET, Savoy Place, London, UK, Sept. 18, 2008.
Available from: http://ieeexplore.ieee.org
Paper D Tore Ulversøy, Torleiv Maseng and Jørn Kårstad, ”On Spectrum Sharing in Autonomous and Coordinated Dynamic Spectrum Access Systems: A Case Study”,
Wireless VITAE ’09, Ålborg, Denmark, May 17-20, 2009.
Available from: http://ieeexplore.ieee.org
Paper E Tore Ulversøy, Torleiv Maseng, Toan Hoang and Jørn Kårstad, ”A Comparison of
Centralized, Peer-to-Peer and Autonomous Dynamic Spectrum Access in a Tactical
Scenario”, MILCOM 2009, Boston, October 18-21, 2009.
Available from: http://ieeexplore.ieee.org
Other Co-authored Publications
In conjunction with the writing of this thesis, the candidate has also contributed to the following conference papers:
S. Singh, M. Adrat, M. Antweiler, T. Ulversøy, T.M.O. Mjelde, L. Hanssen, H. Ozer,
A. Zumbul, ”NATO RTO/IST RTG on SDR: Acquiring and sharing knowledge for
developing SCA based Waveforms on SDRs”, Information Systems Technology Panel
Symposium IST-092/RSY-022 on Military Communications and Networks, Wroclaw,
Poland, September 28-29, 2010.
Available from: http://www.rta.nato.int
T. Hoang, M. Skjegstad, T. Maseng, T. Ulversøy, ”FRP: The Frequency Resource
Protocol”, IEEE International Conference on Communication Systems (IEEE ICCS
2010), Nov. 2010.
Available from: http://ieeexplore.ieee.org
Also in conjunction with the thesis, the candidate has contributed to the following multi-author NATO Research Task Group reports:
”NATO RTO/IST RTG on SDR Final Report”, Report from IST-080 RTG on SDR,
submitted to NATO RTO/IST Dec. 2010
”Cognitive Radio in NATO”, Report from IST-077 RTG-035 (in preparation)
Contents

Abstract . . . i
Preface . . . v
Acknowledgements . . . vi
List of Publications . . . vii
    Papers Included in Part II . . . vii
    Other Co-authored Publications . . . viii
Contents . . . ix
Abbreviations . . . xiii

I Overview . . . 1

1 Introduction . . . 3
    1.1 Thesis Motivation . . . 5
    1.2 Research Goals . . . 11
    1.3 Research Methodology . . . 12
    1.4 Short Summary of Results and Implications . . . 14
    1.5 Unaddressed Issues / Proposals for Future Work . . . 16
    1.6 Thesis Organization . . . 17

2 Background . . . 19
    2.1 A Brief Introduction to Software Defined Radio . . . 19
    2.2 Introduction to Dynamic Spectrum Access . . . 22

3 Related Work . . . 27
    3.1 Related Work to the Results from Research Goal 1, Conceptual Challenges of Software Defined Radio . . . 27
    3.2 Related Work to the Results from Research Goal 2, SCA Workload Overhead . . . 28
    3.3 Related Work to Research Goal 3, DSA Architectures and Algorithms . . . 32

4 Results and Implications . . . 39
    4.1 Research Goal 1, Software Defined Radio Challenges . . . 39
    4.2 Research Goal 2, SCA Workload Overhead . . . 42
    4.3 Research Goal 3, Dynamic Spectrum Access Architectures and Algorithms . . . 58

5 Conclusions and Recommendations for Further Work . . . 75
    5.1 Revisiting the Research Goals . . . 75
    5.2 Major Contributions . . . 75
    5.3 Critical Remarks . . . 78
    5.4 Recommendations for Further Work . . . 79

References . . . 83

II Included Papers . . . 95

A Software Defined Radio: Challenges and Opportunities . . . 97
    1 Introduction . . . 101
    2 SDR and the Software Communications Architecture . . . 103
    3 SW Architectural Challenges . . . 104
    4 Challenges and Opportunities Related to Computational Requirements of SDR . . . 110
    5 Security Related Challenges . . . 115
    6 Regulatory and Certification Issues . . . 122
    7 Opportunities Related to Business Models and Military and Commercial Markets . . . 125
    8 Conclusions . . . 129
    References . . . 131

B On Workload in an SCA-based System, with Varying Component and Data Packet Sizes . . . 143
    1 Introduction . . . 147
    2 Analysis Approach . . . 147
    3 Workload Assessment Through Empirical Analysis . . . 148
    4 Workload Assessment through Low-Complexity Analytical Models . . . 152
    5 Conclusions . . . 157
    6 Acknowledgements . . . 158
    References . . . 158

C Dynamic Frequency Broker and Cognitive Radio . . . 159
    1 Introduction . . . 163
    2 Background . . . 163
    3 The Dynamic Frequency Broker . . . 165
    4 Conclusions . . . 170
    References . . . 170

D On Spectrum Sharing in Autonomous and Coordinated Dynamic Spectrum Access Systems: A Case Study . . . 173
    1 Introduction . . . 177
    2 Background: DSA Concepts . . . 177
    3 Near-Optimum Spectrum Sharing Study . . . 179
    4 Conclusions . . . 189
    References . . . 189

E A Comparison of Centralized, Peer-to-Peer and Autonomous Dynamic Spectrum Access in a Tactical Scenario . . . 191
    1 Introduction . . . 195
    2 Background and Calculation Models . . . 195
    3 Spectrum Decisions . . . 198
    4 Computational Complexity . . . 202
    5 Spectrum Coordination Traffic . . . 204
    6 DSA in a Hostile Environment . . . 205
    7 Conclusions . . . 206
    References . . . 206
Abbreviations
A/D         Analog-to-Digital
AEP         Application Environment Profile
API         Application Programming Interfaces
ASIC        Application-Specific Integrated Circuit
BS          Base Station
CAB         Coordinated Access Band
CCM         Configurable Computing Machine, CORBA Component Model
CDMA        Code Division Multiple Access
CDSA        Coordinated Dynamic Spectrum Access
CF          Core Framework
CO          Competitive Optimum
CORBA       Common Object Request Broker Architecture
CPE         Consumer Premise Equipment
CPU         Central Processing Unit
CR          Cognitive Radio
D/A         Digital-to-Analog
DCIE        DSA Capable Infrastructure Equipment
DCUE        DSA Capable User Equipment
DDS         Data Distribution Service
DFB         Dynamic Frequency Broker
DHCP        Dynamic Host Configuration Protocol
DIMSUMnet   Dynamic Intelligent Management of Spectrum for Ubiquitous Mobile networks
DPC         Data Processing Component
DSA         Dynamic Spectrum Access
DSAP        Dynamic Spectrum Access Protocol
DSL         Digital Subscriber Line
DSP         Digital Signal Processor
EAL         Evaluation Assurance Level
EDACC       Event Driven, Administrative and Control Component
ESSOR       European Secured Software Defined Radio Referential
EVP         Embedded Vector Processor
FCC         Federal Communications Commission
FDMA        Frequency Division Multiple Access
FFT         Fast Fourier Transform
FIR         Finite Impulse Response
FPGA        Field-Programmable Gate Array
GAO         United States Government Accountability Office
GO          Global Optimum
GPP         General Purpose Processor
GPU         Graphical Processing Unit
GTRS        Swedish Common Tactical Radio System
HF          High Frequency
HTTP        Hypertext Transfer Protocol
HW          Hardware
IDL         Interface Definition Language
IF          Intermediate Frequency
IP          Internet Protocol
IST         Information Systems Technology Panel
ITU         International Telecommunication Union
IWF         Iterative Waterfilling
JPEO        Joint Program Executive Office
JTRS        Joint Tactical Radio System
kbps        Kilo-bits per second
MAC         Medium Access Control
MDD         Model-Driven Development
MHAL        Modem Hardware Abstraction Layer
MILS        Multiple Independent Levels of Security
MIMD        Multiple Instruction Multiple Data
MIMO        Multiple Input Multiple Output
MIPS        Million Instructions Per Second
MMT         Mobile Multi-standard Terminal
MOM         Message-Oriented Middleware
MPMB        Multi-Protocol Multi-Band
NC-OFDM     Non-Contiguous Orthogonal Frequency Division Multiplexing
NCO         Network Centric Operations
NIAG        NATO’s Industry Advisory Group
OMG         Object Management Group
ORB         Object Request Broker
OS          Operating System
OSA         Opportunistic Spectrum Access
OSSIE       Open-Source SCA Implementation Embedded
P2P         Peer-to-Peer
PCI         Peripheral Component Interconnect
PCS         Partitioning Communication System
PDMA        Polarization Division Multiple Access
PIM         Platform Independent Model
PR          Packet Rate (data packets processed per second)
PSM         Platform Specific Model
QoS         Quality of Service
R&TTE       Radio and Telecommunications Terminal Equipment Directive
RANMAN      Radio Access Network Manager
RF          Radio Frequency
RTG         Research Task Group
RTO         NATO Research and Technology Organization
RX          Receiver
SCA         Software Communications Architecture
SCARI       Software Communications Architecture - Reference Implementation
SDMA        Space Division Multiple Access
SDR         Software Defined Radio
SIMD        Single Instruction Multiple Data
SK          Separation Kernel
SNR         Signal-to-Noise Ratio
SOAP        Simple Object Access Protocol
SPIM        Spectrum Information and Management
STRS        Space Telecommunication Radio System
SW          Software
TAO         The Ace ORB
TC          Trusted Computing
TCAM        Telecomm. Conformity Assessment and Market Surveill. Committee of the EU
TDMA        Time Division Multiple Access
TGS         The TCAM Group on SDR
TI          Texas Instruments
TX          Transmitter
UAV         Unmanned Aerial Vehicle
UDDI        Universal Description, Discovery and Integration
UMTS        Universal Mobile Telecommunications System
USRP        Universal Software Radio Peripheral
UWB         Ultra Wide Band
WF          Waterfilling
WRAN        Wireless Regional Area Network
WSDL        Web Services Description Language
XML         Extensible Markup Language
Part I
Overview
Chapter 1
Introduction
The short history of wireless communications, taking Hertz’ 1887-1888 experiments as the
start [1], has shown breathtaking advances in both the efficiency by which the spectrum is
exploited and in the quality and variety of the wireless services offered.
In only the last two to three decades, revolutionary services have appeared, such as
digital mobile phone systems, wireless local area networks, personal area data networks,
digital TV broadcasts, broadband mobile data services and global positioning services. These
astonishingly rapid advances have been made possible by technological innovations in many
different areas, ranging from implementation technology through coding inventions and to
advances in computer network and distributed processing technology.
Radio implementation technology has gone through several technology stages, starting
with the spark gap transmitter era from 1887/1888 to approximately the mid-1920s, known
as the ’electric’ era of radio technology. This was overlapped and followed by an ’analog
electronics’ era, characterized first by vacuum tube-based transceivers and subsequently supplemented and followed by transistor equipment some years after the discovery
of the transistor in 1947 [2]. Towards the end of the 1950s, the idea of integrating a
multitude of transistors on a single substrate was pursued [3], leading to integrated circuits of
increasingly higher transistor densities, where the number of components that could be
cost-effectively placed on an integrated circuit was later shown to increase by a constant
yearly factor [4], now referred to as Moore’s law. The integrated circuits allowed for denser
integration of the analog electronics, but more importantly paved the way for integrated digital
circuitry and for the era of the digital transceiver, in which major parts of the signal processing are conducted in the digital domain. Of particular importance in the context of this thesis
was that this development also prepared the ground for the Digital Signal Processor (DSP),
appearing from 1978/1979 [5] and the Field-Programmable Gate Array (FPGA), invented
in 1985 [6], as well as the general purpose microprocessor that came as early as 1971 [7].
These components gradually allowed increasing parts of the radio transceiver functionality
to be implemented in software, as pointed out by Mitola in 1991 [8, 9], who then coined the
term ’Software Radio’. This led into the ’software’ era of radio technology and into one
of the research topics of this thesis, the Software Defined Radio (SDR).
The use of software in the implementation of radio transceivers has not only been motivated by the implementation technologies that have become available, but has also been
influenced by several other technological developments. The evolution of the computing
discipline itself, including computer architecture, computer networks and distributed processing technologies, is of course part of this. The evolution from radio links and broadcast transmissions to networked radio communications, where the ALOHA network is an
early example [10], is another factor in that networked radio communications involved an
increased protocol content that favoured a software implementation. The creativity of the
signal processing and information theory disciplines, example innovations being the 1993
Turbo codes [11] and Multiple Input Multiple Output (MIMO) [12] technology, has resulted
in rapid releases of a multitude of wireless standards within both commercial and military
domains, which also motivates software implementations. However, there are still many
challenges related to software-based transceiver implementation, and the transition from ’digital
transceivers’ is slower than many expected, as will be discussed in this thesis.
The negative effects of interference between different radio stations were discovered
very early in the history of radio communications, and were at that time a major concern
for safety at sea, e.g. in terms of interference with distress messages from ships. The need
for national and international regulation of the use of the radio spectrum was thus clearly
seen. The first international spectrum allocation regulations were put in place through the
1906 Berlin agreement [13], and the World Radio Conferences have since become
an arena for international radio spectrum agreements. The national spectrum management
authorities provide the specific spectrum assignments to radio network operators or users
consistent with these international regulations. While this detailed regulatory approach unquestionably has been an important factor in the success of wireless communications, it has
been pointed out that the processes involved are slow and the resulting efficiency in the use
of the spectrum is low. From about 1980 onwards, however, there has been a trend of deregulation of spectrum access, with more reliance on market-based use of spectrum and on the
view that spectrum users, rather than detailed government regulation, know best how to
make use of the spectrum [14]. In recent years, the most remarkable evidence of this deregulation
is the tremendous success of wireless local area network devices in unlicensed open-access
bands, and the in-progress opening-up to secondary access of spectrum assigned for primary
use in TV broadcasting. The deregulation may be seen as a political, liberal trend, but it is
also a result of the increasing need for wireless communications, due to e.g. the evolution
of personal mobile phones, the success of the Internet and the connected-everywhere
paradigm, all causing a need for more effective use of the spectrum. It is also a result of
the impracticalities of manual regulatory processes in contrast to the ever-increasing number
of transceivers. Nowadays, computerized devices can have several transceivers and are in
networked contact with other such computerized devices, and in the future such computerized devices are likely to be integrated in humans, pets and a vast variety of the ’things’ in
our surroundings. A number of researchers have in recent years argued for further spectrum
efficiency increases through radio systems that efficiently and dynamically take advantage
of the available spectrum. A lot of activity is focused on this field, but a number of challenges still exist.
The software approach to radio terminals is one of the factors making such a dynamic
and deregulated approach to frequency more and more viable. In 1999, Mitola and Maguire
suggested the use of the SDR for decentralized dynamic spectrum access decisions, enabled
through the pairing of software radio technology with artificial intelligence, which they
coined ’Cognitive Radio’ [15]. Software Defined Radio may run both the processing
of the sensing of the radio spectrum and other forms of ’context awareness’, as well as the
spectrum decision algorithms. For such decentralized spectrum access changes, or for dynamic changes dictated by other mechanisms as described later in this thesis, the Software
Defined Radio provides the runtime flexibility needed to dynamically change waveforms,
such that the spectrum resources may be dynamically and optimally exploited.
Software Defined Radio, Cognitive Radio and Dynamic Spectrum Access (DSA) radio
terminals are often referred to as emerging radio systems, and are predicted to be major
trends in the next decades of wireless communications. The research goals in this thesis are
within these emerging radio technologies.
In this introductory chapter, the motivations for the research goals are described first, and
illustrated by example wireless scenarios. Then the specific research goals are defined. This
is followed by a description of the research methods used in pursuing them. The research
contributions are then summarized. Lastly, suggestions for further work are provided, as
well as an overview of the remaining parts of the thesis.
1.1 Thesis Motivation
Today’s wireless communication systems and scenarios are still dominated by ones where
the waveform functionality is fixed when the units leave the factory, not allowing waveform
upgrades. Similarly, the bulk of the systems are not prepared for flexible use of spectrum,
the use of spectrum instead being dictated by lengthy regulatory or frequency planning processes.
Particularly in military radio communication there is a striking contrast between the long
acquisition and system lifetimes (i.e. decades) of the radio communication systems on the one
hand, and on the other hand the need for waveform changes and upgrades on a much more
rapid timescale, which these radio platforms do not allow without costly hardware upgrades.
Similarly, there is a striking contrast between the laborious frequency pre-planning processes
for ad hoc operations and the need for rapid in-deployment frequency changes, e.g. due to
additional system deployments, network mobility or interference from other systems.
The research goals, described in Section 1.2, are motivated as individual sub-goals
within an overall vision of moving away from these wireless scenarios and systems where
transceiver functionality, frequency assignments and also the roles of the transceiver are predominantly fixed or preplanned. The envisioned future scenarios are ones where transceivers
have reconfigurable functionality, dynamic frequency assignments and dynamic transceiver
roles, and where the networks ultimately are wide-coverage, multi-frequency ones.
As a further motivation that leads to the research goals, issues with some presently deployed wireless systems will be reviewed.
1.1.1 Issues with Presently Deployed Wireless Systems
Consider the scenario depicted in Figure 1.1, which contains a few typical presently deployed civilian wireless systems: a mobile phone cellular access system, a terrestrial TV
broadcaster and receivers, and a satellite services system.
Typically, the transmitters and receivers in this scenario conform to one or a small fixed
number of waveform standards. The waveform functionality in the transmitters and receivers,
reflecting each waveform standard, is also fixed and not programmable.
When new waveform standards are introduced in Figure 1.1, e.g. a higher-bitrate standard
for the cellular access system, a replacement of the whole transmitter/receiver is needed
FIGURE 1.1: A civilian wireless scenario with some typical system elements: Cellular access, TV
broadcast and satellite services. The colored circles and lines illustrate different spectrum intervals.
(referred to by market analysts as the ’handset replacement model’ [16]). Unfortunately,
this replacement model is a large waste of resources.
A further characteristic of the systems in Figure 1.1 is that the frequencies that they
use are typically assigned by the relevant National Frequency Authority as long-duration
(normally multi-year) licenses. Such fixed long-term assignments provide excellent interference control. However, measurement campaigns [17, 18] have shown that typically only
a few percent of these frequencies are actually occupied at any given time and location, i.e. the
frequency use is inefficient. This wasteful use of the frequency resources is in contrast to the
increasing need for wireless communication capacity for existing and new wireless services.
Another characteristic is that the radio terminals have a specific, factory-defined set
of available 'roles', e.g. the role of a cellular mobile device and the role of a WiFi data access
terminal. They may not, however, be field-configured to fulfill new roles, such as enabling
the mobile cellular access terminal to additionally act as a satellite terminal.
Figure 1.2 similarly illustrates typical ad hoc deployed systems, such as in a
peace-keeping mission. The workhorse in the operations is the mobile operation ad hoc network, which serves as the main voice and data communication link for the mobile units, and
which may work autonomously or, as illustrated, be linked to a headquarters or wider network. Since several partner nations typically cooperate, a mobile operation ad hoc network
of a coalition partner in the same mission is also illustrated. Many sources of interference may exist in such a scenario, including unregulated ad hoc networks of uncertain
origin and ownership, as well as pirate broadcasters. Additionally, typical civilian systems
such as those in Figure 1.1 are likely to coexist in the same scenario.
In the same way as in the previous figure, the transceivers in the scenario conform to one
or a small number of fixed waveform standards, with factory-defined waveform functionality. For a peace-keeping mission, this has the effect that when a new coalition partner, typically using a different set of waveform standards, enters the scenario, transceivers typically
FIGURE 1.2: A wireless scenario for an ad hoc operation, e.g. a peace-keeping mission, with some
typical system elements: Ad hoc networks, radio links, satellite services. Colored circles / lines depict
different spectrum intervals. Issues arise due to unregulated electromagnetic activity causing interference, and due to cooperating parties using radio equipment with hardware-defined incompatible
waveforms.
need to be loaned between the coalition partners in order to be able to inter-communicate.
Alternatively, the coalition partners would both need to replace their transceivers. This
causes both logistical and lead-time issues in the set-up of such an operation, and also involves additional expenses.
Also, in the same way as in the previous figure, the radio terminals have factory-defined
roles, not allowing the ad hoc terminals to take on new roles (e.g. as maritime communication devices) if they did not have this functionality when leaving the factory. Again
this leads to logistics and lead-time issues in getting additional equipment in place if the
communication needs in the operation change during its progress.
The frequencies used by the systems in Figure 1.2 are typically planned prior to the actual
mission, based on knowledge of what interference will be present in the actual scenario,
and on the available, but imperfect, knowledge of which other networks the transceivers
will have to coexist with. Experience shows [19] that this process does not always yield
good network connectivity in the actual scenario, due to unexpected interference from
sources such as pirate transmitters, unregulated devices, rescue workers fielding their own
communication systems in an uncoordinated manner, and other types of unplanned interference
(illustrated by an unregulated ad hoc network and an unregulated broadcaster in Figure 1.2).
This type of complex, partly unregulated and partly hostile electromagnetic environment is
another argument for more agile spectrum behavior in the radio system.
A further typical characteristic of the ad hoc networks in Figure 1.2 is that they are
single-frequency-interval ones. This implies that all the nodes in one network, in a time-divided
manner, compete for the same part of the wireless medium. This results in frequent
data packet collisions, and hence low throughput, when the node density is high.
1.1.2 Envisioned Future Wireless Scenarios
FIGURE 1.3: An envisioned future wireless systems scenario with a mixture of conventional fixed
frequency mobile access, dynamic spectrum mobile access and best-effort dynamic spectrum access
multihop ad hoc networks.
The envisioned future version of the Figure 1.1 scenario is illustrated in Figure 1.3, which
shows a mixture of the conventional cellular access and broadcast systems we know
today and new systems utilizing various forms of dynamic access to the radio spectrum.
In the figure, all the radio nodes are assumed to allow loadable waveform functionality, rather than having fixed, factory-defined functionality. This means that, as new communications
standards are released, they can, within the hardware processing limitations of the radio
platforms, be implemented as software upgrades in the radio nodes, and
even as automatic over-the-air downloaded upgrades. It also implies that the radio functionality may be software-adapted to the actual needs in the user scenario. With sufficient
flexibility of the radio platforms, such adaptations may also include dynamic, rapid changes
to waveforms in order to optimally exploit available spectrum resources and dynamically
access new spectrum resources as needed. The waveform loadability and radio platform
flexibility also enable radio nodes to be software-uploaded to fulfill new roles.
As mentioned, better utilization of the electromagnetic spectrum may be achieved by
allowing the spectrum assignments to be dynamic rather than fixed. It is anticipated that
such dynamic access systems will coexist with systems that will remain fixed licensed ones,
at least for the not-too-distant future. This is illustrated in the figure with (to the left) a
conventional mobile access system and a conventional broadcaster with fixed frequency assignments. The Dynamic Spectrum Access mobile access network (to the right of these)
on the other hand may benefit from dynamically available spectrum and provide cheaper
mobile access, but with a lesser quality-of-service guarantee.
Another envisioned way of exploiting Dynamic Spectrum Access may be DSA multihop,
multifrequency networks, as illustrated to the right in the figure. Here, the communication
may be forwarded by several radio nodes, and different spectrum intervals may be used at
the different links in the network. The traffic is envisioned to be connected to the backbone
network utilizing very different types of access, depending on what is available at given
locations, ranging from satellite communication through cellular infrastructure access to
low cost or freely available connections to home networks or to networks at universities or
organizations. For the consumer domain, such networks are envisioned to provide cheaper
mobile data access on a best-effort basis.
A further argument for the need for Dynamic Spectrum Access solutions is the new wireless
services that will appear and require communication capacity, together with the enormous
number of wireless devices that will be present. Examples of such services are home multimedia
broadcasts, camera-to-home-server video and picture transfers, and medical sensors on humans
and pets that communicate with hospital servers or pet clinics, respectively.
The envisioned future version of the scenario in Figure 1.2 is illustrated in Figure 1.4.
Figure 1.4, like Figure 1.2, shows a mobile operation ad hoc network and a coalition ad
hoc network, as well as terrestrial and satellite links, together with sources of interference
that may be expected in such a scenario. In contrast to Figure 1.2, all the radio nodes are
software-defined, providing flexibility in terms of waveform functionality, radio
terminal roles and use of spectrum. Additionally, it is assumed that the waveform
software and radio platforms are defined in such a way that the software to a large degree
does not rely on one specific platform.
FIGURE 1.4: An envisioned future wireless systems scenario for an ad hoc operation or peacekeeping mission. The systems may dynamically avoid interference sources. As the radio nodes are
SDR ones, waveforms enabling interoperability between coalition partners may be loaded as needed.
Dynamic use of frequencies throughout the multi-hop network enables flexible wide-area coverage
with a high number of nodes.
In the scenario in Figure 1.4, the waveform loadability, in combination with such
waveform-to-platform independence, means that the interoperability issue between
the partner network and the mobile operation network may now be solved by moving
the waveform application from either type of radio platform to the other. Alternatively,
special 'coalition' waveform software may be loaded onto both types of radio platforms to
enable interoperability.
The waveform loadability and radio platform flexibility also enable radio nodes to be
software-uploaded to fulfill new roles. Thus, in the scenario in Figure 1.4, a radio node
fulfilling the mobile ad hoc role may load a waveform in order to simultaneously provide
satellite services.
The DSA ad hoc network, instead of using pre-planned frequencies, may dynamically
avoid the unregulated sources of interference in the scenario, and may even simultaneously
run defensive and offensive counter-efforts against these unregulated radio nodes. Multihop
DSA, with different spectrum intervals used on different links or subnets [20], enables
scenario-adapted wide-coverage ad hoc networks.
1.1.3 Motivation for the Research Goals
Obviously, a vast number of research challenges must be addressed in order to move from
present-day wireless communication as depicted in Figure 1.1 and Figure 1.2 to the
envisioned scenarios in Figure 1.3 and Figure 1.4. The research goals in this thesis are
motivated by observations of individual challenges that are small pieces of what is needed
for this transformation to occur.
A cornerstone of this vision is the flexible radio platform enabled by Software Defined
Radio implementation technology. SDR has been an active research topic for almost two
decades, with some of the base technologies having matured over an even longer period.
At the start of this thesis work in 2006, the picture of SDR was full of contrasts. On the
one hand there was a lot of optimism: a number of books, articles and presentations
presented very visionary and optimistic thoughts on SDR. On the other hand, there were
disappointing observations: a number of projections for SDR to take over as a baseline
technology, for example the projection in [21], had proved too optimistic. Mobile phones,
although they had evolved into multi-standard devices, were still using conventional digital
design rather than SDR. Also, the prestigious US Defense Joint Tactical Radio System
(JTRS) SDR program had experienced severe difficulties and delays [22], and comments
were made about severe remaining risks. The cornerstone importance of Software Defined
Radio within the overall vision, and this contrasting picture of its status and remaining
challenges, motivated the first research goal, related to the conceptual challenges of SDR,
as described in the next section.
Flexible radio platforms become an even more significant enabler in the envisioned future
scenarios above if the platforms are standardized such that individual waveform
applications may have a larger market, or at least such that waveform applications can be
moved from platform to platform with moderate effort. The dominant SDR architectural
standard is the Software Communications Architecture (SCA). SCA promotes a platform-to-application separation by specifying an SCA platform environment for SCA-based application components. Each application component may be executed on one of the individual
processing elements within a distributed-processing platform. A fine granularity of these
applications promotes code reuse and thereby the efficiency of building new waveform
applications, in that individual components may more easily be reused in new applications.
However, the finer the granularity, the more processing-cycle overhead is generated. A wish
to find out more about the negative effects of the Software Communications Architecture
approach, in the form of these overhead losses and their implications for application granularity,
motivated the second research goal, as described in the next section.
Dynamic radio spectrum assignments are a key factor in the future scenarios, both in
providing higher spectrum utilization and for ad hoc wireless network deployment
in complex environments with unknown electromagnetic activity. Although dynamic
sharing of spectrum may seem straightforward in principle, it triggers numerous technological
questions.
In particular, the author observed that the early research on Dynamic Spectrum Access relied
unrealistically heavily on spectrum sensing of other radio transceivers in the suggested
approaches. Issues such as silent receivers that are not detected by sensing, and symmetry
issues with low-power transceivers that are difficult to detect yet may still be disturbed by
higher-power DSA transceivers, pointed in the direction that spectrum sensing alone is
likely to be inadequate as the single source of input for DSA. This influenced the research
in the direction of approaches to DSA other than pure sensing.
With a base focus on the intersection between distributed processing and radio technology,
and coming from the distributed processing environment of the Software Defined Radio, it
was natural to take the distributed processing view also towards DSA. Together with
the above observations on sensing feasibility, this motivated the DSA architectures part
of the third research goal. The 'algorithms' part was motivated by the challenging
nature of the associated issues.
Further motivation for the third research goal came from interaction with the NATO
research task group RTO/IST RTG on Cognitive Radio in NATO, and from reading Mitola's
dissertation on Cognitive Radio [23].
1.2 Research Goals
Motivated by the challenges described above, this work has had the following research
goals:
1.2.1 Research Goal 1: The Conceptual Challenges of Software Defined Radio
The concept of implementing radio waveform functionality in software deployed on a
non-waveform-specific platform has many associated challenges. Some of these have been
solved, partly or wholly, but some remain challenges at the time of writing. The
first research goal targets this issue:
• To assess the degree to which the conceptual challenges of SDR have been solved
and what challenges still remain in the fields of software architecture, computational
requirements, security, regulations and business structure.
1.2.2 Research Goal 2: SCA Workload Overhead
The second research goal explores the negative effects of the Software Communications
Architecture on processing efficiency. The goal is:
• To analyze the workload overhead of Software Communications Architecture-based
SDR in a specific processing configuration.
The details of the specific processing configuration are described in Paper B in Part II.
1.2.3 Research Goal 3: Dynamic Spectrum Access Architectures and Algorithms
The third research goal in this thesis is:
• To contribute to establishing practical ways of dynamically sharing the frequency
spectrum and in particular compare autonomous, distributed and centralized spectrum
sharing architectures.
The scope of the 'algorithms'¹ part of this research goal is limited to using the Waterfilling
algorithm as described in [24] as a starting point for the work.
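For orientation, the classic water-filling allocation distributes a total power budget over parallel channels, giving more power to channels with a lower noise-to-gain ratio. The minimal sketch below illustrates the calculation; the function name, parameters and the simple water-level search are this example's own assumptions, not taken from [24] or the thesis papers.

```python
def waterfill(gains, total_power, noise=1.0):
    """Water-filling power allocation over parallel channels (sketch).

    Allocates p_i = max(0, mu - noise/g_i), where the water level mu
    is chosen so that the allocations sum to total_power.
    """
    floors = sorted(noise / g for g in gains)   # noise-to-gain 'floor' of each channel
    mu = 0.0
    # Find the largest set of channels whose floors lie below the water level.
    for k in range(len(floors), 0, -1):
        mu = (total_power + sum(floors[:k])) / k
        if mu > floors[k - 1]:                  # all k channels are 'under water'
            break
    return [max(0.0, mu - noise / g) for g in gains]
```

With equal gains the budget splits evenly, while a channel whose noise-to-gain floor lies above the water level receives no power at all.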
1.3 Research Methodology
1.3.1 Methodology for Research Goal 1: Conceptual Challenges of Software Defined Radio
The main research method used in this part of the work is a thorough study of the published
literature. Knowledge has been gathered by investigating sources such as IEEE Xplore [25],
SpringerLink [26] and the ACM Digital Library [27], and by doing broader searches using
Google Scholar [28] and Google [29]. Further, the electronic proceedings from the major
yearly conference in the SDR arena, the Software Defined Radio Technical Conference, have
been searched. In order to get the broader overview, relevant chapters of recent textbooks
[21, 30–32] on the subject have been studied.
In order to get a deeper understanding of the SDR challenges and allow for in-depth
discussions with an international group of experts, the author has attended and contributed
to the work of a research task group on SDR, the RTO-IST-080 ”Software Defined Radio”.
The task group has facilitated information sharing between its members, and has performed
experiments and demonstrations [33, 34] using STANAG 4285 [35].
Design experiments with SDR applications, mainly using the STANAG 4285 code, have
also been used as a method to gain hands-on experience, both with SDR application
development in general and with various development tools. Two different types of SDR
platforms have been used. The first is a commercial platform, the flexComm Waveform
Design Studio made by Spectrum Signal Processing. The second is a Linux PC-based
platform, using the OSSIE [36] SCA-based Core Framework made by Virginia Tech, and
optionally a Universal Software Radio Peripheral (USRP) for transforming the signal
samples to/from the radio frequency domain.
¹ In this thesis, the term algorithm is used both when describing centralized calculations and
when describing distributed and autonomous calculation processes. While each autonomous or
distributed agent may perform its decision calculation according to an algorithm, a better term
for the total process of computing a resulting state among a number of autonomous or distributed
radio agents is interactive computation.
The paper was reviewed by IEEE-appointed experts. During the process, the reviewers
suggested improvements and requested expansions in certain parts; these have been
incorporated in the final version of Paper A included in Part II of this thesis.
1.3.2 Methodology for Research Goal 2: SCA Workload Overhead
Research goal 2 is entirely within the computer science discipline. As cited in [37], Denning
et al. have proposed three major paradigms for this discipline: theory, abstraction and design
[38]. Denning et al. comment that the three paradigms are intertwined [38].
TABLE 1.1: Stages in the theory, abstraction and design paradigms, quoted from [38].

Stage | theory | abstraction | design
(1) | characterize objects of study | form a hypothesis | state requirements
(2) | hypothesize possible relationships among them | construct a model and make a prediction | state specifications
(3) | determine whether the relationships are true | design an experiment and collect data | design and implement the system
(4) | interpret results | analyze results | test the system
Other authors have suggested finer-grained divisions of scientific research methods. For
example, Glass [39] promotes a taxonomy of four methods, referred to as the scientific,
engineering, empirical and analytical methods. In another example [40], 22 different
research methods are listed.
Here, the targeted methodology for research goal 2, following the terms of Denning
et al., was the abstraction paradigm: forming a hypothesis and creating a model, then
attempting to verify the model with measured data. In setting up the system in which the
experiment is conducted, elements of the design paradigm are mixed in.
The specific choices as to which type of models, techniques and measurements that were
used, and the reasoning behind these choices, are described in Chapter 2 of paper B and in
Section 4.2.1.1 in Part I.
The experimental work was preceded by a literature study, using the same literature
sources as cited in Section 1.3.1. This work benefited from reading Fortier and Michel's
book on systems performance evaluation [41] as preparation.
1.3.3 Methodology for Research Goal 3: Dynamic Spectrum Access Architectures and Algorithms
Research goal 3 is rooted in a broader scientific context, including information theory, mathematics, radio technology and computer science.
In relation to the spectrum sharing part of the goal, the methodology is mainly an
engineering one in Glass's terms (which resembles the design paradigm in the terms of [38]):
observe existing solutions, propose better solutions, measure and analyze.
In observing existing solutions, a literature study was conducted, again using the same
sources as above. The study also included reading some of the recent textbooks on
Cognitive Radio, including Fette's work [42].
The next step is to propose better solutions, an example being the suggested distributed
interaction algorithm in Paper E in Part II of this thesis.
Measuring in the strict sense means measuring the performance of the proposed system in
its real-world surroundings. Mainly due to time limitations, Paper C contains only a
qualitative discussion of the proposed concepts, and in Papers D and E, Matlab calculations
and simulations with idealized assumptions are presented. Such simulations provide
information on how the proposed solutions work in the simulated environment. Depending
on the accuracy of the simulated environment, the simulations can provide a varying degree
of support for the proposed solutions. Such simulations, however, cannot be regarded as
scientific evidence that the proposed solutions work with the real system in the real world.
Hence, further work is needed in this area in order to verify the performance in real-world
surroundings. The simulation assumptions and their validity are discussed in Section 4.3.2.2.
The analysis step is based mainly on the simulation results, for which the same comments as above apply.
In order to discuss the results in this research topic with an international group of experts,
the author participated in the research task group RTO-IST-077 ”Cognitive Radio in NATO”.
1.4 Short Summary of Results and Implications
1.4.1 Research Goal 1: Conceptual Challenges of Software Defined Radio
Research goal one is answered in Paper A in Part II. The paper provides a comprehensive
summary of SDR challenges and opportunities, and makes suggestions for the future path
of SDR.
The paper contains a brief summary of the Software Communications Architecture.
While one of the goals of SCA is to make portability easier, there are still remaining challenges in this area, as detailed in the ’SW Architectural Challenges’ section. This section
also reviews the present state of SCA tools, and discusses alternative architectures to SCA.
In the ’Challenges and Opportunities Related to Computational Requirements of SDR’
section, computational requirements of SDR as well as available processing elements are
reviewed. The most interesting trend in this area, for handheld units, is seen to be
processors with multiple Single Instruction Multiple Data (SIMD) cores.
There is a lot of research literature on aspects of security such as secure downloading of
applications and protection of the SDR platform against attacks. The portability of the security components of SDR applications between different platform domains is still a challenge,
however. Ways of dealing with this issue are suggested.
Regulatory changes that have already been introduced, and further regulatory changes that
are still considered necessary, are described. The need for an SCA certification authority
and procedure for the European domain is also pointed out.
Product opportunities in both the military and civilian domains are reviewed. Such
opportunities include military cognitive communication systems and military waveform libraries. Examples of civilian opportunities are multiprotocol multiband base stations, mobile multi-standard terminals and satellite devices.
The paper has a comprehensive reference list with 151 references including a number of
landmark SDR publications.
This paper is the only SDR tutorial in the IEEE Communications Surveys and Tutorials
and may serve as an introduction for those entering this area of research or engineering. The
paper also provides suggestions to the SDR community as to how SDR could be further
developed.
1.4.2 Research Goal 2: SCA Workload Overhead
This research goal is treated in Paper B, as well as in the follow-up work to Paper B in Part
I of this thesis. Paper B investigates the effects of varying the granularity of an SCA-based
application on the total workload of a processor. An empirical analysis is made, and two
simple analytical models are formulated for the total workload: one that does not include the
effects of context switches, and another that includes these effects. Comparing results from
the first model with measurements shows that it captures the dominant part of the workload
overhead but underestimates the total workload of the system. The underestimation grows
with packet size. A likely major contributor to the underestimation is that the first model
does not include the indirect losses due to the required updating of caches following a
context switch.
Table 2 in the paper also illustrates the large startup cost of pushing a minimum-size packet
from one component to another, suggesting that for workload optimization it is advantageous
to avoid frequent inter-component transmission of small packets.
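The granularity trade-off can be illustrated with a toy version of such a workload model. This is a hedged sketch: the functional form mirrors the structure of the first model (a fixed cost per inter-component push plus a per-byte cost), but the function name and the parameter values below are invented for illustration and are not the measured values from Paper B.

```python
def workload_per_second(throughput_bytes, packet_size, c_packet, c_byte):
    """Illustrative per-transfer workload model (not Paper B's fitted model).

    c_packet: fixed cost (seconds) per inter-component packet push
    c_byte:   incremental cost (seconds) per payload byte
    """
    packets_per_second = throughput_bytes / packet_size
    return packets_per_second * c_packet + throughput_bytes * c_byte

# Same throughput, different granularity: with small packets the fixed
# per-push cost is paid far more often and dominates the total workload.
small = workload_per_second(1_000_000, 64, c_packet=20e-6, c_byte=5e-9)
large = workload_per_second(1_000_000, 4096, c_packet=20e-6, c_byte=5e-9)
print(f"64-byte packets:   {small:.4f} s of CPU per second")
print(f"4096-byte packets: {large:.4f} s of CPU per second")
```

For a fixed throughput, the fixed-cost term scales inversely with packet size, which is the toy-model analogue of the observation that frequent small packets inflate the overhead.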
In the follow-up work in Part I, further optimizations of SCA-based environments and
applications are described and further measurements provided.
The results provide application-granularity guidelines for designers of SCA-based
applications in which several components run on the same CORBA-enabled processor.
The results may also be used to guide the choice of packet size versus packet frequency
for such applications, and they indicate other factors in SCA environments and applications
that can be optimized.
1.4.3 Research Goal 3: Dynamic Spectrum Access Architectures and Algorithms
The results related to this research goal are provided in Papers C through E, as well as in
Section 4.3.1.3 in Part I.
The architectural part of the research goal is mainly treated in Paper E, which provides
a comparison between DSA architectural concepts. The main conclusion drawn is that the
distributed (decentralized) architecture, preferably with combinations of local information
exchange and indirect information in the form of databases and sensed spectrum, is the
favored one, but that centralized, distributed and autonomous architectures all have their advantages and disadvantages. A particular architecture, the hierarchical central coordination
concept named the Dynamic Frequency Broker (DFB), is outlined in Paper C. Also, Section
4.3.1.3 in Part I provides a description of a Peer-to-Peer (P2P) DSA architecture, where the
main idea is that of a distributed spectrum broker functionality and interaction between the
radio peers on a P2P overlay rather than bound to specific physical coordination channels.
The algorithm part of the research goal is treated in Papers D and E. Paper D compares
autonomously calculated power-versus-frequency assignments in a link interference model
to centrally calculated global optima. Paper E proposes three autonomous policies and two
distributed interaction algorithms for power-versus-frequency assignments, and provides
Matlab simulation results for the proposed policies and algorithms. Under the specific
assumptions made for the simulated environment, the simulations show that when all links
are required to meet the same minimum rate, the distributed interaction algorithms provide
a higher average rate over all links than the autonomous policy-governed cases.
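As a hedged illustration of the kind of constraint evaluated in such simulations, the sketch below computes Shannon rates in a simple link interference model and checks a common minimum-rate requirement across all links. The function names, gain-matrix layout and parameter values are assumptions of this example, not the actual models of Papers D and E.

```python
import math

def link_rate(power, gain, interference, noise=1e-3, bandwidth=1.0):
    """Shannon rate of one link under co-channel interference (sketch)."""
    return bandwidth * math.log2(1.0 + power * gain / (noise + interference))

def all_meet_min_rate(powers, gains, cross_gains, r_min):
    """Check whether every link meets the common minimum rate r_min.

    cross_gains[i][j] is the (hypothetical) gain from transmitter j
    into receiver i, for i != j.
    """
    n = len(powers)
    for i in range(n):
        interference = sum(powers[j] * cross_gains[i][j]
                           for j in range(n) if j != i)
        if link_rate(powers[i], gains[i], interference) < r_min:
            return False
    return True
```

A distributed interaction algorithm would then adjust the power and frequency assignments across the links subject to such a constraint, whereas an autonomous policy would fix each link's behavior locally.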
1.5 Unaddressed Issues / Proposals for Future Work
1.5.1 Research Goal 1: Conceptual Challenges of Software Defined Radio
This research goal does not include the analog hardware and hardware-component-related
issues of SDR. Analog-to-digital conversion issues are, however, dealt with to some extent
in Section 2.1. Furthermore, Paper A and Section 3.1 provide advice about additional
sources of information on these aspects. As is evident from Section 3.1, a rich variety of
publications dealing with hardware issues of SDR already exists, so there is no imminent
need to extend the work of Paper A in this direction.
Software Defined Radio in relation to the new trend of Green Telecommunications is
also a topic which is not dealt with in Paper A. In light of the societal importance of the
environmental aspects of telecommunications, this is a topic that deserves further work.
1.5.2 Research Goal 2: SCA Workload Overhead
The work on this research topic did not comprise verification of modeled versus measured
data of the second derived model in Paper B, as accurate estimates of the parameters in
the second model were not within the scope of the work. Further research on the second
model including improved methodology for estimation of the parameters is suggested as an
extension of this work.
The research covers the single-processor implications of application granularity, as well as multiprocessor environments in which more than one component needs to be run on each CORBA-capable General Purpose Processor (GPP). A natural extension of the
work, however, is towards workload optimization in a general heterogeneous multiprocessor
environment. A qualitative overview of considerations for such environments is given in
Section 4.2.2 and a more detailed, quantitative analysis is suggested as future work in this
area.
1.5.3 Research Goal 3: Dynamic Spectrum Access Architectures and Algorithms
The results from this research goal are based largely on numerical calculations and simulations, as well as references to the literature. Because the time and other resources needed to build experimental networks exceeded what was available, the work plan did not include measurements on experimental systems. The design of experimental networks for further verification and experimentation with the suggested algorithms is
suggested as a future work topic.
1.6 Thesis Organization
This thesis is organized in two parts. Part I provides an overview of the research work and a
summary of the resulting published papers and Part II contains these papers.
In Part I, Chapter 2 provides some further introduction to the research areas of this thesis,
Software Defined Radio and Dynamic Spectrum Access, for the benefit of the reader who
does not already have detailed knowledge about these topics. Chapter 3 contains a review of
research related to the three research goals. Chapter 4 summarizes the contributions related
to the same three research goals. Chapter 5 contains conclusions and suggestions for further
work.
In the papers in Part II where there are co-authors, the contributions of this author are detailed on the front page of each paper.
Chapter 2
Background
2.1 A Brief Introduction to Software Defined Radio
The term ’Software Radio’ was coined by Joseph Mitola III in 1991 [9], who also described
the concept in [8]. The term ’Software Defined Radio’ was adopted [32] by the SDR Forum,
now the Wireless Innovation Forum1, a membership organization for the different actors
with an interest in SDR.
While there is no globally unique definition of Software Defined Radio, many definitions have been proposed by different authors and organizations. The Wireless Innovation Forum
defines SDR as a ”Radio in which some or all of the physical layer functions are software
defined” [43]. The Federal Communications Commission (FCC) similarly has its own definition [44], which also covers the software that affects compliance with FCC rules: ”A
radio that includes a transmitter in which the operating parameters of frequency range, modulation type or maximum output power (either radiated or conducted), or the circumstances
under which the transmitter operates in accordance with Commission rules, can be altered
by making a change in software without making any changes to hardware components that
affect the radio frequency emissions.” Some sources define alternative terms (e.g. ’Flexible
Architecture Radio’ [32]) depending on how much of the radio functionality is implemented
in software. A main parameter in the various definitions and alternative terms is how flexibly the radio waveform can be changed by changing software without modifying the SDR Platform. The ’waveform’ term in SDR includes the physical layer of a standard, but
may also include higher levels of the relevant wireless standard.
The trend to include more and more software as part of radio designs had, however, started well before 1991, as also pointed out by Mitola. A key influencing factor was the
introduction of the Digital Signal Processor (DSP), with the TMS320 [45] from Texas Instruments, introduced in 1982, being an important example, enabling parts of the signal
processing to be implemented in software. A further influencing factor was packet radio
networks [46] which caused protocol software to be integrated in radio sets.
The ’Idealized Software Radio’ as it was described by Mitola [8] is illustrated in Figure
2.1.
FIGURE 2.1: The ’Idealized Software Radio’, simplified version of the figure in [8]. [Diagram: A/D converters feeding a software processing block, with D/A converters on the output side.]

1 SDR Forum was renamed to Wireless Innovation Forum in December 2009. The SDR Forum name is used throughout Paper A in Part II, as this paper was filed prior to the name change.

In the book by Kenington [32] it is shown that Figure 2.1, with the addition of the filtering needed to avoid aliasing in the resulting digital spectrum, is not a realistic near-future architecture when the receiver is to cover a range up to 2.2 GHz, to include standards such
as GSM, UMTS and mobile satellite communication. The main assumption made in this assessment is that the Analog-to-Digital converter (A/D) contains a sample-and-hold element,
as is typical for high-resolution A/Ds. An optimistic estimate of the A/D power consumption is made, considering only the power that goes into charging this sample-and-hold, and
neglecting other features of the A/D. A further assumption is that a dynamic range of approx. 122 dB (20 bits) is needed in order to be able to receive a weak signal in the presence
of other strong signals. The resulting power consumption is found as [32]
P = \frac{kT}{t_s} \cdot 10^{(6n+1.76)/10} \qquad (2.1)
where P is the power consumption of the Analog-to-Digital converter’s sample-and-hold, k is Boltzmann’s constant (1.38 · 10^{-23} J/K), T is the device temperature in Kelvin, t_s the sampling interval and n the number of bits in the A/D. A resolution of 20 bits and the Nyquist sampling
rate of 4.4 Gsamples/second yields a power consumption of approximately 30 W, which
certainly is too high for a handheld device. Additionally, practically achieved A/D power
consumption is several orders of magnitude above this theoretical best-case number [32].
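Equation 2.1 can be checked numerically; a short Python sketch reproducing the roughly 30 W figure quoted above:

```python
def sample_hold_power(n_bits, f_s, temp_kelvin):
    """Optimistic A/D sample-and-hold power estimate from Equation 2.1:
    P = (kT / t_s) * 10^((6n + 1.76) / 10)."""
    k = 1.38e-23               # Boltzmann's constant [J/K]
    t_s = 1.0 / f_s            # sampling interval [s]
    return (k * temp_kelvin / t_s) * 10 ** ((6 * n_bits + 1.76) / 10)

# 20 bits at the Nyquist rate of 4.4 GSamples/s, device temperature 353 K (+80 deg C)
p = sample_hold_power(20, 4.4e9, 353.0)
print(f"{p:.0f} W")  # on the order of 30 W, far too high for a handheld device
```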
Theoretically it is possible to overcome the above power limits with technologies that either do not rely on charging up a sample-and-hold element, or that use superconductive devices [32]. A review of typical currently available A/D converters, Figure 2.2, shows that they all consume significantly more power than the above estimate. As for the
superconductive devices, these are still impractical to use as they need to be cooled to very
low temperatures.
Less demanding Mitola receivers, in terms of bandwidth and dynamic range, are practical to implement. As an example, a 3-30 MHz High Frequency (HF) SDR receiver of the
Mitola type, with 12 bit A/D, has been built [49].
While the ideal Mitola receiver may receive every frequency channel concurrently [49],
practical receivers for higher frequencies must sacrifice some of this flexibility to achieve
more practical requirements for A/D conversion and for the processing power. As an example of a practical modification, switchable bandpass filters may be added prior to the A/D in Figure 2.1 and used together with the undersampling technique [49] to relax the requirements on the A/D converter. Another example is a receiver in which the Intermediate Frequency (IF) processing and onwards is performed in software or reconfigurable digital hardware, see Figure 2.3.
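The undersampling (bandpass sampling) technique mentioned above selects a sample rate well below twice the highest signal frequency while still avoiding aliasing. A sketch of the classical valid-rate calculation; the example band edges are illustrative assumptions, not values from [49]:

```python
def undersampling_rates(f_low, f_high):
    """Valid sample-rate ranges for bandpass (under)sampling a band
    [f_low, f_high], from the classical bandpass sampling theorem:
    2*f_high/m <= fs <= 2*f_low/(m-1), for integer m up to f_high // bandwidth."""
    bandwidth = f_high - f_low
    ranges = []
    for m in range(1, int(f_high // bandwidth) + 1):
        lo = 2.0 * f_high / m
        hi = 2.0 * f_low / (m - 1) if m > 1 else float("inf")
        if lo <= hi:
            ranges.append((lo, hi))
    return ranges

# Illustrative 25 MHz-wide band from 150 to 175 MHz
for lo, hi in undersampling_rates(150e6, 175e6):
    print(f"fs from {lo/1e6:.1f} MHz to {hi/1e6:.1f} MHz")
```

For this example band the lowest valid range reaches the 2 x bandwidth limit of 50 MSamples/s, far below the 350 MSamples/s that lowpass sampling of the same band would require; this is the relaxation of the A/D requirements referred to above.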
FIGURE 2.2: The optimistic power consumption estimates from Equation 2.1 at a device temperature of +80 deg C plotted for 12 bits, 16 bits and 20 bits resolution. The figure also gives the listed (per-channel) power consumption of available 12 bits and 16 bits A/D converters with sample rates from 100 MegaSamples/second from Analog Devices [47] and Texas Instruments [48] (A/D data compiled December 2009). [Log-log plot: power consumption [W] versus sampling frequency [Samples/sec], spanning 10^8 to 10^10 Samples/sec.]
FIGURE 2.3: Software / reconfigurable Intermediate Frequency (IF) processing receiver example. LNA = Low Noise Amplifier. LO = Local Oscillator. [Block diagram: band-select filter, LNA, variable LO, IF stage, A/D, software / reconfigurable digital processing.]
SDR has been driven partly by the continuous improvements of programmable processing circuitry such as DSPs and Field-Programmable Gate Arrays (FPGAs). An important
driver has also been the military’s vision of achieving interoperability between systems by being able to easily port waveforms onto platforms, and thus of having flexible, waveform-upgradeable platforms for the Network Centric military tactical environment.
Many commercial-domain motivations also exist, including the increasing number of waveform standards that need to be supported by cellular terminals and base stations [50], the remote upgrade possibilities for satellite communications, the predicted future reconfigurable cellular networks, and the future Cognitive Radio and Dynamic Spectrum Access
(DSA) nodes and networks.
While some manufacturers may want to maintain proprietary couplings between the software waveform application and the radio platform, a vision for SDR is a separation between
the two. Such a separation enables portability of waveform applications from platform to
platform. It also enables a business model where platforms and waveform software may be
made by different companies, and where customers may buy additional waveform software
from third parties. This may promote platform volume benefits through the standardization
of platforms with fewer and more specialized platform manufacturers. It will also lead to more competition in waveform software design, where there may be true competition also in waveform additions to existing platforms.
The separation of applications from their operating environment is also one of the goals of
the Software Communications Architecture (SCA). SCA was developed for and is managed
by the Joint Program Executive Office (JPEO) of the Joint Tactical Radio System (JTRS)
program. SCA is an architecture that enables SDR applications to be built as compositions
of components, and enables them to connect to platform devices. SCA defines interfaces for
managing and deploying the software components. The Software Communications Architecture also defines the domain profile, a set of Extensible Markup Language (XML) files,
which is used when deploying components, logical devices and services onto the system.
Paper A has a further description of SCA, and also discusses to what extent SCA has
provided real platform-to-application separation and to what extent SCA-based applications
are portable.
While SDR is still a field of active research, it is also in a phase where SDR-implemented
radio systems are becoming available. Some military radio unit types meeting some of
the basic JTRS requirements [51] have been fielded [52], while production start for other JTRS types is still in the future. In the civilian domain, commercial research platforms are
available [53, 54], as are also some cellular base stations [55, 56].
A further and more detailed introduction to Software Defined Radio is provided in Paper
A.
2.2 Introduction to Dynamic Spectrum Access
2.2.1 General
The dominant frequency assignment regime at present is one where the frequencies or
spectrum sub-bands are typically assigned for a significant amount of time. A frequency
assignment is defined as an ”Authorization given by an administration for a radio station to
use a radio frequency or radio frequency channel under specific conditions” [57]. In Norway,
as an example, such an assignment may last up to 20 years for some services [58].
This ’static’ frequency assignment regime in many cases leads to low efficiency in utilization of the spectrum. While most of the spectrum is fully assigned to users, measurements show that many frequency bands ’are not in use or are only used part of the time’ [17].
This is true even for urban areas such as Bern, as shown by measurements in [18].
At the same time as very limited additional spectrum can be made available under this static regime, the demand for communication capacity, and hence for additional spectrum, continues to grow, particularly for mobile two-way data communication. In the military
domain, the information superiority and mobility demanded by the Network Centric Operations (NCO) doctrine causes additional spectrum demands for example for the full-motion
video from Unmanned Aerial Vehicles (UAVs) and for high-capacity ad hoc networks.
A number of more dynamic approaches to frequency assignment have been put forward by various authors to improve the utilization of the spectrum resource.
The approach that is attracting most attention is Cognitive Radio. Cognitive Radio was
introduced by Joseph Mitola III and Gerald Q. Maguire Jr. [15] as an extension of software
radio with ’radio-domain model-based reasoning’. The ’intelligent agent’ [15] perspective
of the radios implies that systems of such radios are multiagent systems. Cognitive Radio was further detailed in Mitola’s dissertation [23], where one characterization is that of a ’trainable’ radio rather than just a ’programmable’ one [23], and where it is discussed as a wide-context environmentally aware radio. The Cognitive Radio is aware of the user’s needs,
the present location, available networks and so on, and uses this knowledge to tailor both
radio resources and services [23]. The spectrum access part of Cognitive Radio has since
dominated in publications, to such an extent that many publications only refer to the spectrum access properties of Cognitive Radio. As an example, on page 27 of [18] it is stated that
’Cognitive Radios are radio systems that autonomously coordinate the usage of spectrum’.
One of the goals of this work is to analyze the advantages and disadvantages of different
Dynamic Spectrum Access architectures. To avoid the ambiguities in the Cognitive Radio
term in this context, it will be used to a lesser degree, and the terms ’Autonomous radio
node’, ’Distributed Coordination’2 and ’Centralized Coordination’ will be used instead, as
will be explained in the following.
Before discussing DSA architectures, spectrum sharing principles will be reviewed.
2.2.2 Spectrum Sharing Principles
A recommended categorization of spectrum sharing principles is found in the work by Zhao
and Sadler [60], as presented in Figure 2.4.
Dynamic Spectrum Access
  Dynamic Exclusive Use Model
    Spectrum Property Rights
    Dynamic Spectrum Allocation
  Open Sharing Model
  Hierarchical Access Model
    Spectrum Underlay
    Spectrum Overlay

FIGURE 2.4: A categorization of Dynamic Spectrum Access sharing principles (taxonomy of Dynamic Spectrum Access [60]).
The Dynamic Exclusive Use model, Figure 2.4, has two categories, Spectrum Property
Rights (licensees sell and trade spectrum) and Dynamic Spectrum Allocation. In Dynamic
Spectrum Allocation, a chunk of spectrum is dynamically provided for a service, in a defined
region. Efficient exploitation of the spectrum chunk is left to the service provider. Restrictions must ensure that the use of the spectrum does not interfere with other services in other
regions.
In the Open Sharing model, the spectrum is shared between peer users on an equal basis.
In the Hierarchical Access model, the primary users are the spectrum owners that have
prioritized rights to the use of the spectrum. Secondary users may use the spectrum when
not interfering with the primary users. In what is referred to as an underlay mode, secondary
users may use the spectrum, for example with low power and a very wide bandwidth, when
2 In this thesis the term ’Distributed Coordination’ has been used to be consistent with e.g. the survey by Akyildiz et al. [59]. An alternative term is ’Decentralized Coordination’.

the resulting interference level is negligible and does not degrade the communication links
of the primary user. Ultra Wide Band (UWB), which is based on short-duration low-power
pulses, is an example of such underlay use. In the overlay mode, spectrum opportunities
in the form of vacant combinations of time and frequency slots in a geographical area are
identified by the secondary users. The secondary user uses the spectrum opportunity until
the primary user re-enters, at which time the secondary user needs to move to a new spectrum
opportunity.
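The overlay behaviour described above can be sketched as a small decision rule: the secondary user stays on a vacant channel until the primary re-enters, then moves to another sensed-vacant channel or backs off. A minimal illustrative Python sketch (the channel model and names are hypothetical, not taken from [60]):

```python
import random

def overlay_step(busy, current, rng):
    """One spectrum-overlay decision for a secondary user.

    busy[c] is True if channel c is sensed as occupied by a primary user.
    Returns the channel to use next, or None to back off entirely."""
    if current is not None and not busy[current]:
        return current                          # opportunity still open
    vacant = [c for c, b in enumerate(busy) if not b]
    return rng.choice(vacant) if vacant else None

rng = random.Random(42)
channel = None
for step in range(5):
    busy = [rng.random() < 0.5 for _ in range(4)]   # toy primary occupancy
    channel = overlay_step(busy, channel, rng)
    print(step, busy, "-> secondary uses channel", channel)
```

Real overlay systems add sensing uncertainty, evacuation deadlines and coordination on where to move next; this sketch only captures the vacate-on-primary-return behaviour.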
Since DSA is still an area where the terminology is young, some parallel
terms exist. Hierarchical Access is also referred to as ’Vertical Spectrum Sharing’ [18].
Another term, ’Horizontal Spectrum Sharing’ refers to ’sharing between radio systems with
a similar regulatory priority’ [18], which is similar to Open Sharing.
The dynamic sharing of spectrum has a clear parallel in the Medium Access Control
(MAC) protocols [59], that are responsible for the access to the shared medium of existing
wireless systems. Channel-based medium access may use combinations of Frequency Division Multiple Access (FDMA), Time Division Multiple Access (TDMA) and Code Division
Multiple Access (CDMA) [10]. Space Division Multiple Access (SDMA) such as through
beamforming antennas or MIMO technologies, may also be included, as may Polarization
Division Multiple Access (PDMA). Packet-based medium access typically uses TDMA, with
the 1971 ALOHA protocol as the first example [10].
Dynamic spectrum sharing, however, differs from and poses challenges beyond those of these MAC protocols [59]. In Hierarchical Access, a difference is that
the spectrum sharing needs to take into account various types of licensed users. In both
Hierarchical Access and Open Sharing a challenge is that many different types of wireless
systems may need to coexist, rather than just radio nodes from a single radio system with a
single MAC protocol. Typically DSA also aims to share wider spectrum ranges.
In the enclosed Paper C, all three models, Dynamic Spectrum Allocation, Hierarchical Access and Open Sharing, will be relevant. In Papers D and E, an Open Sharing model
is assumed.
2.2.3 Architectures
Different DSA approaches have been suggested in the literature. An architectural classification of these approaches is shown in Figure 2.5.
Dynamic Spectrum Access
  Coordinated
    Centralized Coordination
    Distributed Coordination
  Autonomous

FIGURE 2.5: Dynamic Spectrum Access Architectures.
The coordinated architectures are relevant for all of the access sharing principles in Figure 2.4. The Autonomous architecture is applicable for the Open Sharing and Hierarchical
Access models.
In the Autonomous architecture, each radio node, or a co-working group of such nodes,
autonomously decides on the use of spectrum based on spectral sensing and is restricted and
governed by predefined policies. This is often referred to as Opportunistic Spectrum Access.
A Cognitive Radio which does not have functionality that coordinates its spectrum decisions
with other spectrum users or with any spectrum management entity, belongs to this category.
In a Centralized Coordination architecture, the coordination is enabled through specific
infrastructure, for example in the form of spectrum servers that provide spectrum leases to
radio nodes. Administrative communication channels are required between the infrastructure decision elements and the radio nodes. This could be in the form of direct wireless
connections, in the form of wireless Internet or in the form of fixed Internet links.
In a Distributed Coordination architecture, the coordination is between the radio nodes
themselves. This has the advantage of avoiding the dependency on additional spectrum
decision infrastructure. Administrative communication channels are required between the
nodes, either as direct wireless connections or Internet ones as in the previous case.
Paper C outlines a concept that belongs to the Centralized category. Papers D and E
discuss all of the three architectures.
2.2.4 Transmitter- and Receiver-centric Models
Another view of dynamic spectrum sharing is a division into transmitter-centric sharing and
receiver-centric sharing.
Transmitter-centric sharing is the more conservative approach. Here the spectrum used
by a transmitter is considered occupied in a given region around the transmitter even if there
are no receivers in (parts of) this region. This occupancy region is defined as the region
where the transmitter field strength is above some defined minimum level.
In receiver-centric sharing, decisions about whether a spectrum interval may be reused take into account the accumulated interference at all relevant receivers. Examples of acceptance criteria may be a minimum signal-to-noise-and-interference ratio at the affected receivers, or minimum communication bit rates of the affected communication links. Receiver-centric
sharing may permit a denser reuse of spectrum objects, since regions where there are no
receivers may permit spectrum reuse even if they are within transmitter-centric occupancy
regions.
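A receiver-centric admission decision can be sketched as follows: a new transmission is admitted on a channel only if every affected receiver still meets its minimum SINR after the added interference. This is an illustrative sketch with hypothetical names and linear (not dB) power units, not an algorithm from Papers C, D or E:

```python
def admit_new_transmitter(signal, interference, added_interf, noise, sinr_min):
    """Receiver-centric spectrum reuse check.

    signal[i]        received power of receiver i's own link,
    interference[i]  accumulated interference already seen at receiver i,
    added_interf[i]  extra interference the candidate transmitter would add,
    all in linear units. Admit only if every receiver keeps SINR >= sinr_min."""
    for s, i_old, i_new in zip(signal, interference, added_interf):
        if s / (noise + i_old + i_new) < sinr_min:
            return False
    return True

# Strong added interference at the receiver -> rejected; weak -> admitted
print(admit_new_transmitter([1e-9], [1e-12], [1e-9], 1e-12, 10.0))   # False
print(admit_new_transmitter([1e-9], [1e-12], [1e-12], 1e-12, 10.0))  # True
```

A transmitter-centric rule would instead reject any candidate whose field strength exceeds a threshold anywhere in the occupancy region, regardless of whether receivers are present there, which is why the receiver-centric rule can permit denser reuse.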
For the centralized concept outlined in Paper C, both transmitter-centric and receiver-centric sharing will be relevant. In Papers D and E receiver-centric sharing is assumed.
Chapter 3
Related Work
3.1 Related Work to the Results from Research Goal 1, Conceptual Challenges of Software Defined Radio
The main results from research goal 1 are in Paper A. In addition to the author’s own experience with SDR, Paper A is built on a number of references, as listed in the paper. These
include foundational papers by Mitola [8] and others, Tuttlebee’s book on enabling technology for Software Defined Radio [21], Kenington’s book [32] as well as a number of classical
and recent papers.
The literature on SDR is vast, and a review of all the work that is related in some way to Paper A would lead too far. The review is therefore restricted to the
most important other articles of a tutorial or survey type that deal with challenges and/or
opportunities of SDR:
Naturally such a review needs to start with Mitola, who named and in some ways initiated the research field with his 1992 article ’Software Radios: Survey, Critical Evaluation and Future Directions’ [8]. This visionary work suggested and predicted many groundbreaking features of SDR, such as open software architectures, meta-level models and radio
simulation and Computer-Aided Design environments that permitted integrated work from
many different disciplines. It also reviewed the analog-to-digital conversion and processing
resources and requirements of that time. This work was followed up by a number of articles
in the years to come, many of which further elaborated on software architecture for SDRs,
and in the 1999 article [61] promoted a mathematical perspective of SDR. What Paper A offers as a complement to these visionary and forward-looking papers is an update with recent progress in the field, as well as putting the practical challenges of SDR (such as portability, security and regulations), and the opportunities, clearly on
the table.
Jondral has written an excellent 2005 tutorial [62] which covers both Software Defined Radio and Cognitive Radio, with a focus on reconfigurability, mobile communication
standards and the Software Communications Architecture. It also goes in detail into SDR
receiver front-ends, which is not a topic in Paper A. However, Jondral’s tutorial does not
sum up remaining SDR challenges and it does not go into business models and product
opportunities, regulations, security or alternative SDR architectures as does Paper A.
Several of the tutorials and surveys have subtopics that overlap with Paper A, but deal
only with one or a few subtopics of SDR. Shamsizadeh [63] deals in a thorough way with
security issues in Software Defined Radio and Cognitive Radio, with a particular focus on
security threats. Giacomoni and Sicker (2005) [64] go into great detail about how the transition from ’static’ radio systems to flexible software defined ones challenges the certification
and assurance processes. Lehr et al. [65] go into business and regulations, discussing implications of SDR on the wireless value chain, wireless services and public policy. González
et al. [66] have provided a very thorough tutorial on the Software Communications Architecture, which also promotes the Open-Source SCA Implementation Embedded (OSSIE).
Zhang et al. [67] convey insights into SCA-compliant waveform development.
Some of the tutorials and surveys have a different approach and view SDR from a different angle than Paper A, but on the other hand do not provide the same broad view of issues
and benefits: Cummings’ 2007 tutorial [68] (only an abstract is available on IEEE Xplore)
contains an interesting historical view on the path from wireless communications in 1890 via
’hardware radios’ through to SDR. Tribble’s 2008 [69] ’... Fact and Fiction’ article discusses
some of the postulated benefits of SDR: The reduced logistics cost, the possibilities for rapid
capability upgrades and SDR as an enabler for Network Centric Operations (NCO). Pearson
(2001) [70] emphasizes the system engineering challenges resulting from SDR, including
requirements management and the complex and evolving design as well as verification. He
also emphasizes the opportunities by ’spiral’ (i.e. incremental) design and user interaction
during the development process.
Several publications discuss the front-end and analog-to-digital conversion sub-topics, which are not within the scope of research goal 1 and Paper A: Included in this group is
Svensson and Andersson’s ’... Visions, Challenges and Solutions’ [71] that focuses on frontends, A/D converter and filtering issues. Rusu and Ismail [72] deal with A/D converters in
SDR. Haghighat [73] focuses on hardware aspects of SDR, including antenna technology,
radio frequency modules, conversion between the digital and analog domains, digital signal
processing and interconnection technology.
Although, as is evident from this review of related papers, there are a number of tutorial and survey papers on SDR, there is no updated paper that summarizes the challenges and opportunities over the same breadth of SDR aspects as Paper A. It should also be mentioned
that Paper A is the only SDR tutorial/survey paper in the IEEE Communications Survey and
Tutorials online journal.
3.2 Related Work to the Results from Research Goal 2, SCA Workload Overhead
In contrast to the first research goal and Paper A where there were many related works in
the literature, there are very few papers discussing workload overhead on CORBA-enabled
processors in SCA. This fact is also confirmed in a paper by the Communications Research
Centre Canada’s SCARI development team [74] where it is explicitly stated that ”The performance of CORBA in the context of SCA is nearly absent from the research community
literature.” It should be added though, that in addition to the research on CORBA in the
context of SCA, there is also earlier research on the performance of CORBA by itself, some
of which is mentioned in Paper A.
In addition to the few papers discussing specifically workload overhead on CORBA-enabled processors in SCA, as in Paper B, there are a few more papers going into inter-component latency and/or latency variation and/or data throughput. After reviewing the
specific workload overhead research, this second group of papers will also be commented
on. The reason for also including these papers in this review of related research is that latency, throughput and processor workload are in most cases dependent parameters. For example, if the latency is caused by data marshalling inside the processor workspace, the latency at the same time gives rise to a workload overhead.
The review is divided into research that was published prior to Paper B, and research that
was published at a later time than Paper B.
3.2.1 CORBA in SCA: Workload Overhead
3.2.1.1 Published prior to Paper B
The paper that is most directly related to Paper B and was published prior to it is one by Balister et al. [75] which similarly investigates inter-component communication using OSSIE
and omniORB. The waveform application that is used in this case is a different one, an FM
waveform application. Similar to Paper B, the statistical sampling profiler OProfile is used to
acquire processor workload results. The analysis is in some ways less comprehensive than the
one in Paper B: The paper does not vary the granularity of the applications as does Paper B,
but reports profiling results for only a single 3-component application configuration. Also,
it does not vary the inter-component packet sizes and packet rates. Further, it does not propose any models for the overhead as does Paper B. However, a breakdown of the processor
workload is included. With the specific waveform application and application granularity
used, Balister et al. reach a somewhat different conclusion than Paper B and the paper by Singh et al. [76] (see below): they conclude that the processing overhead from CORBA was significantly lower than the required signal processing of the tested waveform.
A conclusion that is consistent with Paper B is that the copying of data into CORBA sequences in the application components, an operation required by the StandardInterfaces in
OSSIE, creates a certain amount of overhead.
The coauthor of Paper B, Jon Neset, presented some initial SCA-based application workload measurements in a university project thesis [77] in 2007.
3.2.1.2 Published later than Paper B
The paper most closely related to Paper B is the one by Singh et al. [76], also written in the context of work in the RTO IST-080 SDR group, and released a few months later than Paper B. As in Paper B, a Stanag 4285 waveform application is used to
investigate the effects of SCA application granularity. The 4285 transmitter is split into two
different granularities, 2 components and 4 components. Compared to Paper B, a different SCA Core Framework (SCARI) and a different ORB (TAO) are used. However, as in Paper B it is shown
that the ORB and SCA give significant overheads relative to the functional processing of
the components, and it is concluded that the components should perform significant signal
processing in order to avoid being dominated by the overheads. The paper has a useful
breakdown of the overheads from the ORB, SCA and libraries.
The profiler used in this paper, Valgrind [78], is significantly different from the ones used
in Paper B. Valgrind instruments executable files by running them on a synthetic Central
Processing Unit (CPU). It is capable of profiling the activity of the executable, and also
all dynamically linked-in libraries. It does this very accurately, and can provide results as numbers of clock cycles on the specific processor. The downsides of this approach are, however, that the executables run slower than real time, and that the profiling is limited to the executable and the dynamically linked-in libraries; it is not system-wide.
Further differences relative to Paper B are that [76] does not investigate high application
granularities and different packet sizes, does not attempt to formulate any models for the
workload overhead, and does not investigate the contribution of context switches.
In a paper by Abgrall et al. [79], both CPU load and latency are studied and compared between the SCA-based OSSIE environment (which is also used in Paper B) and GNU Radio, which uses a single-processor, non-CORBA architecture. Somewhat in line with the conclusions in Paper B, the CPU load is found to be five times higher for the OSSIE system than for GNU Radio when running the same waveform functionality. An FM receiver waveform is used for the comparison, with six different waveform granularities, the maximum being nine components and the minimum three components. With OSSIE, as in Paper B, a dependency on granularity is observed, with the three-component version using 22% CPU and the nine-component version using 36%. The non-signal-processing CPU load is also discussed, concluding that there should be significant processing inside an SCA-based component, which is the same conclusion as in Paper B. Memory use of the waveform implementations in OSSIE, as well as intercomponent latencies, are also discussed; these topics are outside the scope of Paper B. The paper has, later than and independently of Paper B, reached some of the same conclusions. Abgrall et al. do not, however, propose models for the CPU overhead caused by SCA/CORBA, and do not go into the relation between the CPU load caused by the inter-component communication itself versus that caused by context switching, as does Paper B.
The review of related research clearly shows that there are few papers related to Paper B, and none that cover all of its investigations to the same extent. Paper B hence clearly fills a gap in the research literature.
3.2.2 CORBA in SCA: Latency and/or Throughput
The review of the latency and throughput research is included here for completeness as latency, throughput and workload overhead are in many cases related. Also, the added latency
and throughput reduction due to CORBA affect SCA-based systems in the same way as
workload overhead, i.e. as a drawback and an argument against SCA.
3.2.2.1 Published prior to Paper B
In research by Bertrand et al. in 2002 [80], latency is compared between the SCA-defined
packet transfer, transfer by local (same-address-space) method invocations and also transfer
by CORBA remote one-way and two-way methods. The authors find that 256-byte packets transferred by the SCA-defined pushPacket method use 1300 µs on average, compared to just 9 µs for a local call. The paper nevertheless concludes that, given the advantages provided by the SCA in other areas, the latencies are acceptable.
3.2.2.2 Published later than Paper B
Using OSSIE and omniORB, Tsou et al. [81] make an interesting comparison of latency on
a single-processor system using Unix domain sockets as CORBA transport, relative to using
the default TCP/IP. A latency reduction of roughly a factor of two is found when using Unix
domain sockets. By comparison, Paper B employs the default TCP/IP endpoint settings
of omniORB. The paper [81] does not provide processing overhead numbers for the two
different cases.
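The transport effect reported in [81] is easy to probe outside any ORB or SCA framework. The sketch below (plain Python sockets, no CORBA; all names are this author's own) measures the mean round-trip time of small packets over a Unix domain socket pair and over a TCP/IP loopback connection; on many systems the Unix-domain path is noticeably faster, though the exact ratio varies with the operating system and packet size.

```python
import socket, threading, time

def echo(conn, n):
    # Echo n packets back to the sender.
    for _ in range(n):
        conn.sendall(conn.recv(4096))

def mean_rtt(make_pair, n=1000, size=256):
    a, b = make_pair()
    t = threading.Thread(target=echo, args=(b, n))
    t.start()
    payload = b"x" * size
    start = time.perf_counter()
    for _ in range(n):
        a.sendall(payload)
        a.recv(4096)
    elapsed = time.perf_counter() - start
    t.join(); a.close(); b.close()
    return elapsed / n                      # seconds per round trip

def unix_pair():                            # AF_UNIX transport
    return socket.socketpair()

def tcp_pair():                             # TCP/IP loopback transport
    srv = socket.socket()
    srv.bind(("127.0.0.1", 0))
    srv.listen(1)
    c = socket.create_connection(srv.getsockname())
    s, _ = srv.accept()
    srv.close()
    for sock in (c, s):                     # disable Nagle, as ORBs typically do
        sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
    return c, s

print("unix domain: %.1f us" % (mean_rtt(unix_pair) * 1e6))
print("tcp loopback: %.1f us" % (mean_rtt(tcp_pair) * 1e6))
```

Such a measurement isolates the transport contribution only; the marshalling and demultiplexing done by a real ORB sit on top of it.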
Navarro et al. [82] compare throughput performance in the OSSIE and GNU Radio
frameworks, as in [79]. The comparison used the same base waveform application, a binary phase shift keying modulation waveform, adapted to each of the frameworks. It is concluded that the maximum throughput was surprisingly low in both cases, around 700 kbps, with the performance in OSSIE being better than in GNU Radio by a small margin.
The paper, however, does not contain any analysis of why the throughput was so low. In particular, it contains no profiling of the OSSIE-implemented application and no CORBA-related investigations. The conclusion that the OSSIE-implemented application was only narrowly better somewhat contradicts [79] and raises some questions.
In Bernier et al. (2009) [74], round-trip measurements have been made of buffers of data
between a PerformanceApp SCA resource, a PassThrough resource and back to the PerformanceApp. Here, measurements for four data packet (buffer) sizes were made: 1024 and
2048 octets, as well as 1024 and 2048 double-precision (64-bit) numbers. The measurements
were done for both an Intel/Linux system and a PowerPC/Integrity system. While measurements were done for both systems using TCP/IP transport, for the PowerPC/Integrity system
a different transport named INTCONN was also tested, as well as bypassing the transport
by running in a single address space. The article demonstrates round-trip-time performance
dependencies on the underlying transport, where the INTCONN is shown to provide better
performance than TCP/IP. The SCARI development tool that was used in the experiment
had the possibility to automatically generate a ResourceFactory that enabled it to run the
SCA application Resources in a single address space. Also, the ORB used was able to recognize this and substitute remote invocations with local ones. Running the SCA Resources
in a single address space in this way enabled even better latency performance. These results are very relevant in terms of possible workarounds for the issues observed in Paper B; further details, as well as advantages and disadvantages of this procedure, are discussed in Section 4.2.2.
Abgrall et al. (2010) [83] measure latency and latency distributions. They compare a single-processor OSSIE system with a single-processor system that, like the OSSIE system, uses omniORB to send the same data between the components, but that does not include the OSSIE Core Framework. The paper concludes that the latencies with and without the OSSIE Core Framework are very similar, and hence that the OSSIE framework does not add significant latency beyond that introduced by omniORB. The paper leaves some uncertainty, however, as insufficient details are provided for the reader to verify that the comparison is done with the exact same set of parameters. Interestingly, the paper proposes closed-form models for the latency distributions and demonstrates how they fit the measured data.
3.3 Related Work to Research Goal 3, DSA Architectures and Algorithms
3.3.1 Work Related to the Dynamic Frequency Broker
3.3.1.1 Timeline
The Dynamic Frequency Broker (DFB) concept, which is presented in Paper C in Part II, belongs to a group of 'spectrum broker' concepts, and is classified as a Centralized Coordination DSA architecture. Paper C was written in 2007 and presented in 2008.
The spectrum broker term in itself is not very new. It can be traced back at least to a mention in an article on spectrum management tools in 1990 [84] and to presentations and publications from the European 'Drive' project in 2000-2002 [85, 86], and a patent application on a particular broker system was filed in 2002 (patent dated 2008) [87].
In ’Drive’, the spectrum broker was used for Dynamic Spectrum Allocation, i.e. to allocate
chunks of spectrum to different radio networks. A frequently cited concept, and possibly
the first to be thoroughly described in an open publication on a detailed conceptual level,
is DIMSUMnet [88] in 2005. This was followed by DSAP [89] and a few others [90, 91]
the same year. In the following years, central contributions have come from follow-up articles involving one or more of the DIMSUMnet authors, providing spectrum measurements
in existing cellular networks to demonstrate the usefulness of broker solutions [92] and
providing approximate algorithms for broker spectrum assignment [93, 94]. Approximate
algorithms have also been suggested by Kovács et al. [95, 96], who split the spectrum assignment problem into a spatial and a temporal part. Contracted by the British regulator Ofcom, Roke Manor Research prepared a thorough description of the Ofcom Candidate Architecture [97], which appeared in 2007.
The spectrum broker is one possible mechanism that may enable an efficient market for spectrum. Accordingly, there are a number of related papers that discuss spectrum auctions; these are skipped in this overview, however, as the topic is not central to Paper C.
The spectrum broker and centralized coordination research area still generates activity, with several contributions also made in 2009. An example is Ge et al. [98], who describe a laboratory prototype broker that gathers spectrum information through cooperative sensing, presents a table of vacant channels to secondary users, and may send alarms to secondary users in case primary users need them to vacate the spectrum.
A current trend is to use the centralized element as a spectrum database rather than a decision maker, and to let the decisions as to which part of the spectrum to use be made in a decentralized way instead of in the centralized element. This is discussed further in Section 4.3.1.4.
3.3.1.2 Three Important Concepts
The most important conceptual contributions in this author's view, and the ones cited in Paper C and/or Paper E, are DSAP [89], DIMSUMnet [88] and the Ofcom Candidate Architecture [97]. These will be briefly described here as background for the DFB. For a description of the DFB, refer to Paper C and the summary in Section 4.3.1.1.
Dynamic Spectrum Access Protocol (DSAP) [89] is a concept where spectrum is managed through a time-limited lease management mechanism. The concept is aimed at brokering in limited geographic areas, at small timescales and on a per-LAN basis. The architectural elements of DSAP [89], see Figure 3.1, are the DSAP server, the DSAP clients and the DSAP relay. The client is defined as any wireless device that uses DSAP for spectrum coordination. The server takes into account current assignments and grants leases. The relay allows multihop contact between clients and server. DSAP contains a description of the messages used between the different architectural elements.
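The time-limited lease mechanism that DSAP shares with the DFB (Paper C) can be reduced to a very small server-side sketch: a lease records its holder and expiry time, and a request is granted only on a channel that is free or whose lease has expired. This is this author's illustrative simplification; it implements none of the actual DSAP messages, RadioMap or policy database.

```python
import time

class SpectrumBroker:
    """Toy centralized broker granting time-bound channel leases,
    in the spirit of the DSAP server / DFB broker (illustrative only)."""

    def __init__(self, channels):
        self.channels = set(channels)
        self.leases = {}                 # channel -> (client, expiry time)

    def request(self, client, duration_s, now=None):
        now = time.time() if now is None else now
        for ch in sorted(self.channels):
            lease = self.leases.get(ch)
            if lease is None or lease[1] <= now:     # free or expired lease
                self.leases[ch] = (client, now + duration_s)
                return ch                            # granted channel
        return None                                  # no spectrum available

broker = SpectrumBroker(channels=[1, 2])
print(broker.request("A", 10, now=0.0))   # -> 1 (channel 1 granted)
print(broker.request("B", 10, now=0.0))   # -> 2 (channel 2 granted)
print(broker.request("C", 10, now=5.0))   # -> None (all channels leased)
print(broker.request("C", 10, now=11.0))  # -> 1 (channel 1 lease expired)
```

A real broker would additionally consult policies and geographic information before granting, as both DSAP and the DFB describe.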
FIGURE 3.1: The architectural elements in DSAP are a DSAP server, DSAP relays and DSAP clients. Reprinted from [89] with permission from IEEE, © IEEE.
The DFB has similarities to DSAP, including the lease-based mechanism. The lower-level broker in DFB is similar to the DSAP server, the DSAP clients correspond to DFB radio nodes, and the DSAP relay to DFB slave stations. A difference between DFB and DSAP is that DSAP is aimed at local brokering with the clients being wireless devices, while DFB is aimed at a full hierarchical computerized replacement of traditional frequency allocation and assignment, including both small-scale and large-scale brokering. Here the clients can be radio devices, aggregations of devices or radio infrastructure entities. DFB additionally includes ideas concerning the use of Web Services for communication and universal discovery of brokers. DSAP, on the other hand, includes a definition of protocol messages, which is not included in DFB.
DIMSUMnet [88] also manages spectrum through a time-limited lease management mechanism. Two modes are described: in the first mode it is the network operators that request spectrum, while in the second mode the end user clients request spectrum for communication with other end users or base stations.
The DIMSUMnet architecture has the architectural elements illustrated in Figure 3.2. The Spectrum Information and Management (SPIM) broker provides spectrum leases, has a complete topographical map and maintains a 'spectrum snapshot' of the spectrum use. The Radio Access Network Manager (RANMAN) controls spectrum leases for several DIMSUM base stations. In mode 1 the clients listen to spectrum snapshot broadcasts from the edge base stations and configure themselves accordingly, in terms of services, network operators, and spectrum and waveform parameters. In mode 2, the clients can additionally request spectrum and optionally may send information about the sensed spectrum to the DIMSUMnet base stations. As an option, DIMSUMnet describes a hierarchy of SPIM servers when there are many cells in a region, and additionally describes redundancy in servers.
FIGURE 3.2: The architectural elements in DIMSUMnet are the SPIM spectrum broker, the RANMAN, DIMSUM base stations and DIMSUM clients. Reprinted from [88] with permission from IEEE, © IEEE.
DFB is different from the DIMSUMnet architecture in that DIMSUMnet in its description is centered around radio clients in access and mesh networks, while DFB does not make any particular assumptions on the type of radio clients. DFB also describes a full hierarchy, Web Services oriented communication and discovery mechanisms. DIMSUMnet, on the other hand, has more architectural elements and mechanisms than DFB and is described in more detail.
The Ofcom DSA Candidate Architecture, see Figure 3.3, has slightly different architectural elements and spectrum management principles compared to DSAP and DIMSUMnet, and also to DFB. Rather than using the broader spectrum lease term, the DSA Service Provider manages call requests. The end users, here termed DSA Capable User Equipment (DCUE), receive price quotes per call through the Price Quote Generator at the DSA Network Provider.
A clear difference to DFB is that the Ofcom DSA Candidate Architecture is focused on calls through networks, with the DSA Network Provider as a central element, whereas DFB aims at all wireless communication, including ad hoc and push-to-talk type communication. The Ofcom DSA Candidate Architecture also has more focus on the business side of spectrum management, including billing and price quotes, areas that DFB does not address. The architecture does not describe a hierarchy of brokers or the use of Web Services for communication and discovery, as does DFB.
FIGURE 3.3: The main architectural elements in the Ofcom DSA Candidate Architecture [97] are the DSA Capable User Equipments (DCUE), the DSA Capable Infrastructure Equipment (DCIEs) at the network provider, agents for billing and pricing of spectrum and the registries at the DSA Service Providers. Reprinted from [97] with permission from Roke Manor Research Ltd.
3.3.2 Related Work, Comparison of Dynamic Spectrum Access Architectures
This section describes work related to the comparison of DSA architectures, presented in Paper E in Part II. While there are many articles that touch on DSA architectures in one
way or another, there are few that make a systematic comparison of autonomous, distributed (decentralized) and centralized architectures. Also, the topic is one where the discussion has not ended; there is still active research in each of the directions.
The review here will center on the most directly related publications, those that directly
compare DSA architectures, prior to and after Paper E was published.
An inspiration for Paper E was the 'Architectures and Implications' article by Tran [99]. This draws the architectural lines from manual static spectrum access through centralized dynamic access to opportunistic dynamic access. The article presents a good overview of vulnerabilities in the opportunistic and the centrally coordinated architectures, also in hostile environments. It does not include distributed coordination as a specific case. The
DSA comparison in Paper E expands on this paper by including the distributed coordination architecture and by also relating the architectural comparison to a model and presenting
simulation results.
Nekovee, in two similar articles [100, 101], discusses the three types of architectures and the associated research challenges, and reviews examples from the research literature. These articles do not provide quantification, however, nor do they go into specifics for hostile environments.
Chen et al. [102] also review the three types of architectures by going through a number
of example concepts from the literature. There are also several DSA surveys that briefly
discuss the three forms of architectures [103–105]. A publication from the European IST
Winner Project [106] has a one-page comparison of centralized and distributed architectures,
but the discussion is limited to the particular spectrum management system proposed in
Winner.
The article whose title is closest to Paper E appeared after Paper E was published (it appeared in IEEE Early Access in 2010, and was accepted for a future issue of IEEE Communications Surveys & Tutorials). It is entitled 'A Comparison Between the Centralized and Distributed Approaches for Spectrum Management' [107]. The focus
is different from Paper E, though, with its main line of comparison being between Dynamic Spectrum Allocation (i.e. the allocation of spectrum chunks to network operators)
and what the authors term Dynamic Spectrum Selection (Opportunistic Spectrum Access /
Dynamic Spectrum Access). It compares distributed (ad hoc) Dynamic Spectrum Selection
and network-assisted Dynamic Spectrum Selection. The paper is comprehensive, however,
and reviews many examples.
Unlike the above publications, Paper E mainly bases the comparison of the architectures
on a link-interference model and provides simulation results that serve to illustrate the differences. Compared to all other papers but the one by Tran [99], Paper E extends the discussion
by also including hostile environments.
3.3.3 Related Work to Algorithms Based on the Iterative Waterfilling (Paper D and Paper E)
The starting point for the work on algorithms and calculation models based on the Iterative
Waterfilling has been the comprehensive and very frequently cited Cognitive Radio article
by Haykin [108], as well as the doctoral dissertation by Yu (2002) [24] and the textbook by
Barry et al. [109].
Haykin’s article is a broad tutorial on a number of aspects of cognitive radio, including
spectrum sensing and interference, channel state-estimation and cooperation and competition. It highlights two types of spectrum decision approaches for cognitive networks, one
being game-theoretic learning exemplified by ’No-regret algorithms’, the other one being
the Iterative Waterfilling approach. The article formulates the achievable user-rates for a
multilink scenario, and introduces the signal-to-noise-ratio gap factor that is also used in
Papers D and E. The Iterative Waterfilling algorithm is explained for a multiuser scenario,
and a small example is illustrated. The performance maximization of each transceiver is
subject to a constraint that an 'interference temperature limit' is not exceeded. The concept
of ’interference temperature limit’ has since become less relevant and it is not included in
Papers D and E. Paper D expands on subtopics in this paper by studying the Iterative Waterfilling solutions relative to centralized optimum ones, by case-studying the concept of a
priori target rates that is briefly mentioned in [108], and by making suggestions about what
to do when the deployments are such that the Iterative Waterfilling does not provide good
solutions for the radio links. Paper E goes deeper by investigating random multilink deployments and policy-based autonomous modifications as well as distributed interaction-based
modifications.
Haykin [108] cites Yu [24] as the source for the ’competitive optimality’, i.e. the Iterative
Waterfilling converged solution. As part of this work, Yu outlines the Iterative Waterfilling
algorithm for an interference channel in a Digital Subscriber Line (DSL) environment. This
work has been the reference for the design of the Iterative Waterfilling part of the Matlab
calculation code for Papers D and E.
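Papers D and E implement the Iterative Waterfilling in Matlab following Yu [24]; the sketch below is an independent Python illustration of the same idea for a flat-crosstalk, N-subchannel interference channel (the gains, noise levels and power budgets are invented for the example). Each link in turn waterfills its power budget against the noise-plus-interference it currently sees, and the rounds are repeated until the allocations settle.

```python
def waterfill(noise, p_total):
    """Single-user waterfilling: maximize sum log2(1 + p_k / noise_k)
    subject to sum p_k = p_total, via bisection on the water level mu."""
    lo, hi = 0.0, max(noise) + p_total
    for _ in range(100):
        mu = (lo + hi) / 2
        used = sum(max(mu - n, 0.0) for n in noise)
        if used > p_total:
            hi = mu
        else:
            lo = mu
    return [max(mu - n, 0.0) for n in noise]

def iterative_waterfilling(h, n0, p_total, iters=50):
    """h[i][j]: crosstalk gain from link j into link i (h[i][i] = 1).
    n0[i][k]: background noise of link i on subchannel k.
    Each link repeatedly waterfills against interference from the others."""
    L, N = len(h), len(n0[0])
    p = [[p_total[i] / N] * N for i in range(L)]     # start from flat power
    for _ in range(iters):
        for i in range(L):
            interf = [n0[i][k] + sum(h[i][j] * p[j][k]
                                     for j in range(L) if j != i)
                      for k in range(N)]
            p[i] = waterfill(interf, p_total[i])
    return p

# Two links, four subchannels, weak symmetric crosstalk (illustrative values)
h = [[1.0, 0.1], [0.1, 1.0]]
n0 = [[1.0, 0.5, 2.0, 1.0], [0.5, 1.0, 1.0, 2.0]]
for row in iterative_waterfilling(h, n0, [4.0, 4.0]):
    print([round(x, 2) for x in row])
```

For weak crosstalk the iteration converges to the competitive (Nash) solution; for strong crosstalk convergence is not guaranteed, which is one motivation for the convergence criteria in [113] and for the modifications studied in Papers D and E.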
According to Google Scholar [28], the Iterative Waterfilling term first appeared in a 2001 work by Yu et al. [110]. However, the basic Waterfilling solution by itself, the well-known optimal power allocation for fixed interference power, is not of new origin: it can be traced back at least to an illustration in a 1964 work by Holsinger [111], which further references a 1961 work by Fano.
In the review of related work, no papers have been found that present the same results
as in Papers D and E, and in particular no paper has been found that presents the distributed
interaction algorithm in Paper E.
There are several related papers, however, that discuss a wide range of aspects of the
Gaussian interference channels and the Iterative Waterfilling, of which the following papers are highlighted here as particularly relevant: Hayashi and Luo [112] discuss for which
crosstalk levels Frequency Division Multiple Access (FDMA) is optimal, relative to sharing
the frequency. Shum et al. [113] provide general convergence criteria for the Iterative Waterfilling. Zhang et al. in a very recent publication [114] provide a broad overview of convex
optimization in Cognitive Radio Networks.
A candidate waveform for employing the Waterfilling principle, or modified versions of it, with a segmented frequency model as in Papers D and E, is Non-Contiguous Orthogonal Frequency Division Multiplexing (NC-OFDM) [115], in which the individual OFDM carriers may be set up with different power levels and bit rates, or turned completely off, as required to accommodate the electromagnetic environment.
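A minimal illustration of such per-carrier loading follows (all numbers are invented; the bit-load formula uses the SNR-gap approximation with the Γ factor as in Papers D and E, but the equal power split over active carriers is a deliberate simplification rather than waterfilling):

```python
import math

def nc_ofdm_load(snr, mask, p_total, gap_db=6.0):
    """Per-carrier loading for NC-OFDM: carriers where mask is False are
    turned off; the power budget is split equally over the active ones and
    the bit load per carrier follows the gap approximation
    b_k = floor(log2(1 + SNR_k / Gamma)).  Illustrative, not from Papers D/E."""
    gamma = 10 ** (gap_db / 10)                  # SNR gap as a linear factor
    active = [k for k, on in enumerate(mask) if on]
    p = p_total / len(active)                    # equal power on active carriers
    bits = [0] * len(snr)
    for k in active:
        bits[k] = int(math.floor(math.log2(1 + p * snr[k] / gamma)))
    return bits

# Eight carriers; carriers 3-5 overlap a primary user and are switched off
snr = [20, 15, 10, 30, 30, 30, 8, 25]            # channel SNR per unit power
mask = [True, True, True, False, False, False, True, True]
print(nc_ofdm_load(snr, mask, p_total=5.0))      # -> [2, 2, 1, 0, 0, 0, 1, 2]
```

Replacing the equal power split with a waterfilling allocation over the active carriers gives the segmented-spectrum behaviour studied in Papers D and E.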
Chapter 4
Results and Implications
4.1 Research Goal 1, Software Defined Radio Challenges
4.1.1 Paper A: Software Defined Radio: Challenges and Opportunities
Tore Ulversøy, ”Software Defined Radio: Challenges and Opportunities,” IEEE Communications Surveys & Tutorials, Vol. 12, No. 4, 2010
4.1.1.1 Summary of Contributions
The scientific contribution of this paper consists of the comprehensive summary of challenges and opportunities, the assessment of the state of SDR, the projections for the way
ahead and the suggestions for improvements.
The following is a walk-through of the sections of the paper, highlighting the main conclusions and providing supplementary comments:
SDR and the Software Communications Architecture
This section in the paper provides a summary of the Software Communications Architecture, and serves as background information for the ’SW Architectural Challenges’ section.
SW Architectural Challenges
This section summarizes architectural challenges, specifically in these areas:
• Portability of SCA-based applications: While SCA has made portability easier
through defining an environment and a protocol for running and controlling SCA-based applications, and by defining their allowed operating-system access, important
challenges still remain, as pointed out in detail in the paper.
• SCA application development: The present state of SCA-based application development and tools are reviewed, concluding that a further improvement of the efficiency
of the development of SCA-based applications is needed.
• Common Object Request Broker Architecture (CORBA)-related challenges: The issues with the use of CORBA as a middleware in SCA are pointed out, along with
recent developments. Though a leaner and even more efficient middleware for SDR
than CORBA is very much desired, a clear path to such a middleware is missing.
• SCA challenges and alternative architectures: Alternative architectures to SCA are
reviewed and an architectural outlook is provided.
Challenges and Opportunities Related to Computational Requirements of SDR
This section reviews the computational requirements of SDR as well as available computational elements. For handheld units, multiple SIMD type processors are seen as a particularly interesting trend. While the increases in processor performance each year will make
it gradually easier to meet the requirements of existing standards, meeting the requirements
of new high-capacity standards will still be challenging for SDR even in the years to come.
Security-Related Challenges
The security related challenges include secure software download and authorization,
SDR high-assurance platforms and portability issues of the security modules. There are
a number of publications on secure download and software authorization. However, selecting the right solutions for a particular platform and ensuring that they resist all possible
threats is still a challenging task in each case. For high-assurance platforms, the ’Multiple
Independent Levels of Security’ (MILS) is a particularly promising approach. Portability
of security related code is difficult between different domains of SDR development because
security architectures and Application Programming Interfaces (APIs) to a large degree are
not published or interchanged between domains. MILS has the potential of also reducing
such obstacles.
On the issue of portability it should additionally be noted that it is the 'security by obscurity' attitude of many governmental institutions, i.e. the unwillingness to publish any security-architecture-related information, including even just the APIs, that makes portability of security-related modules difficult for SDR.
Regulatory and Certification Issues
Regulatory changes in the USA and Europe to accommodate SDR are reviewed, and projections are made for further progress in this area. The need for a Software Communications
Architecture certification authority in Europe is highlighted.
Opportunities Related to Business Models and Military and Commercial Markets
The emerging business opportunities and the possible business restructuring that will
occur due to the separation of product development into waveform software development
and platform development are pointed to. Furthermore, product opportunities in both the
military and civilian domains are reviewed.
4.1.1.2 Topics not Included
Software Defined Radio requires base technology from a number of different areas and may
be viewed from a number of different perspectives. While Paper A covers a wide range of
aspects, considerations about the length of the paper meant that it could not cover every one,
as mentioned earlier. In particular, SDR front-end technology, including Digital-to-Analog (D/A) and Analog-to-Digital (A/D) converters, wide-band radio frequency amplifiers, and up- and down-converters, is not included. Readers interested in this topic are advised to consult [21, 32] or some of the related references listed in Paper A.
Section 2.1 in this thesis also provides some introduction to A/D fundamentals.
Another such topic is the environmental aspect of SDR. SDR has the potential of prolonging the life cycle of radio units, since it enables a waveform software upgrade model rather than a unit replacement model, and from a life cycle point of view this change will be an improvement. The increased power consumption of SDRs, however, opposes the recent trend of 'Green Telecommunications', where the goal is energy-efficient telecommunications.
4.1.1.3 Implications
Paper A is aimed at researchers and development engineers who want an introductory overview of the field of SDR and of the remaining challenges. It is also aimed at all other disciplines and parties taking part in the SDR evolution, including business developers, regulators, security organizations, software and hardware manufacturers, cellular infrastructure, governmental purchasing offices and more. The paper is the only Software Defined Radio tutorial in the IEEE Communications Surveys and Tutorials online journal. Paper A also reflects the candidate's projections and ideas for the way ahead for SDR, in the hope that these will have a positive influence on the evolution of SDR.
4.1.2 Other Work Related to Challenges of Software Defined Radio
During this doctoral work the author has been a member of ”The Regular Task Group on
SDR, RTO-IST-080 RTG-038 Software Defined Radio”. This NATO Research and Technology Organization group has the goals to ’Share knowledge & experience of (multi)national
SDR/SCA developments’ and to ’Investigate portability and interoperability’. The group
has members from a number of European countries, as well as Canada and the USA.
The interoperability and portability work in the group uses Stanag 4285, which is a rather low-complexity High Frequency (HF) modem waveform with a maximum bit rate of 3600 bits/second. Base code written in C was provided by one of the industrial partners in the
group, TELEFUNKEN RACOMS. The low-complexity HF waveform was chosen due to the
availability of code, and because the primary focus was on SCA-related issues of portability
and interoperability, rather than performance.
In connection with work in this group, the author ported one version of the transmitter
base code into an SCA-based application running with the OSSIE Core Framework from
Virginia Tech. This was used together with a receiver part of the same system that had
been converted by a master’s thesis student. Similar exercises were done in Turkey, using
a proprietary Core Framework, and in Germany, using Communications Research Centre
Canada’s SCARI Core Framework. Subsequently, files with baseband and passband data
samples and at various bit rates and interleaver settings were exchanged between the three
nations. All TX-generated files were interpreted successfully at all the different SCA-based
receivers. From this exercise it was concluded that a simple base code, such as Stanag 4285, can be converted to an SCA-based form with a low-to-medium complexity effort, and that the resulting applications readily produced interoperable transmitters and receivers.
As a part of an investigation into the portability of Digital Signal Processor (DSP) deployed components, the author studied and reported on the porting of one of the initially General Purpose Processor (GPP) deployed components (the 'coder') of the Stanag 4285 TX into a DSP-deployed component, using FFI's Spectrum Waveform Development Station [53] platform.
This involved replacing the GPP coder component with a GPP coder proxy and a DSP
coder worker component, with the associated changes to both the SCA-based application
and the functional code.
[Figure 4.1 block diagram: the original Coder component (GPP/CORBA) replaced by a Coder proxy (GPP/CORBA) connected to a coder_worker on the DSP]
FIGURE 4.1: The GPP coder component was replaced by a coder adapter component and a DSP coder worker component, on the Spectrum Waveform Development Station platform.
The conclusions from this work were:
• Porting the code from the GPP implementation to the DSP implementation was little work in itself, but getting the SCA-related changes correct in every detail consumed quite a lot of time.
• The data communication from/to the GPP adapter component to/from the DSP component required programming at a fairly low abstraction level, involving the Spectrum proprietary quicComm API and the definition of specific memory locations. If we subsequently wanted to port this application to a platform with a different API or different processors, this would require reprogramming. As mentioned in Paper A, a standardized API (e.g. the Modem Hardware Abstraction Layer (MHAL)) or CORBA on the DSP would be preferable for such platform-to-platform porting.
• The Software Communications Architecture was of limited benefit in providing portability in this case: it helped by concentrating the platform-dependent code, it provided a means of obtaining a handle to the processor where the target code was deployed, and it provided a defined way to load the target code onto the DSP.
The author has co-authored a paper detailing the activities and conclusions of the Task
Group [33]. The author is also a co-author of the group’s report [34].
4.2 Research Goal 2, SCA Workload Overhead
4.2.1 Paper B: "On Workload in an SCA-based System, with Varying Component and Data Packet Sizes"
Tore Ulversøy and Jon Olavsson Neset, ”On Workload in an SCA-based System, with Varying Component and Data Packet Sizes,” in NATO RTO Symposium on Military Communications, Prague, Apr. 21-22, 2008.
4.2.1.1 Background and Context
The Software Communications Architecture (SCA) allows applications to be built as compositions of components. The software components implement the SCA Resource interface,
and are hence also termed ’Resources’. In the applications, each component is an entity
with defined interfaces to its run-time environment, and each component communicates with
other components through defined ports.
In addition to its functional code, the component realization also implements the interfaces prescribed by SCA, i.e. the Resource interface and the interfaces inherited by Resource. The realization also includes a set of descriptive XML files that are defined by SCA.
For CORBA-enabled processors, the communication between the ports is facilitated through
the CORBA middleware.
This component approach has the advantage that code can be reused in other applications
by the reuse of components. As SCA is a distributed architecture, it also defines a scalable
system where more processors can be added when more processing power is needed.
Disadvantages of this component approach include the workload overhead due to the
CORBA middleware as well as the overhead introduced by separating code that could have
been run in a single process and thread into multiple processes and threads.
Having fine application granularity, i.e. splitting the application into a large number of components, promotes reusability of the components, in that entities that are common to several waveforms, such as an interleaver or a modulator, may be put into separate components.
However, the finer the granularity, the greater the overhead. Paper B investigates these
workload overhead effects when the granularity is such that more than one SCA-based component needs to run on one CORBA-capable GPP.
As reviewed in Chapter 3, the literature prior to Paper B on workload overhead in SCA-based systems was very limited, and there was no research exploring the combination of granularity and intercomponent packet size variation in depth. As such, Paper B clearly fills a gap in the literature.
The investigations were carried out on a Linux system using the OSSIE [36] open source
Core Framework and toolsets from Virginia Tech (the specific version numbers are listed in
Table B.1 in the paper) and the required omniORB [116] Object Request Broker. Prior to
selecting this platform, two other platforms had also been considered: FFI had an available
Waveform Development Station platform from Spectrum Signal Processing. While this had
the advantage of being a heterogeneous environment for applications, having both DSPs
and FPGAs in addition to the GPP, it had several disadvantages: The operating system on
the GPP was Windows XP, which was considered less convenient than Linux for workload
type measurements due to many unpredictable background tasks. The proprietary Core
Framework and toolset software on the platform and the lack of source code availability were also negative factors, as was the fact that it was a single unit shared between many researchers. Another open source Core Framework and toolset, the SCARI Open
[117], was also considered, but its Java implementation made it less attractive for workload
measurements than OSSIE's C++ implementation. OSSIE was, and has continued to be, a popular tool for research on SCA-based SDR, with currently 58 publications (including Paper A, Paper B, and the coauthored paper [33]) listed on its website [118].
In the measurements done as part of the investigations, the application software components (Resources) are, as is the normal and default way on a GPP, instantiated as separate
processes. This was also the default and only way of instantiating components in the OSSIE Core Framework and toolset that was used for the analysis in Paper B, and was consistent with related work, e.g. [75, 76, 79]. The application instantiation is initiated from the OSSIE tool 'alf', which invokes the 'create' method of the SCA ApplicationFactory. This
creates the application instance, allocates capacities, instantiates the Resources through the
execute method of the ExecutableDevice, connects the ports and registers the application, as
illustrated in Figure 4.2.
[Figure 4.2 sequence diagram: ALF requests the profile and calls ApplicationFactory::Create(), which creates the application instance, allocates device capacities, instantiates the components (Resources), connects the Resources, and informs the DomainManager]
FIGURE 4.2: Simplified sequence [42] for the creation of a waveform application in OSSIE.
The workload test applications are run with a programmed packet size and a programmed number of packets per second, the 'packet rate'. The packet rate is set using blocking reads of samples from an audio card, e.g. 40 samples at an audio sample rate of 1600 samples per second to get a packet rate of 40 packets per second.
The empirical analysis used the CPU workload measurement tools OProfile [119] and SYSSTAT sar [120]. Other alternatives would have been manual instrumentation of the code, i.e. insertion of time measurements in the code, or using a profiler tool that automatically inserts instrumentation instructions into the code. OProfile and SYSSTAT sar were preferred to the latter alternatives due to concerns about the instrumentation code possibly adding workload and becoming a source of error, and also because OProfile and SYSSTAT sar are simple tools to use and they monitor all activity on the processor.
In the SYSSTAT sar measurements, it is assumed that the background workload activity from processes other than those related to the running of the SDR application in the SCA environment is negligible. Background activity, with just four idle terminal windows open in Linux, was measured with SYSSTAT sar over 5 periods of 40 seconds to be on average less than 0.1% workload combined for the user and system CPU space, implying that this assumption is good in the average case.
4.2.1.2 Results
The paper provides results concerning the effects of varying the granularity of an SCA-based application on the total workload of a processor. All the components are deployed on a single General Purpose Processor (GPP), simulating a granularity such that many components need to run on the same CORBA-enabled GPP.
The results provided in the paper consist of empirical analysis, the formulation of two
simple analytical models, estimates of parameters of the models and comparison of modeled
versus measured data.
For the empirical analysis, both a ”real” waveform application, the transmitter part of the
HF standard Stanag 4285, and a ”synthetic” application, consisting merely of Finite Impulse
Response (FIR) filters, are used.
For the Stanag 4285 application, versions with 2, 7, and 11 components were compared, as well as a single-process 'C' implementation. The comparison was made both at approximately the original symbol rate of the waveform (2400 symb/sec) and at higher symbol rates, in order to get more significant CPU readings (above approximately 4%) and thus better accuracy of the results. In general, it was observed that the more components in the implementation, the higher the workload for the processor. As an example, when compared at a symbol rate of 25600 symb/sec, it was found that while the 'C' implementation led to a total user+system CPU load of 4.4%, the corresponding CPU load of the 11-component SCA-based implementation was 16.6%, see Figure 4.3.
[Figure 4.3 block diagrams: the non-SCA 'C' Stanag 4285 TX (CPU 4.4%) and three SCA-based versions built from Data Source, FEC Encoder, Interleaver, Symbol Mapper & Scrambler, Symbol to I/Q & TX Filter, Float-to-fixed converter, Forwarder and Data Sink components, with CPU loads of 5.5%, 11.2% and 16.6%]
FIGURE 4.3: User+system CPU % for the non-SCA ('C') 4285 TX version, compared to three SCA-based versions with different granularity. Measured at a symbol rate of 25600 symb/sec.
Further measurements were done with the synthetic waveform applications, using 2, 3, 5, and 11 components, and again compared to a standalone 'C' implementation. Again it was confirmed that splitting the application into more components led to more workload overhead. The size of the overhead relative to the total workload, however, depended on the workload of the useful functional work.
With the synthetic workload, the relation between the CPU workload and the packet size, i.e. the number of floating point numbers sent from component to component through CORBA, was also studied. A clear and significant dependency of the workload overhead on the packet size was found.
Two simple analytical models were formulated for the total workload of the applications. The first is a lower-bound model, which takes into account the workload of the useful functional work in the components and the workload of the communication through CORBA, including the associated data conversions, but which does not take into account any increase in overhead from context switches, i.e. the process of saving and restoring state as the active thread or process changes. The second model additionally takes into account the direct and indirect costs [121] of the context switches.
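The structure of the two models can be sketched schematically as follows. The notation here is illustrative, not necessarily Paper B's exact symbols: N components, M intercomponent connections, r the packet rate, S_j the packet size on connection j, a and b fixed and per-datum communication costs, n_cs the number of context switches per packet, and t_CSD and t_CSI(S) the direct and (cache-related, packet-size-dependent) indirect context-switch costs.

```latex
% Lower-bound model: functional work plus CORBA communication work
W_{\mathrm{lb}} = \sum_{i=1}^{N} W_{\mathrm{func},i}
                + r \sum_{j=1}^{M} \bigl( a + b\,S_j \bigr)

% Second model: lower bound plus direct and indirect context-switch costs
W = W_{\mathrm{lb}} + r\, n_{\mathrm{cs}}
      \bigl( t_{\mathrm{CSD}} + t_{\mathrm{CSI}}(S) \bigr)
```

The second model reduces to the first when the context-switch terms are negligible, which is consistent with the lower-bound model being accurate mainly at small packet sizes.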
Using a test application with only one source and one sink component, the parameters in the lower-bound model were estimated. A comparison with measured data showed, as expected, that the lower-bound model underestimates the actual workload, particularly at large packet sizes. Still, it predicts the major part of the workload, particularly at low packet sizes. The underestimation grows with the packet size. A likely contributor to the underestimation is that the first model does not include the indirect losses due to the required updating of the caches following a context switch. Since the memory footprint of each component contains several buffers of the same size as the packet, these losses logically increase with the packet size.
For the model including context switches, only rough estimates of the parameters were made, and the numerical output in Figure B.10 should therefore only be considered an example and an illustration of the model output. It is expected, however, that this second model will give better estimates than the lower-bound one, given the necessary tools to estimate its parameters accurately.
Table 2 in the paper also illustrates the large startup cost of pushing a minimum-size packet from one component to the other, implying that for workload optimization it is advantageous to avoid frequent intercomponent transmission of small packets.
4.2.1.3 Comments on Measurement Accuracies
In general, the chosen tool for measuring processor workload, SYSSTAT sar [120], provided consistent readings: a standard deviation of between 0.01% and 0.05% of the presented average values was found when evaluating the variation in the different measurement series. At low workload numbers, however, small offsets due to factors such as background activity have a more significant influence than at higher workload numbers. Therefore, in setting the packet rate in the experiments, rates were typically chosen that gave workload values higher than 5-10%, in order to minimize this effect.
The author is aware that the paper could have been more consistent in the presentation
of numerical data, and that in many cases more decimals than are statistically significant are
presented.
As mentioned, the estimates of the parameters in the second model are very coarse, and are only meant to illustrate the model. In particular, tCSD is coarsely estimated in the paper at 5 µs, which overestimates it; tCSD has since been measured, using the procedure and available software from [121], to be 1.6 µs. The exhaustion of the L2 cache (refer to the edge of the black graph in Figure B.10) is also not modeled in detail. Likewise, the value of the fraction 'c' parameter is selected only for the purposes of illustrating the model. The reason for not proceeding with more accurate parameter estimates of this model was a combination of available time and software tool issues, with difficulties arising because some features were not available with the particular operating system kernel that was used.
4.2.1.4 Discussion and Implications of the Results
The results are of relevance for SCA-based SDR where more than one component is deployed on one of the CORBA-enabled processors in the system. While the measurements
and parameter estimates apply to the specific hardware, software and middleware environment investigated in the paper, the trends and conclusions are expected, with high probability, to be valid for other SCA-based platforms as well, since the sources of the observed workload overhead (data copying, CORBA and transport overhead, context switching) will also be present on other SCA-based platforms. As an example supporting this statement, and as mentioned in Section 3.2, similar conclusions regarding component granularity are reached in a later paper [76] using a different Core Framework and toolset (the commercial version of SCARI) and a different ORB (TAO). The results are also relevant for heterogeneous platforms, with multiple types of processing elements, as the CORBA-enabled processors on these will typically run multiple components. Whether these components run performance-critical parts of the application or not will, however, vary with the specific design principles applied. Further insights into heterogeneous platforms, and into parameters that can be optimized on CORBA-enabled processors beyond those examined in Paper B, are provided in the next section.
Based on the results in Paper B, there are several conclusions that can be drawn that have implications for the design of SCA-based applications for the above-mentioned environments:
The results show that in designing an SCA-based component, care should be taken that the functional processing in the component is significant relative to the CORBA-related overhead, so that the overhead does not dominate the CPU workload. In other words, better computational efficiency is achieved by having components with a significant packet-processing workload.
The above results imply that computational efficiency is enhanced by having coarse application granularity, with few components per processor. There is thus a compromise between reusability of the code, which is made easier by having a fine application granularity, and application computing efficiency (or low computing overhead).
Computing efficiency is one of the factors that need to be taken into account when determining the block sizes communicated through the CORBA connections. Table 2 in the paper shows the large startup cost of pushing a small data packet from one component to the other, and illustrates that for computing efficiency it is better to send the same amount of data in a few large packets rather than in many small ones. There are, however, several other considerations to take into account when determining packet sizes. Data throughput considerations point in the same direction, toward larger packets: it has been shown [122] that block size is a major factor influencing throughput, with throughput increasing with block size in an asymptotic way. Latency, in this context defined as the delay from when a packet is sent in one component until it is received in the target component, is another factor that needs to be taken into account, with latency increasing with packet size. Another factor is the frame size of the actual waveform. Further, local waveform requirements may dictate frequent feedback from one component to another, in which case it may be difficult to increase the packet size.
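The opposing trends can be made explicit with an affine per-packet transfer cost, an illustrative simplification rather than a result from [122]: a is the fixed per-packet startup cost, b the per-datum cost, and S the packet size.

```latex
% Per-packet transfer time and the resulting throughput
t(S) = a + b\,S
\qquad
T(S) = \frac{S}{t(S)} = \frac{S}{a + b\,S}
       \;\xrightarrow{\;S \to \infty\;}\; \frac{1}{b}
```

Throughput T(S) thus rises asymptotically toward 1/b as the packet size grows, while the packet latency t(S) grows without bound, which is the trade-off discussed above.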
The two models in Paper B can be used to predict the total workload and workload overheads for a given system with a varying number of components. By estimating the parameters of the two models for the specific SCA-based system, with the specific processor, Core Framework, toolset and ORB being used, workload estimates can be made that provide guidance as to the application granularity.
4.2.2 Beyond Paper B: An Overview of Optimization of Processing Efficiency of SCA-based Software Defined Radios
We have learnt from the results in Paper B that in order to keep workload overheads at satisfactorily low levels, it is beneficial to have a coarse application granularity. Also, intercomponent communication should preferably have low packet rates and instead accumulate
data into larger packet sizes, if allowed by the functional restrictions of the SDR waveform
application.
But how can the workload overheads be reduced, at a given granularity, and when packet
rates cannot be further reduced? In this section, a more holistic view is taken, and a wider
overview of the various ways of optimizing SCA-based SDRs is given.
The overview is divided into considerations for optimizing SCA-based GPP environments, and considerations for SCA-based heterogeneous processing environments, where in the latter case the platform has different types of processors, typically including GPPs, DSPs and FPGAs.
For the SCA-based GPP environment case, some further workload and latency measurements are provided. These have been conducted on exactly the same platform with exactly the same software (RedHat Linux operating system, OSSIE [36] Core Framework and toolset, and omniORB [116]), with exactly the same revisions as in Paper B (refer to the right-hand column of Table B.1 in Paper B for further details).
For the workload measurements, the same measurement tool, SYSSTAT sar, has also been used, as well as the same synthetic applications with FIR-filter processing components, having 2, 3, 5, or 11 components (W2, W3, W5, W11). The measurements are average values over 5 periods of 40 seconds. For the latency measurements, very careful instrumentation of the CORBATSTSRC (a source component sending packets) and CORBATSTSINK (a packet receiver component) components of the CORBATEST application was implemented. The time measurements used a high-resolution timer method, relying on counting the cycles that the CPU has gone through since startup, with source code from [121]. In order to add minimally to the CPU workload, the measurement samples were accumulated in arrays while running, and then dumped to file when the measurement sequence was finished. The time measurements were positioned in the following way: one prior to converting the floating point data to the FloatSequences required by the OSSIE-defined realFloat interface, one before the invocation of pushPacket to send the data to the CORBATSTSINK component, one immediately after, in order to monitor when control is returned to the CORBATSTSRC component, one when the data is received in the CORBATSTSINK, and the final one when the data have been converted back to the native array-of-float type, as illustrated in pseudocode in Figure 4.4.
CORBATSTSRC:
while (1)
{
  blockUsingAudioDevice(); //correct packet rate
  beforeFloat[packetNumber]=measureTime();
  copyFloattoFloatSequence(); //copy to FloatSequence
  beforePush[packetNumber]=measureTime();
  pushPacket(); //remote invocation to transfer data
  return[packetNumber]=measureTime();
}

CORBATSTSINK:
while (1)
{
  getData(); //wait until pushed data available, get it
  afterGet[packetNumber]=measureTime();
  copyFloatSequencetoFloat(); //copy to float
  afterFloat[packetNumber]=measureTime();
}

FIGURE 4.4: Illustration of the CORBATEST test application for latency measurements, with pseudocode showing the placement of the time measurements in the components' processing loops. The packet size used was 5000 floating point numbers.
4.2.2.1 Optimization of SCA-based General Purpose Processor Environments
ORB Selection
Selecting an efficient ORB is a prioritized optimization consideration for an SCA GPP environment. A number of ORBs exist, ranging from free general-purpose ones to highly optimized commercial ones. An example of the latter category is ORBexpress RT [123], which is a Real-Time CORBA compliant ORB, implying that it has Real-Time CORBA quality-of-service control features. In the present work, omniORB has been used, due to it being required by OSSIE. On its website, omniORB is promoted as a free high-performance ORB [116].
ORBs are tunable in that they have a number of configuration parameters, such that the ORB may be further optimized by tweaking these parameters. In omniORB, the configuration parameters are set in the file omniORB.cfg. In Paper B, the file is used with the default settings as installed with OSSIE.
The Contribution from the SCA Core Framework
According to [83], referring to OSSIE and to latency measurements, the contribution from the SCA Core Framework itself, relative to the contribution from CORBA, is small. This is natural, as the Core Framework is basically a 'Deployment and Configuration engine' [124], implying that when all the components are deployed and running, it has a memory footprint overhead but is otherwise idle. However, again referring to OSSIE, there is a contribution to latency and workload overhead from the OSSIE-defined StandardInterfaces, in that an additional data copy and thread switch are made relative to a direct CORBA invocation [81]. These effects are also evident in Paper B. Hence, optimizing the way one uses and interfaces with CORBA may have some limited potential for reducing memory, latency and workload overhead.
Optimization of Data Converting and Formatting Operations
A significant contributor to workload overhead and latency is the set of data-converting and copying operations, all the way from the native data representation, through the CORBA Common Data Representation format, to the native format in the receiving component. These are simple copying and formatting operations that put the data into different representations; when processed serially on a processor, they are time consuming. Obviously, though, there is a lot of parallelism in these operations (the same instruction may be applied to many data items in parallel). In this respect, it is very interesting to note some results from subsets of ORB functionality having been implemented at native gate level on FPGAs. Such implementations make it possible to create very parallel implementations of data formatting operations. When sending packets between such FPGA ORBs, on very high throughput and low latency transports, impressively low latencies have been achieved, on the order of a few hundred nanoseconds1 [125]. A comparison of packet transmission between two GPP ORBs and between an FPGA ORB and a GPP ORB [126] points in the same direction, with the FPGA ORB being faster than the GPP one. This illustrates the potential effectiveness of a parallel implementation of the formatting and decoding of the CORBA packets to/from the transport. If, in the same way as on these FPGA ORBs, a dedicated CORBA packet formatter and deformatter could be an integrated hardware part
1
The specific packet size was not specified in this reference.
of specific GPPs, this would have the potential of significantly lowering latency and processor workload overhead. This would come, however, at the expense of having to manufacture specialized CORBA processors (the integration of a GPP and CORBA accelerator circuitry), which would have a smaller niche market than fully general processors.
Transport
Another important performance factor is the underlying transport. While the default transport typically is TCP/IP, CORBA may be configured to use other transports, e.g. faster shared-memory ones, or circuit-switched rather than packet-switched ones. Using a faster transport reduces the transport-induced part of the latency.
Optimizing the choice of transport is also important in the specific case discussed in Paper B, i.e. when the granularity is so high that several SCA-based components run on the same processor. Here, one may take advantage of intra-processor transports, such as a Unix domain socket transport. It has been shown in [81] that for a single-processor system using OSSIE and omniORB, latencies may be significantly reduced by switching to a Unix transport rather than using the default TCP/IP.
These results have been confirmed in the present work, using the instrumented CORBATEST application illustrated in Figure 4.4 with a packet size of 5000 floats. Since the measurements are taken before the full packet is sent and after the full packet has been received, the latencies are packet latencies, representing the duration between the first bit sent and the last one received (rather than a bit-oriented latency, which would be between the first bit sent and the first bit received).
Figure 4.5 shows a histogram (101 measurements, 10 microsecond bins) of the packet
latency between before pushPacket and after getData (i.e. not including the conversion
from/to native floating point data), for the default TCP/IP transport and a Unix transport,
respectively. The outlier point is the first packet that comes through, and that takes considerable more time than the next ones, probably due to ORB activity carried out when handling
the first message. It is observed that the latency is considerably lower when using the Unix
transport. It is worth noting that the change of transport is done by only changing three lines
of parameter settings in the omniORB.cfg file, no other changes are needed!
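As an illustration of the kind of settings involved, a sketch of such an omniORB.cfg fragment is given below, based on omniORB's endPoint and transport-rule options; the socket path is an arbitrary example, and the exact three lines used in the experiment are not restated here.

```ini
# Listen on a Unix domain socket (in addition to, or instead of, TCP)
endPoint = giop:unix:/tmp/omniorb.sock
# Prefer the unix transport for local communication, falling back to tcp
clientTransportRule = localhost unix,tcp
serverTransportRule = localhost unix,tcp
```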
Figure 4.6 in the same way shows the total latency from native float data to native float
data in SINK, again for TCP/IP and Unix transport. It is observed that the absolute difference
in latency is about the same, but that the relative difference is smaller, since the conversion to and from native float data is the same in both the TCP/IP and Unix cases.
It is to be expected that the improvement from using the Unix transport rather than
TCP/IP will be visible also on SYSSTAT sar workload measurements. To verify this,
the workload measurements from the left part of Figure B.6 in Paper B were repeated,
first with the original omniORB settings with the default TCP/IP, then using the Unix
transport. The results are shown in Figure 4.7. It is seen that a small workload overhead
improvement is achieved from using the Unix transport, and that the dominant part of the
improvement is in the system space. This is natural, since both the TCP/IP and the Unix socket processing take place inside the Linux kernel.
Twoway versus Oneway
Another variable is that CORBA defines different levels of quality-of-service in terms of
the reliability of the CORBA remote invocations. The default type of invocation is the two-
FIGURE 4.5: Packet latency from prior to pushPacket in the source component to after getData in the sink component, for two different transports, TCP/IP and Unix. Packet size 5000 floating point numbers. (Measured averages: TCP 201 µs, Unix 107 µs.)
FIGURE 4.6: Packet latency from prior to copying float data to FloatSequences to after all the data is in floating point data representation in the sink component, for two different transports, TCP/IP and Unix. Packet size 5000 floating point numbers. (Measured averages: TCP 269 µs, Unix 174 µs.)
way one, where the remote invocation blocks until it completes and the client gets a reply
back. The reply optionally may return data in out or inout parameters, but it is worth noting
that the behavior is a blocking one even if the invocation is not to return data. The advantage
with two-way invocations is reliability; the client will have information about whether the
invocation was successful or not. The disadvantages with two-way are (1) that the client
does not resume control before the signalling from the remote object is received, and (2)
that the return-signalling consumes additional overhead in terms of transport bandwidth and
processing. OSSIE, in its ’StandardInterfaces’ of which the ’realFloat’ interface is used in
Paper B, uses two-way invocations.
One-way invocations are ’ORB best effort’, also referred to as ’at most once’, invocations. Here, the client performs the remote invocation (in our case pushes out the packet of
FIGURE 4.7: Workload comparison, Unix and TCP transports. CPU workload (user and system) versus configuration, for FUNC and the W2, W3, W5 and W11 applications with N=10, B=2000, PR=40.
processed data) and resumes control directly afterwards².
In a real geographically distributed environment with varying quality-of-service in the
communication network, reliable invocation mechanisms are a must in many applications.
In a stable environment inside the chassis of a Software Defined Radio, it is not so obvious
that two-way invocations are needed.
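The client-side difference can be sketched in a few lines of Python (a loose analogy, not CORBA itself; the function names are invented for this sketch): a twoway call blocks the caller for the full round trip, while a oneway call merely hands the request off and returns.

```python
import time
from concurrent.futures import ThreadPoolExecutor

def servant_push_packet(packet):
    """Stand-in for marshalling, transport and servant-side processing."""
    time.sleep(0.001)          # pretend the remote work takes about 1 ms
    return len(packet)

_dispatcher = ThreadPoolExecutor(max_workers=1)

def push_twoway(packet):
    # Blocks until the invocation completes, like a default CORBA twoway
    # call, even though pushPacket returns no data.
    return servant_push_packet(packet)

def push_oneway(packet):
    # Hands the request off and returns immediately ('at most once'
    # semantics); the caller gets no confirmation of success.
    _dispatcher.submit(servant_push_packet, packet)

packet = [0.5] * 5000
t0 = time.perf_counter(); push_twoway(packet); t_two = time.perf_counter() - t0
t0 = time.perf_counter(); push_oneway(packet); t_one = time.perf_counter() - t0
print(f"twoway: {t_two*1e6:.0f} µs, oneway return of control: {t_one*1e6:.0f} µs")
```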
In order to see whether oneway versus twoway invocations have significant implications for the latency, the time of return of control to the client (the ’sender’ component), and the workload overhead, another experiment was performed, again using the same environment and application components as in Paper B. Here, the pushPacket method of the realFloat interface was modified into a oneway one by simply adding the oneway keyword to the relevant CORBA Interface Definition Language (IDL) file, i.e. from
void pushPacket(in PortTypes::FloatSequence I);
to
oneway void pushPacket(in PortTypes::FloatSequence I);
then remaking the ’StandardInterfaces’ library as well as the CORBATEST application
and the W2, W3, W5, and W11 applications.
The CORBATEST application was run, in the same way as for twoway invocations, to
measure latency of 101 packets of 5000 floats, as well as the time of return of control to the
² The latest CORBA specifications additionally allow defining quality-of-service for the one-way more specifically, but omniORB does not have this feature and it is therefore not described here.
CORBATSTSRC component, for both the TCP and Unix transport. The measurements are
presented in Figures 4.8 and 4.9, and compared to the twoway ones. Here, time 0 is the time
before the packet was converted from the original type float data in the CORBATSTSRC
component. Twoway is shown on the positive vertical scale, and oneway on the negative
one, for ease of comparison. The first packet is not included in these figures.
FIGURE 4.8: Histogram of packet latency for oneway and twoway invocations, TCP/IP transport. The time sample just prior to converting from the float data type in the SRC component is taken as time 0 for each packet transmission. The oneway measurements are shown on the negative vertical scale. The packet size is 5000 floating point numbers. Measurements of 100 of the 101 packets are shown (the measurements of the first packet are skipped due to the outlier effect). Bin size 2 µs.
FIGURE 4.9: Histogram of packet latency for oneway and twoway invocations, Unix transport.
The figures confirm the earlier return of control in the oneway cases. For TCP, an average
improvement of the float → float latency from 269 µsec to 240 µsec is observed. For the
Unix transport, the measured float → float latency is actually worsened by 6 µsec to 180
µsec, possibly influenced by the additional context switches due to the early return of control
to the CORBATSTSRC component. The total time to the latest measurement point in each
packet transmission is higher in the twoway case, though.
SYSSTAT sar workload measurements, again comparable to the left part of Figure
B.6 in Paper B, are provided in Figure 4.10. The measurements show a small but evident
workload reduction by using oneway invocations, and as expected the reduction is larger for
the higher-granularity application versions.
FIGURE 4.10: Workload comparison, oneway and twoway invocations, TCP and Unix transports, for FUNC and the W2, W3, W5 and W11 applications with N=10. Block size 2000 floating point numbers, PR=40.
Forcing Same-Address Space
With advanced SCA modeling tools, another way of optimizing performance for the particular case discussed in Paper B is forcing several SCA Resources into the same address space [74], and using an ORB that is able to take advantage of this by substituting remote
invocations with local calls. According to the SCARI team of authors in a 2009 publication [74], ”with proper modeling tools, an SCA ResourceFactory can be automatically generated to allow the instantiation of pre-existing SCA Resource into a single address space”,
and the authors have demonstrated the latency benefits of this approach. Such a ResourceFactory, in their implementation, is compiled and linked together with all of the SCA Resources that it is to instantiate, into a single executable [124]. The ResourceFactory is then
actually the entity which is instantiated by the SCA ApplicationFactory (through a call to the
Execute method of the ExecutableDevice), and the ResourceFactory then creates servant instances for the Resources as threads in the ResourceFactory’s address space. This contrasts
with the normal case where each SCA Resource is directly instantiated as a process.
The possibility of generating and deploying such a ResourceFactory was not present
in OSSIE. In order to perform an equivalent test of such a deployment without having the
necessary tool support, the CORBATEST and W11 applications were modified into having
same-space instantiations of all their components. Using W11 as an example, the procedure
was as follows:
• The first component in W11, SRC, was modified to include all the source code of
all the components, including both the SCA administrative code and the processing
thread code.
• The ’main’ file, that originally was set up to create just the SRC CORBA servant,
was carefully modified to setting up all the SRC, F1, F2 etc. CORBA servants, all as
separate threads.
• The ’main’ file was set up to use the original CORBA IDs and labels, such that the
XML files for the components and for the port connections could be reused exactly as
they were.
• All the binary files, except the SRC one, were replaced by Linux batch files that merely
echoed their name to the terminal window (just as a check to see that they had been
executed), then terminated.
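The thrust of this procedure can be sketched as follows; this is an illustrative Python analogy (the OSSIE components are C++ processes), in which a factory-style ’main’ creates all the component servants as threads in one address space and connects them by in-process queues instead of remote invocations:

```python
import queue
import threading

def component(work, inbox, outbox):
    """A stand-in for one SCA Resource's processing thread (servant)."""
    while True:
        packet = inbox.get()
        outbox.put(work(packet))

def factory_main(stages):
    """Create all component servants as threads in a single address space,
    connected by in-process queues (a ResourceFactory-style deployment)."""
    queues = [queue.Queue() for _ in range(len(stages) + 1)]
    for i, work in enumerate(stages):
        threading.Thread(target=component,
                         args=(work, queues[i], queues[i + 1]),
                         daemon=True).start()
    return queues[0], queues[-1]

# A toy three-stage pipeline standing in for the SRC -> F1 -> F2 -> SINK flow.
head, tail = factory_main([
    lambda p: [x * 2.0 for x in p],   # 'F1': scale
    lambda p: [x + 1.0 for x in p],   # 'F2': offset
    lambda p: [x * x for x in p],     # 'F3': square
])
head.put([0.5] * 4)
result = tail.get()
print(result)
```

Because all stages share one process, the per-packet handoff is an in-memory queue operation rather than a socket write, which is the effect the same-address-space deployment exploits.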
In this way, the whole original application, including all of the CORBA related code,
could run inside the process space of the SRC component. For the OSSIE ’alf’ tool and the
ApplicationFactory, however, the application appeared exactly as in the multiprocess case.
Figure 4.11 shows a histogram of the latency of this same-address-space case compared
to TCP oneway. The latency is seen to be much lower in the same space case, with average
measurements of float → float of 105 versus 240 µsec, and pushPacket → getData of 37
versus 172 µsec.
The SYSSTAT sar workload measurements (Figure 4.12) also show a very significant improvement over the cases where the SCA-based Resources are instantiated directly as separate processes. The workload is still seen to be higher than the reference
’C’-implementation, though. The same space case allows higher granularity for the same
overhead, e.g. the workload of the 11-component same space case is seen to be approximately equal to a 5-component implementation using Unix oneway.
The ResourceFactory solution appears very attractive for resource-constrained GPPs,
particularly if there is tool support such that the ResourceFactory can be automatically generated.
A major disadvantage with this single-address-space approach, however, is that some of the component isolation properties of the SCA Resources are implicitly lost, in that one
Resource may accidentally access the memory space of others. Another side effect is that
the invocations, being converted by the ORB to local calls (i.e. normal function calls) are
automatically twoway ones (even if oneway is specified). Thus, on the one hand the local
calls have low latency; on the other hand, they have the normal function-call blocking behavior that forces the calling object to wait [124], which, according to one of the authors of the above-cited paper, in some cases may have a slowing effect on execution [124]. This
was not observed as a problem in the OSSIE system measurements here, though.
FIGURE 4.11: Packet latency, TCP oneway and same address space. Packet size 5000 floating point numbers. Bin size 2 µs.
FIGURE 4.12: Workload comparison, same-address-space instantiated W11 compared to normal instantiation W11 using combinations of oneway/twoway/Unix/TCP. In the same figure, W5 Unix oneway and the single-thread reference, FUNC. B=2000, PR=40.
4.2.2.2 SCA-based Heterogeneous Processing Environments
CORBA for the Control Path
By far the most common way of avoiding the negative effects of CORBA in SCA on the workload, latency and throughput of the SDR application is using CORBA mostly for the control path, where the data volume is smaller and the timing is less critical.
The most computing-intensive signal processing components in this case have ’worker’
implementations that typically are deployed on DSPs or FPGAs. The ’workers’ have
SCA-component proxy implementations that reside on CORBA-enabled GPPs [127]. The
signal processing data exchange typically in this approach is through high-speed transports,
such as RapidIO [127], with standardized or platform-defined APIs for the communication
calls. Spectrum Signal Processing’s platforms are typically designed with this intercomponent communication approach in mind, and with Spectrum’s own quickComm API being
used for the non-SCA communication calls. This type of solution represents a practical
compromise that reduces the negative effects of CORBA and to a large extent concentrates
the platform-dependent application code. It still represents a less standardized approach,
with several communication API standards around and more issues with portability. It is
also a design pattern that complicates the application design relative to one consisting of
components for CORBA-enabled processors only.
Offloading the General Purpose Processor
Another proxy design pattern is that of keeping both the control and data communication
between components on CORBA-enabled GPPs, but ’offloading’ the GPP by running the
most processing-intensive functionality on specialized non-CORBA-enabled processors,
such as DSPs or FPGAs. Just as described for the approach above, and as described for the
experiment in Section 4.1.2, proxy components on the GPP are responsible for the ’worker’
component implementations on the special processors. A difference from the approach
above, is that the proxy component now receives its input data and sends out the processed
data through CORBA. The data is communicated to/from the worker through platform-specific transports and APIs (for the Waveform Development Station platform: a PCI bus
and the quickComm API). A positive aspect of this approach is that the design model
very closely resembles one where all the components reside on fully CORBA-enabled
processors. A further positive aspect is that with the processing residing in the special
processors, the footprints of the GPP components become small, reducing the indirect
costs of context switches. A negative aspect is that the through-CORBA communication
becomes critical to the signal processing path, such that the factors previously discussed for
optimizing CORBA on GPP environments now become very important again. The timing
of the data relaying to/from the special processor also becomes critical. In the experiment in
Section 4.1.2, this was accomplished successfully, and the communication to the worker, the
processing in the worker, and the data return, were all completed in time for the application
to maintain its processing speed.
All-CORBA
In recent years, lean ORBs for DSPs, and as mentioned even native gate-level ORBs for
FPGAs have become available, making it possible to extend CORBA in SCA onto these
special processors. An approach where CORBA communication is also used on the special
processors, is referred to as an ”all-CORBA” approach. The obvious advantage is consistent
communication between all the components in a waveform application. It also promotes
portability as platform-dependent proxy code for special processor communication is eliminated. Also, as mentioned previously, good performance with remarkably low latencies has
been reported for FPGA ORBs.
The disadvantages with the ”all-CORBA” approach include that CORBA still creates
workload overhead and latency compared to not using CORBA, particularly for DSPs and GPPs.
Another disadvantage is that it is not a full-feature CORBA that is present on the DSPs and FPGAs; only selected features are included. CORBA also creates significant memory
consumption overhead in the DSPs, and adds real estate overhead in the FPGAs.
4.3 Research Goal 3, Dynamic Spectrum Access Architectures and Algorithms

4.3.1 Dynamic Spectrum Access Architectures
The architectural part of research goal 3 is treated mainly in Papers C and E. Paper C is a
proposal for a particular broker architecture concept, while Paper E provides a comparison
of DSA architectural concepts.
4.3.1.1 Paper C: Dynamic Frequency Broker and Cognitive Radio
T. Maseng and T. Ulversøy, ”Dynamic Frequency Broker and Cognitive Radio,” in The IET
Seminar on Cognitive Radio and Software Defined Radios: Technologies and Techniques,
The IET, Savoy Place, London, Sept. 18, 2008.
Background
Paper C outlines a Centralized Coordination DSA concept. Centralized Coordination
has the advantage that the use of the spectrum is managed through an infrastructure, such
that the interests of all spectrum users, including users in the form of silent receivers and
other ”hidden” nodes, may be taken care of according to predefined priorities and spectrum
decision algorithms. The paper is a conceptual proposal only; it does not include simulations
or prototyping.
Summary of Contributions
Paper C points to the issues associated with traditional frequency allocation and assignment, and also to the problems associated with using Cognitive Radio as a dynamic alternative to increase spectrum utilization. A particular Centralized Coordination concept, the
Dynamic Frequency Broker (DFB), is suggested. The DFB is a regional frequency coordination authority, which keeps a complete list of frequency assignments within an area and
maintains an updated terrain propagation path loss model of its area. The regional DFBs are
organized in a hierarchy, with the national frequency authorities or even higher-level authorities at the top. The DFB assigns time-limited spectrum leases to requesting radio nodes. A radio
node in this context may be an individual radio transceiver, a base station or any aggregation
of transceivers that negotiate as a group with the broker. The communication between radio
nodes and the broker, and the broker discovery mechanisms, are based on Web Services.
It is assumed that the Dynamic Frequency Brokers are connected to the Internet, and that
radio nodes that do not have a fixed connection, connect into the Internet through dedicated
control channels, or get their leases through proxies. Challenges related to a DFB solution,
such as security challenges, are also pointed to.
Figure 4.13 illustrates a typical scenario showing the lower level DFB that is responsible
for a geographical region, and including an Internet-attached DFB slave station that aids in
FIGURE 4.13: A DFB scenario illustrating different radio nodes and different ways of communicating with the DFB: fixed-line Internet access continuously, Internet access via radio service channels, or prior service negotiation via the Internet.
monitoring the spectrum as well as in providing a wireless gateway for control communication. The figure illustrates different types of radio nodes that receive spectrum leases from
the DFB, ranging from cellular base stations to ad hoc hunting team radio equipment and
wireless microphones. It also illustrates the various ways of communicating with the DFB,
ranging from online wideband Internet connections, through wireless dedicated channels into this Internet connection, to software proxies running on Internet-connected computers.
FIGURE 4.14: Illustration of a hierarchy of coordinating DFBs.
FIGURE 4.15: The use of Web Services in DFB. The DFB is implemented as a Web Service, with XML-based messages over SOAP and HTTP. The nearest DFB is found in UDDI or a catalogue of DFBs, and the Web Service interface is downloaded as a WSDL file.
Figure 4.14 illustrates the hierarchy of coordinating brokers in DFB, where the DFBs
at the lower level each have responsibility for the radio nodes in a particular geographical
region. Spectrum decisions are coordinated with neighbor DFBs. Spectrum policies migrate
from the upper levels, e.g. national levels, to the lower level DFBs. Spectrum conflicts are
pushed upwards for resolution.
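The division of labor in the hierarchy can be sketched as follows; this is a hypothetical Python model whose class and method names are invented for illustration, not defined in Paper C:

```python
class DFB:
    """A hypothetical model of one broker in the DFB hierarchy."""
    def __init__(self, name, parent=None):
        self.name, self.parent = name, parent
        self.policies = []
        self.children = []
        if parent:
            parent.children.append(self)

    def push_policy(self, policy):
        # Policies migrate from the upper levels down to the regional DFBs.
        self.policies.append(policy)
        for child in self.children:
            child.push_policy(policy)

    def resolve_conflict(self, conflict, can_resolve):
        # A regional DFB resolves what it can; unresolved conflicts are
        # pushed upwards in the hierarchy.
        if can_resolve(self, conflict):
            return self.name
        if self.parent is None:
            raise RuntimeError(f"unresolved at top level: {conflict}")
        return self.parent.resolve_conflict(conflict, can_resolve)

national = DFB("national")
region_a = DFB("region-A", parent=national)
national.push_policy("max lease time 24 h")

# A cross-region conflict is beyond region-A's authority, so it escalates.
resolver = region_a.resolve_conflict(
    "lease overlap with region-B",
    can_resolve=lambda dfb, c: dfb.parent is None)
print(resolver)
```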
Figure 4.15 illustrates the use of Web Services in DFB. The information about a DFB’s
service is stored in a discovery registry or a catalogue of DFBs. This allows a radio node
to easily discover its nearest DFB, and allows the correct Web Services interface to be retrieved as a Web Services Description Language (WSDL) file. The communication between
nodes and DFBs is proposed to use SOAP (originally an acronym for Simple Object Access
Protocol) messages, using XML.
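As a purely hypothetical illustration (the element names and the namespace below are invented for this sketch; Paper C does not standardize a message schema), a spectrum lease request wrapped in a SOAP 1.1 envelope could be built with Python's standard library as follows:

```python
import xml.etree.ElementTree as ET

SOAP_NS = "http://schemas.xmlsoap.org/soap/envelope/"  # SOAP 1.1 envelope
DFB_NS = "urn:example:dfb"  # hypothetical namespace, for this sketch only

def build_lease_request(node_id, f_low_mhz, f_high_mhz, hours):
    """Build a hypothetical SOAP spectrum-lease request as an XML string."""
    env = ET.Element(f"{{{SOAP_NS}}}Envelope")
    body = ET.SubElement(env, f"{{{SOAP_NS}}}Body")
    req = ET.SubElement(body, f"{{{DFB_NS}}}SpectrumLeaseRequest")
    ET.SubElement(req, f"{{{DFB_NS}}}RadioNodeId").text = node_id
    ET.SubElement(req, f"{{{DFB_NS}}}FrequencyLowMHz").text = str(f_low_mhz)
    ET.SubElement(req, f"{{{DFB_NS}}}FrequencyHighMHz").text = str(f_high_mhz)
    ET.SubElement(req, f"{{{DFB_NS}}}LeaseHours").text = str(hours)
    return ET.tostring(env, encoding="unicode")

msg = build_lease_request("node-42", 430.0, 431.0, 24)
print(msg)
```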
In Section 3.3.1.2, the three other most important conceptual spectrum broker contributions were reviewed. Compared to one of these, DIMSUMnet [88], the DFB defines a
full hierarchy of brokers for the management of all radio nodes, as well as the use of Web
Services for convenient communication and discovery of brokers. Seen in relation to the
Ofcom DSA Candidate Architecture [97], in addition to the same differences as for DIMSUMnet, the DFB is node or network lease oriented rather than call oriented. Compared
to DSAP [89], in addition to the same points as for DIMSUMnet, the DFB concept has an
ambition to manage all the use of spectrum, rather than brokering on a more local level as
does DSAP.
4.3.1.2 Paper E: A Comparison of Centralized, Peer-to-Peer and Autonomous Dynamic Spectrum Access in a Tactical Scenario
Tore Ulversøy, Torleiv Maseng, Toan Hoang and Jørn Kårstad, ”A Comparison of Centralized, Peer-to-Peer and Autonomous Dynamic Spectrum Access in a Tactical Scenario”, MILCOM 2009, Boston, October 18-21, 2009.
Note: This section reviews the DSA architecture comparison part of Paper E. See also
Section 4.3.2.
Introduction and Assumptions
The centralized, distributed and autonomous DSA architectures all have their advantages
and disadvantages. This is the case for commercial wireless communication, and even more
so for military communication where there are even more factors that need to be taken
into account. A goal of Paper E is to provide a comparison of these architectures. For
the comparison of decisions, complexity and traffic, an n-link interference model has been
used. It is further assumed that the available radio spectrum may be divided into a number
of spectrum segments over which the interference and noise are assumed to have constant
power spectrum density. Furthermore, it is assumed that link bit rates in each segment can be
calculated as the information capacity based on a modified signal-to-noise-and-interference
ratio in the segment.
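The rate model can be written out concretely. The sketch below computes a per-segment link bit rate as a capacity-like expression B·log2(1 + SINR), treating interference as noise; the efficiency factor standing in for the "modified" ratio is an assumption of this sketch, since Paper E's exact modification is not restated here.

```python
import math

def segment_rate_bps(bandwidth_hz, signal_w, interference_w, noise_w,
                     efficiency=1.0):
    """Bit rate in one spectrum segment, treating interference as noise."""
    sinr = signal_w / (noise_w + interference_w)
    return efficiency * bandwidth_hz * math.log2(1.0 + sinr)

def link_rate_bps(segments):
    """Total link rate: the sum of the rates over the assigned segments."""
    return sum(segment_rate_bps(*seg) for seg in segments)

# Two segments of 1 MHz each; the second sees interference from another link.
rate = link_rate_bps([
    (1e6, 1e-9, 0.0, 1e-9),    # SINR = 1   -> 1.0 Mbit/s
    (1e6, 1e-9, 1e-9, 1e-9),   # SINR = 0.5 -> ~0.585 Mbit/s
])
print(f"{rate/1e6:.3f} Mbit/s")
```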
FIGURE 4.16: A simplified illustration of the link-interference model, here with only two radio links. Each of the receivers, in addition to the useful signal from the link transmitter, also sees background noise and interference from the other link. The interference is treated as noise.
For the simulation of the Peer-to-Peer interaction algorithms, ideal zero-loss and zero-delay
information interchange between the links is assumed. In other words, the simulation model
does not include transmission delays due to the coordination messages, and also does not include any effects of capacity limitations or transmission errors of the coordination channels.
The comparison in hostile environments is generic and does not make assumptions as to
a specific model.
Summary of Contributions
This paper compares centralized, distributed and autonomous architecture in terms of
spectrum decisions, computational complexity in the radio nodes, the need for spectrum
coordination traffic between nodes as well as in terms of the special considerations that
apply in hostile environments.
For the spectrum decisions comparison, three different autonomous policies are analyzed
and the number of operational links above a minimum bit rate and the resulting average bit
rate per link are plotted. These are compared to two different distributed algorithms that
were developed and simulated, and it is shown that by allowing administrative communication between the links in this way, higher average bit rates per link can be achieved, under a requirement of 100% operational links, than in the autonomous policy-governed cases. For
the centralized architecture, theoretically optimum spectrum decisions can be found (as referred to the model), and hence better (or equal) decisions than in the distributed and
autonomous architectures can be made. Due to the very high computational complexity,
however, finding these optimum decisions becomes impossible as the number of links increases.
As to the comparison of coordination traffic, the autonomous architecture by definition manages without coordination traffic. The centralized model, on the other hand, when doing exact computations, requires new spectrum assignments to be communicated to
a number of links whenever a change is made or a link added, which means the traffic
demand will be prohibitively high if there are frequent changes. To reduce such traffic in the
centralized architecture, spectrum decisions need to be suboptimal such that the spectrum
assignment of only one or a few links are changed at a time.
For one of the proposed distributed algorithms, and in the simulated scenario, it is shown
that the coordination traffic decays towards zero. Such a convergence towards zero
coordination traffic of course assumes that the node density relative to the available spectrum
allows all nodes to eventually reach at least their minimum data rate.
As to the comparison of computational complexity, the autonomous links make their
own spectrum decisions based on their sensed spectrum readings, and hence this provides a
solution that scales fully with the number of links. In the distributed case, for the described
distributed algorithms, each node additionally will need to process the arriving messages
from other nodes.
When compared in a hostile environment, coordinated approaches will have an advantage in their ability to do collaborative surveillance, defensive and offensive actions. The
coordination channels represent a vulnerability, though, and proper protection measures are
needed. It is beneficial if DSA systems have graceful degradation, such that they can perform as autonomous nodes if the coordination channels are no longer available.
Relative to related work making comparisons of DSA architectures (see Section 3.3.2), Paper E bases the comparison on the link-interference model, makes the comparison
using the decision, traffic and complexity categories, and extends the discussion, compared
to most papers, by including hostile environments.
Discussion
The comparison of the architectures mainly uses the simplex interference link model,
and for reasons of time and space the paper does not go into comparisons for other types of network models. For the same reasons, the paper goes into more detail on
spectrum decisions rather than on node-to-node communication, computational complexity,
or DSA considerations for a tactical environment. The work may naturally be extended
along these lines.
The conclusions for the autonomous and distributed cases are based on computer simulations in Matlab, with the model and idealized assumptions as discussed earlier. The
validity of this simulation model and the validity of the idealized assumptions are discussed
in Section 4.3.2, along with the general discussion on the algorithms developed in Paper E.
4.3.1.3 Additional Dynamic Spectrum Access Work: The Peer-to-Peer Spectrum Broker
Introduction
From the discussion of DFB and also that in Paper E, it is clear that a disadvantage
of centralized approaches is the computational complexity of optimum or near-optimum
spectrum decisions. For a system attempting to manage large parts of the spectrum with
national or global coverage, a lot of infrastructure, such as spectrum servers, is required.
At the same time, since the coming generations of wireless equipment will increasingly
use SDR technology, the equipment itself will have computational power that it is tempting
to use for the spectrum decision operations. Wireless equipment will increasingly also be
connected to the Internet, which means that IP communication may be used as a standard
for spectrum decision administrative messaging.
In recent years, Peer-to-Peer (P2P) distributed systems have become increasingly popular, and are well known for distributed data storage and file sharing purposes. In P2P systems,
the nodes in the system are equal peers instead of being clients or servers. In the third
generation P2P systems, the functionality and the distributed data are entirely in the peers
themselves, without the need for index servers or other infrastructure assistance elements.
Inspired by such third generation P2P data storage systems, an attempt has been made to
reformulate the centralized spectrum broker into a distributed P2P functionality. An initial
formulation of the P2P Spectrum Broker concept has been compiled in [128] (unpublished
work), of which a short summary is provided here. Development of a simulator for the
P2P broker was also started, but this was found to be too time consuming relative to the
remaining time available for this Ph.D. work and hence deferred to future work.
Short Description of the Concept
A radio node is defined in the same way as in Paper C. The spectrum broker functionality
is distributed onto all the radio nodes, by having each radio node contain a P2P agent, see
Figure 4.17. Each agent upon activation obtains a unique address in the P2P network, in a
P2P overlay address space.
FIGURE 4.17: Each radio node contains a P2P Spectrum Broker agent. Each agent communicates via wired Internet connection (solid green line) or wireless (red dash-dot) network connection.
A P2P command interface is defined, such that defined spectrum request and release actions are sent to the other peers in the network. From each participating peer, the idea is that a spectrum coordination action will appear as if the coordination is done by a central entity.
The permission to use a portion of the spectrum for a given time period and in a given region is termed a Spectrum Object. Each Spectrum Object also holds an address in the
4. RESULTS AND IMPLICATIONS
overlay space, and is stored in the relevant radio node, or replicated onto more radio nodes
for redundancy.
In order for the coordination communication to avoid flooding the entire network and
causing a communication breakdown, the coordination is limited to the relevant set of peers
holding the relevant set of Spectrum Objects. Two ways of addressing these relevant sets are
described:
TABLE 4.1: An example of a multidimensional overlay address construction for the structured P2P approach.

Longitude minimum: 20 bits | Longitude maximum: 20 bits | Latitude minimum: 20 bits | Latitude maximum: 20 bits | Spectrum interval, low frequency: 40 bits | Spectrum interval, high frequency: 40 bits | Time, start: 32 bits | Time, stop: 32 bits | Node hash: 40 bits
If mainly fixed-position nodes are assumed, an overlay address that includes the geographical position may be used. An example of such a multidimensional address³ is provided in Table 4.1. The relevant set may in this case be addressed by a multidimensional search [130] in the P2P overlay space.
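As an illustration of how such a multidimensional address could be packed into a single overlay key, the sketch below concatenates the Table 4.1 fields into one 264-bit integer. The field names, the packing order and the helper functions are assumptions for illustration, not the thesis' specification:

```python
# Sketch: pack the Table 4.1 fields into a single 264-bit overlay key.
# Field widths follow the table; names and ordering are illustrative.

FIELDS = [  # (field name, bit width)
    ("lon_min", 20), ("lon_max", 20),
    ("lat_min", 20), ("lat_max", 20),
    ("freq_low", 40), ("freq_high", 40),
    ("time_start", 32), ("time_stop", 32),
    ("node_hash", 40),
]

def pack_address(values):
    """Concatenate the fields into one 264-bit overlay key."""
    key = 0
    for name, width in FIELDS:
        v = values[name]
        assert 0 <= v < (1 << width), f"{name} exceeds {width} bits"
        key = (key << width) | v
    return key

def unpack_address(key):
    """Recover the individual fields from a packed key."""
    out = {}
    for name, width in reversed(FIELDS):
        out[name] = key & ((1 << width) - 1)
        key >>= width
    return out
```

With such a key, a multidimensional search can treat each field as one dimension of the overlay space.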
In the second alternative, the overlay address is of a conventional distributed hash table type. In this case, each peer maintains a lookup table with the addresses of all P2P radio node agents that are relevant to coordinate with. Inspired by [131], the lookup table is updated by listening to advertisements from peers announcing a successful ’claim’ of a Spectrum Object, and storing such relevant announcements.
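The lookup-table maintenance described above might be sketched as follows; the message fields, the relevance test and the pruning rule are hypothetical, intended only to illustrate storing claim announcements and dropping expired entries:

```python
# Sketch of per-peer lookup-table maintenance: store overlay addresses of
# agents that announced a successful 'claim' of a Spectrum Object, and drop
# entries once the lease has expired. Details are illustrative assumptions.

class ClaimTable:
    def __init__(self, is_relevant):
        self.is_relevant = is_relevant  # e.g. geographic/frequency overlap test
        self.entries = {}               # spectrum-object id -> (peer address, stop time)

    def on_announcement(self, obj_id, peer_addr, stop_time):
        """Store a claim announcement if it is relevant to this peer."""
        if self.is_relevant(obj_id):
            self.entries[obj_id] = (peer_addr, stop_time)

    def peers_to_coordinate_with(self, now):
        """Return live peer addresses, pruning expired Spectrum Objects."""
        self.entries = {k: v for k, v in self.entries.items() if v[1] > now}
        return {addr for addr, _ in self.entries.values()}
```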
A main advantage of the P2P Spectrum Broker is that it may quickly be put into action, also as an add-on to existing radio systems. Another advantage is that it provides computing workload scalability, since spectrum decision calculations may be distributed over all the radio nodes.
The P2P distributed spectrum broker concept introduces new challenges relative to those of a centralized spectrum broker: updating spectrum policies becomes more difficult, as does obtaining fairness in spectrum leases. The approach also puts more strain on network communication capacity, since in general the spectrum inquiries need to be communicated to a number of peers. Further, the dynamics of appearing and disappearing nodes need to be handled.
Summary
A benefit of a P2P Spectrum Broker system is the avoidance of the cost of a dedicated infrastructure. Further, sharing the workload of spectrum decision calculations among a number of equal-peer radio nodes creates a system that scales well in terms of workload. Another advantage is seamless transitions between spectrum occupancy calculations in different geographical regions.
Spectrum authority control of the spectrum usage is obviously more difficult than in the
central broker case. Further challenges are the required network communication capacity
for P2P spectrum handling and administrative traffic, and reliability concerns due to failing,
disappearing and disconnected nodes.
³ With reference to the time columns in the address, a possible 32-bit time format is the standard Unix time format. This uses a signed 32-bit representation with a zero reference time of January 1, 1970 and a resolution of one second. The software will need to handle the wraparound that will occur in the year 2038 [129].
In addition to the peer interaction, the P2P broker system may benefit from sources of indirect information, as explained in the next section.
A P2P distributed broker appears to be an attractive concept to pursue further. The
development of a P2P spectrum decision and communication overhead simulator, as well as
a prototype implementation, are suggested as future projects.
4.3.1.4 Sidenotes on Hybrid Architectures and Recent Developments
In the Dynamic Frequency Broker concept, the administrative communication in the system is between the radio nodes and the broker, with radio nodes sending requests to and receiving leases from the broker. In the P2P Spectrum Broker concept, on the other hand, the administrative communication is directly between the peers, through direct interaction in the form of message passing.
Making a simple parallel to interaction in human societies, the first case is equivalent to the workers always asking the manager, or the governing authorities, for the necessary resources and permission whenever a new task is to be carried out. Well-known disadvantages are that the manager tends to be busy or unavailable, so the workers occasionally have to wait idle. While efficiency may suffer, there is system and order.
The second case is equivalent to negotiating with colleagues to get the necessary resources or acceptances for carrying out the new task, without any intervention by management. The disadvantages here are that there could be many people that one would need to talk to, information could be uncertain or fuzzy, and it is difficult to know whether all the relevant people have been consulted. An obvious improvement is that there are no managerial bottlenecks, but more worker skill is required to negotiate and interpret the coordination information correctly, and sometimes, due to the fuzzy information, the coordination effort will need to be repeated.
For general multiagent systems, and inspired by biology, the benefits of having indirect interaction in addition to direct interaction between agents have been pointed out [132]. Here, direct interaction is defined as ”interaction via messages, where the identifier of the recipient is specified in a message” [132]. Indirect interaction is defined as ”interaction via persistent, observable changes to a common environment; recipients are any agents that will observe these changes” [132]. A commonly used biology example of indirect interaction is ant colonies, where the ants leave pheromones on their way to find food. With ants following a combination of pheromones and the scent of food, trails leading to good food sources will be reinforced.
Translating from a general multiagent system environment to radio agents in an electromagnetic environment, we can view the interference from the other radio links, sensed in the autonomous architecture and in the distributed architecture in Paper E, as such indirect interaction. The spectrum decisions in the radio links cause an aggregated interference that is observed by each link. A property of this indirect interaction is, however, that the flow of information has a directional character; it informs the receivers what the aggregated behavior of the transmitters is, but the aggregated performance of the receivers (or links) is not revealed. The resulting issues, e.g. weak links or hidden receivers, are discussed in several places in this dissertation. Another issue is that it is difficult to reveal the spectrum priority of the various users through interference sensing alone.
In the distributed interaction algorithms in Paper E, see Section 4.3.2.2, information flow in the other direction, from the receiver (or link), has been taken care of. Here, there are broadcast messages from links not reaching their minimum rates, within their geographical neighborhood, signalling to other links to reduce their use of spectrum (and optionally power).
The algorithm in Paper E restricts itself to Open Sharing scenarios, however, and does not address Hierarchical Sharing or priority of certain links.
An obvious and simple way of creating an environment with persistence and observability for indirect interaction, which also takes into account receivers and weak transmission links as well as Hierarchical Sharing, is establishing databases for spectrum use. This is the recent path taken by the U.S. Federal Communications Commission (FCC) in allowing reuse of the TV bands for secondary priority devices. The FCC requires that each secondary device has geolocation capability, and that it checks with a database about which channels can be used at its location [133]. All services entitled to protection are to be registered in this database, as are all fixed TV band secondary devices. The FCC invited private companies to administer such a database [133]. Google is among the nine companies that have applied to be a designated database manager.
With the database approach, the database is the element for indirect interaction, but
the actual decisions as to which spectrum to use are taken in a distributed manner in the
individual radio agent (or associated group of such agents). The database element thus
performs an easier task than the DFB does, for example.
Returning to the P2P broker system, this may also exploit the information from the central database, in addition to its local coordination. Another hybrid option is for P2P radio agents to carry redundant pieces of such a database, such that the database is distributed in the network of nodes, with only the relevant parts of it replicated to other regions.
In a recently co-authored paper [134], an architecture that incorporates both a centralized database as described above and negotiations in a P2P network has been sketched. The paper also contains a proposal for a frequency resource protocol for the interactions in this architecture.
When the protection of primary or prioritized users is included, the currently most likely way forward for DSA architectures is one that incorporates all three elements: distributed direct interaction, indirect interaction through interference sensing, and indirect interaction through centralized or distributed databases.
4.3.2 Dynamic Spectrum Access Algorithms
The DSA algorithms⁴ part of the third research goal is treated in Paper D and in part of Paper E.
4.3.2.1 Paper D: On Spectrum Sharing in Autonomous and Coordinated Dynamic Spectrum Access Systems: A Case Study
Tore Ulversøy, Torleiv Maseng and Jørn Kårstad, ”On Spectrum Sharing in Autonomous
and Coordinated Dynamic Spectrum Access Systems: A Case Study”, Wireless VITAE ’09,
Ålborg, Denmark, May 17-20, 2009
⁴ As commented previously, the term algorithm has been used both when describing centralized calculations and when describing distributed and autonomous calculation processes. While each autonomous or distributed agent may do its decision calculation according to an algorithm, a better term for the total process of calculating a resulting state between a number of autonomous or distributed radio agents is interactive computation.
Introduction and Assumptions
In a centralized DSA architecture, all spectrum-decision relevant information about individual nodes and links is aggregated in a central infrastructure database. This database could be on a single central server, or in a distributed database that, from the perspective of the nodes and links, appears to be a central database. The centralized information makes it possible to perform an optimization in this central infrastructure element to find optimum spectrum power density assignments. In Paper D, maximum sum capacity is used as the optimization criterion. The centrally calculated optimum is termed the Global Optimum.
A drawback of the Global Optimum is that exact solutions have prohibitively high computational complexity. Additionally, in autonomous cases where the link condition information is not aggregated in a central infrastructure, and assuming that only the link's own state and the accumulated noise plus interference are known by each link, global optimization is not possible. Treating the accumulated noise plus interference as a constant, the local optimization of each link may be found by the method of Lagrange multipliers. This leads to the well-known Waterfilling solution [24, 108, 109]. The spectrum power density assignment solution achieved by running the Waterfilling iteratively (Iterative Waterfilling) for each link is referred to here as the Competitive Optimum [24].
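The Waterfilling solution can be illustrated with a minimal sketch: transmit power is allocated up to a common 'water level' above the per-segment noise-plus-interference floors, subject to a total power budget. The function names and example values below are illustrative, not the exact formulation from the cited papers:

```python
# Minimal Waterfilling sketch: per-segment noise-plus-interference levels
# (translated to the transmitter side) and a total power budget in, powers out.
from math import log2

def waterfill(levels, total_power):
    """Per-segment powers p_k = max(0, mu - levels[k]) with sum(p) = total_power."""
    order = sorted(levels)
    n = len(order)
    mu = 0.0
    for k in range(1, n + 1):
        candidate = (total_power + sum(order[:k])) / k
        # The water level is valid if it covers exactly the k lowest segments.
        if candidate >= order[k - 1] and (k == n or candidate <= order[k]):
            mu = candidate
            break
    return [max(0.0, mu - lvl) for lvl in levels]

def total_rate(levels, powers):
    """Theoretical capacity summed over unit-width segments (bits/s/Hz)."""
    return sum(log2(1.0 + p / lvl) for p, lvl in zip(powers, levels))
```

For instance, with segment floors [1.0, 2.0, 4.0] and a budget of 3.0, the water level settles at 3.0, so the worst segment receives no power at all; iterating this per link against the interference the other links create yields the Competitive Optimum described above.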
The Competitive Optimum is often referred to as a computationally tractable approximation to the Global Optimum. Paper D investigates the relation between the Competitive Optimum and the Global Optimum for illustrative link deployments, and discusses the implications for DSA architectures.
It is assumed that the transmitters and receivers can be modeled as simplex, interfering links. The radio spectrum is divided into spectrum segments over which the interference and noise are assumed to have constant power spectral density. Furthermore, it is assumed that the link bit rates in each segment can be calculated as the theoretical information capacity based on a modified signal-to-noise-and-interference ratio in the segment, where the modification of the signal-to-noise-and-interference ratio is to account for the difference between theoretical information capacity and practically achievable bit rates.
Summary of Contributions
Based on the receiver-centric n-link interference model, and using the information-theoretic view, the paper compares ideal centralized power density assignments (Global Optimum assignments) with optimum selfish autonomous power density assignments (referred to as Competitive Optimum assignments). Two different variants of the Competitive Optimum solution are studied: one in which optimal target bit rates are known beforehand, such that these can be used as target rates for the Iterative Waterfilling in each individual link; for the other variant, it is assumed that optimal rates are not known beforehand, and thus the autonomous links run with aggressive target bit rates.
A detailed case study is conducted for a numerical example with two links and two spectrum segments, comparing the Global Optimum and Competitive Optimum solutions for a low interference, a high interference and an asymmetrical interference deployment. It is shown that for conflicting links with high symmetric or asymmetric interference, the Competitive Optimum solution may be significantly worse than the Global Optimum one. It is also shown, in the asymmetric interference case, that using an a priori known optimal Competitive Optimum target rate made the Competitive Optimum solution equal to the Global Optimum one.
The results are related to the different DSA architectures, and suggestions are made for practical policies and for how to improve on the Competitive Optimum solution.
Discussion
The conclusions in Paper D are based on a calculation model using information theory and the assumptions listed previously. From a research methodology point of view, such calculated model-based results of course do not strictly fulfill the ’measure’ step; refer to Section 1.3.3. Thus, measurements on a real-world system would have been an improvement, but were outside the ambitions of this work. An advantage of the simple model is that it provides a clear understanding of the relation between the starting point and the results.
A fundamental assumption for the treatment of the autonomous links in Paper D is that each receiver observes the accumulation of the background noise and the interference from other transmitters. The power density versus frequency is calculated in each step by the Waterfilling solution, assuming that the background noise plus interference is constant. No link receives any coordination messages from other links, nor attempts to analyze the measured interference to gain knowledge about other individual links.
It should be noted that uncorrelated noise levels between the frequency segments of different links will tend to result in fairer Competitive Optimum solutions between the links.
Also, if the basic assumptions are changed, better power density allocations than the pure Competitive Optimum solution may in some cases be found. Such changes of assumptions can be that the links are allowed to learn from their power density allocations, draw conclusions about the influence of their power density allocations on other links, analyze the interference to gain knowledge about other links in the vicinity, or receive coordinating communication from other links. Game theory [18, 42, 135] can be used in such cases to analyze the problem further.
Game theory is a set of models and analytical tools for the analysis of interacting
decision-making entities. Examples of such entities are players in a poker game and the
market participants in an economic system.
Game theory has become popular in recent years for modeling interacting agents in wireless systems, including the modeling of cognitive spectrum access agents.
A game has three basic components: a set of players, a set of actions and a set of preferences [135]. The actions are the alternatives available to each player. In a Dynamic Spectrum Access example, such actions may be transmit power levels and spectrum intervals. The preferences represent each player's evaluation of the action outcomes. Relationships between preferences are typically expressed as utility functions.
One goal of game theory is to predict what will happen when the game is played [135].
One such common prediction is the Nash Equilibrium, which is an action profile at which
no player has an incentive for deviating unilaterally.
The ’high interference’ scenario from Paper D is revisited as an example. As in the paper, two segments and two links are considered. Each of the links has four action alternatives: not to transmit (00), to transmit in segment 1 (10), to transmit in segment 2 (01), or to transmit in both segments (11) with the available power shared between the two segments. The utility function is taken as the calculated achievable data rate according to Equation D.4. The game is illustrated in what is termed ’strategic form’ [135] in Table 4.2. The utility function values are presented as (player 1, player 2) pairs in the table. It is seen in this example that there are multiple Nash Equilibria. Two Nash Equilibrium examples, where it is readily seen from the table that there is no incentive for either player to unilaterally deviate, are the points (10,01) and (01,10). These two points give reasonably fair solutions between the two radio links. However, it is seen that the inefficient point (11,11) is also a Nash Equilibrium and a possible end point of the game.
TABLE 4.2: The ’high interference’ game with two players each capable of using two frequency segments, in strategic form. The actions of player one (see text) are in the rows, and those of player two in the columns. The table cells are the utility function values of player one and player two respectively. The utility function in this example is the calculated data rate according to Equation D.4.

                                 Player 2
Player 1   00             01                 10                 11
00         (0, 0)         (0, 87.48k)        (0, 87.48k)        (0, 131.06k)
01         (77.04k, 0)    (0.43k, 0.40k)     (77.04k, 87.48k)   (0.85k, 65.7k)
10         (77.04k, 0)    (77.04k, 87.48k)   (0.43k, 0.40k)     (0.85k, 65.7k)
11         (112.14k, 0)   (56.29k, 0.798k)   (56.29k, 0.798k)   (0.86k, 0.80k)
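The Nash Equilibria of this small game can be verified by brute force over the strategic form. The sketch below copies the payoffs from Table 4.2 (in kbit/s, i.e. the 'k' suffix dropped) and checks every action profile for unilateral-deviation incentives:

```python
# Brute-force verification of the Nash Equilibria in the Table 4.2 game.
# Payoffs are entered in kbit/s; rows/columns are ordered 00, 01, 10, 11.
ACTIONS = ["00", "01", "10", "11"]
U1 = [[0.0, 0.0, 0.0, 0.0],                 # player 1 utilities
      [77.04, 0.43, 77.04, 0.85],
      [77.04, 77.04, 0.43, 0.85],
      [112.14, 56.29, 56.29, 0.86]]
U2 = [[0.0, 87.48, 87.48, 131.06],          # player 2 utilities
      [0.0, 0.40, 87.48, 65.7],
      [0.0, 87.48, 0.40, 65.7],
      [0.0, 0.798, 0.798, 0.80]]

def nash_equilibria(u1, u2):
    """Action profiles where neither player gains by deviating unilaterally."""
    n = len(ACTIONS)
    eqs = []
    for i in range(n):
        for j in range(n):
            best1 = u1[i][j] >= max(u1[k][j] for k in range(n))  # player 1 best response
            best2 = u2[i][j] >= max(u2[i][k] for k in range(n))  # player 2 best response
            if best1 and best2:
                eqs.append((ACTIONS[i], ACTIONS[j]))
    return eqs

print(nash_equilibria(U1, U2))  # → [('01', '10'), ('10', '01'), ('11', '11')]
```

The search confirms the three equilibria discussed in the text: the two fair points (10,01) and (01,10), and the inefficient point (11,11).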
Playing the game repeatedly, and including punishment strategies, is a way of potentially avoiding the game ending in such inefficient points. If player 1 selects action (11), player 2 may punish player 1 by also selecting (11). In the next round of play, player 1 will then know that playing (11) is unwise, and instead select an action that it believes it will not be punished for, e.g. (10).
This repeated-playing line of thought may be extended into longer-term strategies and is applicable also to a general scenario with mobility and users entering and leaving. Here, although unreasonable occupancy of spectrum resources may be individually rewarding and not lead to punishment on a short time scale, players may take into account the longer-term negative effects of such actions. Such ’foresighted users’ [136] are ones that seek to maximize their rewards in the long term.
4.3.2.2 Paper E: A Comparison of Centralized, Peer-to-Peer and Autonomous Dynamic Spectrum Access in a Tactical Scenario (continued)
Tore Ulversøy, Torleiv Maseng, Toan Hoang and Jørn Kårstad, ”A Comparison of Centralized, Peer-to-Peer and Autonomous Dynamic Spectrum Access in a Tactical Scenario”,
MILCOM 2009, Boston, October 18-21, 2009
Introduction and Assumptions
This section deals with the DSA-algorithm-related part of Paper E (the architecture comparison part was discussed in Section 4.3.1.2).
The same n-link interference model and information-theoretical view as in Paper D are employed, but with simulations of a greater number of randomly deployed links and a greater number of frequency segments.
As explained in Section 4.3.1.2, for simulation of Peer-to-Peer interaction algorithms,
ideal 0-loss and 0-delay information interchange between the links is assumed. That is, the
effects of latency and throughput limitations of the coordination channels are not included
in the model.
The Ideas
Core contributions in Paper E are the distributed interactive computation algorithms. The starting point for the computation is the solution that is reached collectively by the system of links when they each optimize their power allocation in each segment in order to selfishly reach their optimum data rate: the already mentioned Iterative Waterfilling solution. The ’Waterfilling’ term refers to the nature of the solution, a ’filling level’ up to which the transmission power, added to the noise and interference translated to the transmitter side, should be filled.
The possible problems with these power density allocations, when there is no feedback in the system of links, are discussed in both Paper D and Paper E. In short, some of the links have their performance destroyed by interference from other links, without being able to improve this situation by changing their own power allocation. The broad nature of the Iterative Waterfilling allocation, with power typically allocated in many segments, contributes to making this problem worse.
The links that do not achieve their minimum data rate within their maximum transmit power limitation need access to more low-interference spectrum in order to increase their performance. The idea behind the D1 interactive algorithm in the paper is to create more room for the poorly performing links by having the other links not use as many segments as they normally would in the Iterative Waterfilling solution, but rather limit themselves to fewer segments, in a dynamic manner and as needed.
This principle is sketched in Figure 4.18, where link 2 does not reach its minimum data rate and therefore, within a coordination distance, broadcasts a reduce message. If not already at maximum, link 2 also increases its own allowed number of segments. If link 1 determines that it is a significant contributor to the interference seen at the poorly performing link (further details in the paper), link 1 reduces by one the maximum number of segments it can use in the Iterative Waterfilling.
In the second interactive computation algorithm, D2, more room for poorly performing links is created in the same way, but by reducing both the number of segments and the target rate of the interfering links.
Summary of Contributions
In the paper, three different autonomous policies are defined and analyzed, and the number of operational links above a minimum bit rate and the resulting average bit rate per link are plotted. The first of these policies sets a uniform target bit rate for the links, the second sets a maximum number of frequency segments that the links are allowed to use, and the third is one where the power is reduced proportionally to the maximum number of frequency segments. The policy with a limitation on the number of frequency segments is shown to be the best performing.
Two interactive algorithms, in which the links have administrative communication on an on-demand basis as explained in the section above, are developed and simulated. It is concluded that by allowing administrative communication between the links in this way, higher average bit rates per link can be achieved, at a requirement of 100% operational links, than in the autonomous policy-governed cases.
For the centralized case, it is suggested to run the same distributed algorithms, but in the
centralized server instead.
Particularly considering their simplicity, the algorithms may be attractive candidates for
implementation.
FIGURE 4.18: Principle for D1 distributed computation: Poor-performing links request other links
within a coordination radius to reduce the number of spectrum segments used in the Iterative Waterfilling.
Discussion Concerning Computational Scalability
In this section the scalability of the D1/D2 algorithms is discussed from the viewpoint of the computational complexity of the required calculations.
In a Dynamic Spectrum Access system implementing the D1/D2 algorithms, a high-level breakdown of the required DSA radio client functionality is illustrated in Figure 4.19. The D1/D2 functionality is illustrated inside the rectangle, and the interfacing modules outside the rectangle. Here, the Send block sends the reduce messages when the data rate is too low. The Receive and process block continuously monitors for incoming ’reduce’ messages, processes them, and once per waterfilling iteration feeds the resulting maximum segment number (and, in the case of D2, target rate) to the Waterfilling block. The Waterfilling block, at this iteration rate, uses the sensed spectrum data and segment limits (and, in the case of D2, target rate), runs the waterfilling calculation, and then feeds the transceiver with the new settings.
With respect to the computational complexity of the D1/D2 DSA client, the dominant parts in terms of peak workload will be the Waterfilling and the Receive and process blocks. The Send block will only occasionally send messages (a maximum of one per iteration).
The Waterfilling has a computational complexity that depends on the number of segments, M, as well as on the maximum allowed number of segments, mi. It does not depend on the number of links, N, however, as each link does its own computation.
Figure 4.20 presents measurements of computation time per iteration as a function of the number of segments (M) for a Waterfilling test implementation. The test implementation is run as a Matlab program on a computer with a P8600 2.4 GHz processor. The maximum allowed number of used segments in the Waterfilling is set to mi = M/2. The computation time is seen to scale approximately linearly for small M and to second order for large M. Hence, from a computational point of view, the Waterfilling part scales well to large numbers of segments.
[Figure 4.20 plot: computation time per waterfilling iteration (msec) versus the number of segments M, with second-order polynomial fit y = 3.1E-06*x^2 + 1.4E-02*x + 3.9E-02.]
FIGURE 4.20: Measurements of the computation time of one iteration of a test implementation of the Waterfilling block, as a function of the number of spectrum segments, M. The maximum allowed number of used segments in the algorithm is set to mi = M/2 for all the measurements. Second-order fitted line also shown.
The Receive and process block receives and processes the messages from other links, and thus has a computational complexity that depends on the number of received messages. For each message, a calculation of the interference to the link that sent the reduce message is needed. Assuming only one permitted message per Iterative Waterfilling interval per link, the computational complexity in the worst case scales linearly with the number of links within the coordination radius. Typically, though, the number of messages that need to be handled will be much smaller than the number of links within the coordination radius. From both these arguments, this computation will scale well to large numbers of links.
Discussion Concerning Coordination Messages
The infinite-capacity assumption for the coordination channels, made in the simulations, is likely to be a good approximation in cases with low link densities, where few coordination messages are necessary.
With high link densities, the effects of the volume of coordination messages exceeding the capacity of the coordination channels are not accounted for in the simulations; in a real system under these conditions, messages may get lost and links may fail to achieve their minimum data rates. Several parameters can be adjusted to minimize the probability of such loss of coordination messages: an obvious one is the dimensioning of the capacity of the coordination channel, as are the density of links sharing the spectrum band and the value of the minimum link data rate. Applying the maximum-number-of-segments policy (A2 in Paper E) while running the distributed algorithm will also reduce the number of cases where coordination messages are needed.
Coordination messages are assumed to have zero delay in the model. With slow rates of recalculation of spectrum assignments, say a recalculation every minute, actual message delays will be small relative to the recalculation period, and the effects of real message delays are likely to be negligible.
For fast recalculation rates in a real system, there is a risk that coordination messages
caused by the last recalculation have not arrived at the time of the next recalculation. Such
delayed messaging under conditions of frequent spectrum assignment recalculation will
need further study. A conservative approach will be to maintain a recalculation period
which is well above the maximum coordination message propagation delay.
Further Discussion Points and Recommendations for Further Work
The results and comparisons of the algorithms are based on Matlab computer simulations, using the model and theoretical assumptions listed above. Simulation does not suffice as scientific evidence that an algorithm works in a real-world environment, but it may support the design and suggest that further investigations in real-world environments are worthwhile. Hence, a further recommended step is prototyping and measurement in real environments. Such prototyping was, however, outside the scope of the project in terms of available time and equipment. The results from the simulations nevertheless give grounds for optimism that the algorithms, when adapted to practical systems, will work in a real environment.
For the centralized case, the paper suggests using the distributed algorithm, running
internally in the spectrum broker. A suggestion for further work on the centralized case
is to apply the autonomous case Competitive Optimum and/or the result from running the
distributed algorithm in the broker as starting points for further optimization using a genetic
algorithm. Genetic algorithms are inspired by biological evolution and use elements such as
inheritance, crossover, selection and mutation.
Chapter 5
Conclusions and Recommendations for
Further Work
This chapter first revisits the research goals, followed by a review of the major contributions
of this thesis. Some critical remarks are then discussed, and finally ideas for further research
are provided.
5.1 Revisiting the Research Goals
The overall motivation for the work presented in this thesis has been to contribute to the
evolution of radio systems from fixed waveform functionality + static frequency use to ones
with loadable waveform functionality + dynamic spectrum access. Three main areas were
focused on, and three research goals formulated:
• To assess the degree to which the conceptual challenges of Software Defined Radio
have been solved and what challenges still remain in the fields of software architecture,
computational requirements, security, regulations and business structure.
• To analyze the workload overhead of Software Communications Architecture-based
Software Defined Radio in a specific processing configuration.
• To contribute to establishing practical ways of dynamically sharing the frequency
spectrum and in particular compare autonomous, distributed and centralized spectrum
sharing architectures.
5.2 Major Contributions
The contributions presented in this thesis have been published in five research papers, which are included in Part II of the thesis. Some further follow-up work is included in Part I. The work in this thesis has also provided partial input to two other co-authored research papers, as well as to two (multi-authored) NATO Research and Technology Organisation (RTO) reports, one on Software Defined Radio and the other on Cognitive Radio, as listed under 'Other Co-authored Publications'.
5.2.1 Conceptual Challenges of Software Defined Radio
This work is presented in Paper A, which is published in IEEE Communications Surveys
& Tutorials, and is available for open access download. The publication is the only SDR
tutorial in this journal, and may serve as a broad introduction to SDR and the remaining
challenges of SDR, for engineers and scientists in this field of work.
These are a subset of the conclusions made in the paper:
Within SDR software architecture, the paper points to the challenges that still remain in the application portability of Software Communications Architecture-based applications. The major issue is the standardization of application interfaces between the waveform application and the platform services and devices; another is the differing approaches to communication with special processors. The paper also discusses alternative architectures to SCA, as well as alternative middleware to CORBA.
The paper points to the size, weight and power consumption challenges of providing
sufficient computational processing capacity for future handheld multi-waveform terminals.
Some recent multiple-SIMD processors are pointed to as particularly promising for handheld
devices.
While the security of SDR systems is a well-studied subject, it is still one that will require attention from security organizations and engineers in the years to come, particularly for military systems. Standardization of security architectures, or at least of their application interfaces, so that the security-related code also becomes portable, remains a challenging issue.
In terms of regulatory certification of SDR equipment, changes have already taken place, but the paper points to the need for further evolution of the certification rules.
In the near term, the largest potential for SDR systems is seen, in addition to the military radio communication domain where there are ongoing programs, in the cellular infrastructure market, where the rapid upgrades of standards and the fewer restrictions on power, size and weight point in favour of SDR. Satellite communications is another area that appears particularly promising for SDR.
A change in business structure is predicted, into SDR platform providers and third-party software providers, which may lower the threshold for companies entering the SDR market as software providers, and hence increase competition in the SDR software applications area.
5.2.2 Workload Overhead of SCA-based Software Defined Radio
This work is presented in Paper B in Part II, which is available for public download from the NATO Research and Technology Organisation. Additional follow-up work is presented in Section 4.2.2 in Part I.
The paper investigates the workload effects of varying the granularity of an SCA-based application, when the application components are deployed in the default manner as independent components on a CORBA-enabled GPP. Workload measurements are carried out using two different statistical profilers. Two different analytical models are also formulated; the first does not include context switches while the second one does. Comparing the results from the first model with the measurements showed that this model captures the major part of the workload overhead. It underestimates the total workload of the system, though, and the more so the larger the packets are. This is natural since it does not include the
context switch losses. The larger the packet size, the higher the indirect cost related to the
context switch losses due to cache sharing between processes.
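The qualitative relationship between the two analytical models can be sketched as follows. This is an illustrative reconstruction, not the actual models of Paper B: the function names, parameter values and the chain-deployment assumption are all hypothetical.

```python
# Hypothetical illustration in the spirit of Paper B's two models
# (parameters and structure are assumptions, not the paper's): model 1
# counts per-component processing plus inter-component communication
# overhead; model 2 adds context-switch losses whose indirect (cache
# refill) cost grows with packet size.

def workload_model1(n_components, packets_per_s, cycles_proc,
                    cycles_comm_per_packet):
    """Cycles/s without context-switch losses (components in a chain)."""
    comm_links = n_components - 1  # one connection between neighbours
    return (n_components * cycles_proc
            + comm_links * packets_per_s * cycles_comm_per_packet)

def workload_model2(n_components, packets_per_s, cycles_proc,
                    cycles_comm_per_packet, cycles_switch_direct,
                    cycles_switch_per_byte, packet_bytes):
    """Model 1 plus direct and cache-related context-switch losses."""
    switches_per_s = 2 * (n_components - 1) * packets_per_s
    indirect = cycles_switch_per_byte * packet_bytes  # cache sharing cost
    return (workload_model1(n_components, packets_per_s, cycles_proc,
                            cycles_comm_per_packet)
            + switches_per_s * (cycles_switch_direct + indirect))
```

With this structure, model 2 always exceeds model 1, and the gap widens with packet size, matching the qualitative observation above.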
The paper demonstrates that the application granularity, as well as the frequency of the inter-component communication, needs to be taken into account when optimizing a GPP-based SCA application. The review of related research points out that the literature on workload overhead in SCA is sparse, and that the scope of this analysis provides a useful contribution to the research literature on the subject. The results are beneficial as guidelines on how to minimize workload overhead in SCA-based applications. In the follow-up work, additional ways of optimizing SCA-based applications are discussed, both for GPP environments and for heterogeneous environments. These include changing transports, such as from TCP/IP to a Unix transport, deploying components in a common address space in a ResourceFactory, and using an ORB that is able to exploit these opportunities.
5.2.3 Dynamic Sharing of Spectrum
This work is presented in Papers C through E in Part II, which are all available through IEEE
Xplore, as well as in Sections 4.3.1.3 and 4.3.1.4 of Part I.
5.2.3.1 The Dynamic Frequency Broker
A centralized approach to DSA, the Dynamic Frequency Broker, is presented in Paper C. The idea here is a hierarchy of frequency brokers, where the lower-level ones are responsible for a geographic region and provide leases to radio nodes. Each DFB has a database of radio nodes and spectrum assignments, as well as a propagation calculation model based on a topographical map. The DFBs are proposed to communicate with each other and with the radio nodes using Web Services, so that the nearest DFB may be found from a service registry. The Dynamic Frequency Broker proposal complements other prior work on spectrum brokers.
Related to the DFB, Section 4.3.1.3 in Part I provides a conceptual proposal for a P2P
broker, where the broker functionality is distributed among the peers instead of requiring
central infrastructure.
5.2.3.2 Comparison of Dynamic Spectrum Access Architectural Concepts
The comparison of DSA architectural concepts is presented in Paper E. The comparison is made along the following categories: spectrum decisions, computational complexity, coordination traffic and considerations for hostile environments. For the first three categories, the comparison uses a link interference model with assumptions as described in the next section.
The analysis shows that the DSA architectural concepts all have advantages and disadvantages, but that much speaks in favor of an architecture with distributed (decentralized) interaction between the radio nodes. Vulnerability and security issues exist for the coordination channels needed for such interaction to take place, and these channels must be protected. It is also favorable if systems can degrade gracefully from coordinated to autonomous operation when the coordination channels are for some reason blocked.
Section 4.3.1.4 attaches further remarks to the coordination communication in the distributed architecture, pointing to the preference for combining local information exchange with indirect information, such as aggregated information from spectrum databases and sensed spectrum occupancy.
5.2.3.3 Dynamic Spectrum Access Algorithms based on Iterative Waterfilling
The work on DSA algorithms is based on a link interference model. The bandwidth B is
divided into M segments within which both the noise and interference are assumed to have
constant power densities. The algorithms and policies discussed are all modifications and
additions to the Iterative Waterfilling algorithm.
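The segmented model and the Iterative Waterfilling baseline can be sketched as follows. This is a textbook-style illustration under the stated assumptions (constant noise and interference densities within each segment), not the exact formulation of Papers D and E; the gain-matrix convention and the bisection search for the water level are choices made here.

```python
import numpy as np

# Illustrative sketch of Iterative Waterfilling over M frequency
# segments: each link repeatedly waterfills its power budget against
# the noise-plus-interference it currently sees from the other links.

def waterfill(noise, P, tol=1e-9):
    """Single-user waterfilling: p_m = max(0, mu - noise_m), sum(p) = P.
    The water level mu is found by bisection."""
    lo, hi = noise.min(), noise.max() + P
    while hi - lo > tol:
        mu = 0.5 * (lo + hi)
        if np.maximum(mu - noise, 0.0).sum() > P:
            hi = mu
        else:
            lo = mu
    return np.maximum(lo - noise, 0.0)

def iterative_waterfilling(G, noise, P, iters=50):
    """G[k, j, m]: gain from link j's transmitter into link k's receiver
    on segment m; noise[k, m]: receiver noise; P[k]: power budgets."""
    K, _, M = G.shape
    p = np.zeros((K, M))
    for _ in range(iters):
        for k in range(K):
            interf = sum(G[k, j] * p[j] for j in range(K) if j != k)
            eff_noise = (noise[k] + interf) / G[k, k]
            p[k] = waterfill(eff_noise, P[k])
    return p
```

Each link's update is the best response given the others' current powers; the Competitive Optimum discussed below corresponds to the fixed point of such best-response iterations.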
In Paper D, comparisons are made between the Competitive Optimum Iterative Waterfilling solution, both with aggressive target rates and with optimum precalculated target rates, and the Global Optimum solution, i.e. the optimum centrally calculated solution. The comparisons are made in three different typical deployments. It is shown that the Competitive Optimum solution may deviate substantially from the Global Optimum in certain deployments. Suggestions for how to improve on the Competitive Optimum solution are discussed.
In Paper E, three autonomous policies and two distributed-interaction DSA algorithms are suggested and simulated for a simplex link interference model under idealized conditions. The simulations show that the distributed-interaction algorithms provide higher average bit rates over all the links than the autonomous policies, while also seeking to assure that each link obtains at least a minimum data rate. The policies and algorithms may be useful as candidates for implementation in DSA systems.
5.3 Critical Remarks
5.3.1 Conceptual Challenges of Software Defined Radio
For completeness of the review of remaining SDR challenges, it would have been advantageous if Paper A had also covered analog hardware and front-end issues. Due to the extent of
this topic, however, this was not included in the research goal. The interested reader should
consult some of the references provided in Paper A and in Section 3.1.
5.3.2 Workload Overhead of SCA-based Software Defined Radio
An improvement of this work would have been a better verification of modeled versus measured data for the second derived model in Paper B, as only coarse estimates for the second model were carried out, partly due to time limitations and partly due to some software tools not being compatible with the operating system kernel on the particular computer platform. The work could also have been improved by more consistency in the presentation of the results in relation to the accuracy of the measured data. There are several ways the work can be further extended, as outlined below.
5.3.3 Dynamic Sharing of Spectrum
The results of this research goal are based largely on numerical calculations and simulations, as well as conceptual discussions referencing supporting literature. While simulation results can provide support for the hypotheses that the simulation model is built on, and can also be a way of finding new hypotheses, simulations of course cannot be considered scientific evidence. Thus, as commented on in the methodology section 1.3.3, the simulation results and their analysis are not scientifically sufficient to check off the 'measure and analyze' steps in the engineering paradigm methodology; real-world measurements on realistic systems are needed. However, the resources needed to build such experimental systems, in terms of available time and equipment, exceeded what was available, and such implementation was deferred to future work. An intermediate level, between the idealized model and real-world systems, would be finer-scale simulations, with realistic waveforms and taking into account limited-capacity and finite-latency coordination channels.
Related to the issue of real-world measurements, it should be added that some voices in the DSA research community argue that one of the largest weaknesses of current DSA research in general is that too few realistic-size DSA systems are being built and tested.
5.4 Recommendations for Further Work
5.4.1 Workload Overhead of SCA-based Software Defined Radio
There are several possible extensions to the work on workload overhead of SCA-based SDR. As mentioned above, further work is desirable on the model that includes context switching, both in terms of establishing a methodology for estimating the parameters in the model and in order to verify the model further.
The results in Paper B show that significant workload overhead is caused by mere data copying and formatting, which are highly parallelizable data operations. It is therefore very likely that part of this workload overhead could be eliminated if these operations were carried out by dedicated hardware functionality. Investigation of such hardware acceleration of these operations is suggested.
One of the most interesting activities in the area of SCA-based development, at the time
of writing, is the ongoing work on SCA Next, i.e. the next revision of the SCA specification.
One of the goals of this coming revision of SCA is to make it middleware agnostic, such
that other middleware can be used instead of CORBA. This makes research into alternative
middleware of great interest.
The measurements in Paper B were conducted on a single-core processor, while multicore is now the standard. The topic of optimal reduction of workload overhead on multicore
processors may be worth studying.
A further natural extension is towards workload optimization in a general heterogeneous
multiprocessor environment, including optimization of modules for special processors such
as FPGAs and DSPs, and investigations into suitable middleware for such processors.
5.4.2 Dynamic Spectrum Access Architectures and Algorithms
5.4.2.1 Experimental System
As discussed previously, it would be beneficial to carry out further verification measurements of the algorithms in Paper E on real prototype systems; this is suggested as further work in DSA. The prototype systems need coordination channels such that the coordination
messages of the distributed algorithms may flow in the system. Such prototype systems
should have a payload waveform that allows the segmented frequency model to be used.
One candidate waveform which conforms well to this model is Non-Contiguous Orthogonal Frequency Division Multiplexing (NC-OFDM), where the individual carriers may be
turned off, coded with different modulations and excited with different power levels. An
advantage with NC-OFDM and OFDM in general is a very efficient implementation. At
a penalty of a higher implementation complexity, several narrowband waveform channels
with individually varying power, coding and modulation may be used instead.
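A minimal sketch of how an NC-OFDM transmitter can realize the per-carrier control described above (carriers turned off via a mask, individual power levels), assuming a textbook unitary IFFT modulator with cyclic prefix; the FFT size, prefix length and scaling are illustrative choices, not taken from the thesis.

```python
import numpy as np

# Illustrative NC-OFDM symbol generation: a binary mask disables
# individual carriers, remaining carriers get per-carrier power levels,
# and a unitary IFFT plus cyclic prefix produces the time-domain symbol.

def nc_ofdm_symbol(data_syms, mask, powers, n_fft=64, cp_len=16):
    """data_syms: complex symbols for the active carriers (in order);
    mask: length-n_fft 0/1 vector of enabled carriers;
    powers: per-carrier linear power levels (length n_fft)."""
    freq = np.zeros(n_fft, dtype=complex)
    active = np.flatnonzero(mask)
    assert len(data_syms) == len(active)
    freq[active] = np.sqrt(powers[active]) * data_syms
    time = np.fft.ifft(freq) * np.sqrt(n_fft)      # unitary scaling
    return np.concatenate([time[-cp_len:], time])  # prepend cyclic prefix
```

Segments that must be vacated (e.g. because another link holds them) are simply zeroed in the mask, which maps directly onto the segmented frequency model.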
5.4.2.2 Evolutionary Computation
In the discussion in Section 4.3.1.2 it is suggested to use the autonomous case Competitive Optimum and/or the result from running the proposed distributed algorithm in the broker as seeds for a genetic algorithm in the broker. A genetic algorithm is a heuristic search algorithm that works by evolving populations of chromosomes, where the operations on the chromosomes typically are selection, mutation, inheritance and crossover. The genetic algorithm is one form of evolutionary algorithm, which in turn is a subset of evolutionary computation. Evolutionary computation comprises computational processes with iterative progress, such as the refinement of a population over time, and is typically inspired by biological processes.
Comparisons of seeded genetic algorithms versus non-seeded ones, in terms of the number of generations needed to reach the same average data rate, are suggested as further work in the centralized broker case.
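A seeded genetic algorithm of the kind suggested here can be sketched as follows, with a toy bitstring fitness standing in for the broker's objective; the encoding, truncation selection scheme and all parameter values are assumptions for illustration only.

```python
import random

# Minimal seeded genetic algorithm (illustrative; the fitness and the
# chromosome encoding are placeholders, not the broker's actual
# objective). The population may be seeded with a known-good solution,
# such as the Competitive Optimum, to speed up convergence.

def evolve(fitness, n_genes, pop_size=30, generations=100,
           mutation_rate=0.05, seed_solutions=()):
    rng = random.Random(1)
    pop = [list(s) for s in seed_solutions]       # seeding step
    while len(pop) < pop_size:
        pop.append([rng.randint(0, 1) for _ in range(n_genes)])
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[:pop_size // 2]              # truncation selection
        children = []
        while len(parents) + len(children) < pop_size:
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, n_genes)        # one-point crossover
            child = a[:cut] + b[cut:]
            for i in range(n_genes):               # bit-flip mutation
                if rng.random() < mutation_rate:
                    child[i] ^= 1
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

# Toy usage: with fitness = sum, the optimum is the all-ones chromosome;
# seeding with a near-optimal solution gives the search a head start.
best = evolve(sum, n_genes=20, seed_solutions=[[1] * 19 + [0]])
```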
In the autonomous and distributed cases in Papers D and E, the distributed computation is based on convex optimization principles. When analyzing more complex cases, however, the optimization problems often have a non-convex character. Also, it is often beneficial to include several objectives in the optimization, such as data rate, delay, transmission power and spectrum occupancy. In these cases evolutionary computation may be attractive as an alternative approach also in the autonomous and distributed cases.
The approach of using evolutionary computation in the form of a genetic algorithm as a decision and learning aid in decentralized radio agents or Cognitive Radios has been pursued for some time [137], and recent publications [138, 139] have demonstrated that it is a promising candidate technology for Cognitive Radios. A problem with evolutionary computation in portable devices, however, is that the algorithms are computationally demanding and hence may be unacceptably slow [137], requiring focus both on careful selection of algorithms and search spaces and on platform implementation issues.
For the autonomous and distributed cases, suggested research areas are the reduction of computation time, parallel implementations of evolutionary algorithms on radio platforms, the seeding of the algorithms with populations believed to be close to the desired working points [137], system convergence, and stepwise refinement of the computations.
5.4.2.3 Other Suggestions
Several other suggestions for expanding this work have been provided in the preceding chapters. The concept of a distributed broker using a P2P overlay addressing layer is interesting to pursue with a simulation model and/or a prototype implementation.
There are several interesting areas related to the distributed interaction algorithm in Paper E that deserve further study, e.g. a detailed study of the effects of message propagation delays and message congestion in limited-capacity coordination channels. There are many directions in which the work in Paper E can be expanded. One example is towards models other than the link interference one. Another is an expansion to include Hierarchical Sharing and to take prioritized communication into account, e.g. by calculating a maximum power density restriction for each of the frequency segments based on database information.
References
[1] D. Cichon and W. Wiesbeck, “The Heinrich Hertz wireless experiments at Karlsruhe in the view of modern communication,” in International Conference on 100 Years of Radio, 1995, pp. 1–6, Sep. 1995.
[2] W. Brinkman, D. Haggan, and W. Troutman, “A history of the invention of the transistor and where it will lead us,” IEEE Journal of Solid-State Circuits, vol. 32, no. 12, pp. 1858–1865, Dec. 1997.
[3] J. Kilby, “The integrated circuit’s early history,” Proceedings of the IEEE, vol. 88, no. 1, pp. 109–111, Jan. 2000.
[4] G. E. Moore, “Cramming more components onto integrated circuits,” Electronics, vol. 38, no. 8, Apr. 1965.
[5] E. Lee, “Programmable DSPs: A brief overview,” IEEE Micro, vol. 10, no. 5, pp. 14–16, Oct. 1990.
[6] M. Trope, “FPGAs: Past, Present, Future,” Department of Electrical Engineering and Computer Science, University of Kansas, Research paper, May 2004.
[7] F. Faggin, “The making of the first microprocessor,” IEEE Solid-State Circuits Magazine, vol. 1, no. 1, pp. 8–21, 2009.
[8] J. Mitola III, “Software radios - survey, critical evaluation and future directions,” in National Telesystems Conference, 1992. NTC-92, pp. 13/15–13/23, May 1992.
[9] J. Mitola III. HomePage of Joseph Mitola III, KTH Teleinformatics. (June 2007). [Online]. Available: http://web.it.kth.se/~maguire/jmitola/
[10] B. Tavli and W. Heinzelman, Mobile Ad Hoc Networks. Springer Netherlands, 2006,
pp. 31–46.
[11] C. Berrou, A. Glavieux, and P. Thitimajshima, “Near Shannon limit error-correcting
coding and decoding: Turbo-codes. 1,” in IEEE International Conference on Communications, 1993. ICC 93. Geneva. Technical Program, Conference Record, vol. 2,
pp. 1064 –1070 vol.2, May 1993.
[12] A. Kaye and D. George, “Transmission of multiplexed PAM signals over multiple
channel and diversity systems,” IEEE Transactions on Communication Technology,
vol. 18, no. 5, pp. 520 –526, Oct. 1970.
[13] R. Kirby, “History and trends in international radio regulation,” in International Conference on 100 Years of Radio, 1995, pp. 141 –146, Sep. 1995.
[14] D. W. Webbink, “Frequency spectrum deregulation alternatives,” Oct. 1980, FCC
working paper.
[15] J. Mitola III and G. Q. Maguire, Jr., “Cognitive radio: Making software radios more
personal,” IEEE Personal Communications, vol. 6, no. 4, pp. 13–18, Aug. 1999.
[16] A. Kaul, “Software defined radio: The transition from defense to commercial markets,” in 2007 Software Defined Radio Technical Conference and Product Exposition,
Nov. 2007.
[17] FCC Spectrum Policy Task Force, “Report of the spectrum efficiency working group,”
Tech. Rep., Nov. 2002.
[18] L. Berlemann and S. Mangold, Cognitive Radio and Dynamic Spectrum Access. Wiley, 2009.
[19] D. P. Johnson, “Dismounted Urban Tactical Communications Assessment / Urban
Spectrum Management,” in RTO-MP-IST-083 Military Communications with a Special Focus on Tactical Communications for Network Centric Operations, Apr. 2008.
[Online]. Available: ftp://ftp.rta.nato.int/PubFullText/RTO/MP/RTO-MP-IST-083/
Supporting%20Documents/MP-IST-083-10.pps
[20] P. Marshall, “DARPA progress towards affordable, dense, and content focused tactical
edge networks,” in Military Communications Conference, 2008. MILCOM 2008., pp.
1 –7. IEEE, Nov. 2008.
[21] W. Tuttlebee, Software Defined Radio: Enabling Technologies.
2002.
Chichester: Wiley,
[22] United States Government Accountability Office, “Restructured JTRS Program Reduces Risk, but Significant Challenges Remain,” Sep. 2006.
[23] J. Mitola III, “Cognitive Radio An Integrated Agent Architecture for Software Defined Radio,” Doctoral dissertation, Royal Institute of Technology (KTH), SE-164 40
Kista, Sweden, May 2000.
[24] W. Yu, “Competition and Cooperation in Multi-User Communication Environments,”
Ph.D. dissertation, Stanford University, June 2002.
[25] IEEE Xplore Digital Library. [Online]. Available: http://ieeexplore.ieee.org/
[26] SpringerLink. [Online]. Available: http://www.springerlink.com
[27] The ACM Digital Library. [Online]. Available: http://portal.acm.org/dl.cfm
[28] Google scholar beta. [Online]. Available: http://scholar.google.no/
[29] Google. [Online]. Available: http://www.google.no/
[30] W. Tuttlebee, Software Defined Radio: Origins, Drivers and International Perspectives. Chichester: Wiley, 2002.
[31] ——, Software defined radio: Baseband Technologies for 3G Handsets and Basestations. Hoboken, N.J.: Wiley, 2004.
[32] P. B. Kenington, RF and Baseband Techniques for Software Defined Radio. Boston:
Artech House, 2005.
[33] S. Singh, M. Adrat, M. Antweiler, T. Ulversøy, T. M. O. Mjelde, L. Hanssen, H. Özer,
and A. Zümbül, “NATO RTO/IST RTG on SDR: Acquiring and sharing knowledge
for developing SCA-based waveforms on SDRs.” Information Systems Technology
Panel Symposium IST-092/RSY-022 on Military Communications and Networks,
Sep. 2010.
[34] RTG on SDR, “NATO RTO/IST RTG on SDR Final Report,” Tech. Rep., Dec. 2010,
submitted to NATO RTO/IST.
[35] NATO Military Agency For Standardization, “Stanag 4285 (edition 1),” Feb. 1989.
[36] VirginiaTech. OSSIE development site for software-defined radio. (Dec. 2007).
[Online]. Available: http://ossie.wireless.vt.edu/trac
[37] V. S. W. Eide, “Exploiting event-based communication for real-time distributed and
parallel video content analysis,” Ph.D. dissertation, University of Oslo, June 2005.
[38] P. J. Denning, D. E. Comer, D. Gries, M. C. Mulder, A. Tucker, A. J. Turner, and P. R.
Young, “Computing as a discipline,” Communications of the ACM, vol. 32, no. 1, pp.
9–23, 1989.
[39] R. L. Glass, “A structure-based critique of contemporary computing research,”
Journal of Systems and Software, vol. 28, no. 1, pp. 3 – 7, 1995. [Online]. Available: http://www.sciencedirect.com/science/article/B6V0N-3YGV297-K/
2/8e9216c77711e3d85d79fd235b399cf1
[40] V. Ramesh, R. L. Glass, and I. Vessey, “Research in computer science: An
empirical study,” Journal of Systems and Software, vol. 70, no. 1-2, pp. 165
– 176, 2004. [Online]. Available: http://www.sciencedirect.com/science/article/
B6V0N-49N06GS-K/2/d914c450d030acabbdb8ef9e219ef63e
[41] P. J. Fortier and H. E. Michel, Computer Systems Performance Evaluation and Prediction. Amsterdam: Digital Press, 2003.
[42] B. A. Fette, Cognitive Radio Technology.
Amsterdam: Newnes/Elsevier, 2006.
[43] Wireless Innovation Forum. Wireless Innovation Forum. (2010). [Online]. Available:
http://www.wirelessinnovation.org/
[44] Federal Communications Commission, “In the Matter of Facilitating Opportunities
for Flexible, Efficient, and Reliable Spectrum Use Employing Cognitive Radio Technologies, Report and Order,” Mar. 2005, FCC 05-57, ET Docket No. 03-108.
[45] Texas Instruments. Single-chip digital signal processor (DSP) announced.
(Nov. 2009). [Online]. Available: http://www.ti.com/corp/docs/company/history/
interactivetimeline.shtml
[46] B. Davies and T. Davies, “The application of packet switching techniques to Combat
Net Radio,” Proceedings of the IEEE, vol. 75, no. 1, pp. 43–55, Jan. 1987.
[47] Analog Devices. A/D Converters. (Dec. 2009). [Online]. Available: http://www.
analog.com/en/analog-to-digital-converters/ad-converters/products/index.html
[48] Texas Instruments. (Dec. 2009). [Online]. Available: http://www.ti.com/
[49] A. A. Abidi, “The path to the software-defined radio receiver,” IEEE Journal of SolidState Circuits, vol. 42, no. 5, pp. 954–966, May 2007.
[50] A. Gatherer, “Market and technology drivers for cellular infrastructure modem development,” Wireless Personal Communications, vol. 38, no. 1, pp. 43–53, June 2006.
[51] Joint Program Executive Office Joint Tactical Radio System, “JPEO JTRS Announces
’Business Model’ for Certifying JTRS Radios,” Jan. 2007.
[52] Defence Update. Harris, Thales Compete Multi-Billion JTRS Radio Procurement.
(June 2007). [Online]. Available: http://www.defense-update.com/newscast/0607/
news/240607 jtrs.htm
[53] Vecima Networks Inc. Spectrum Signal Processing. (2009). [Online]. Available:
http://www.spectrumsignal.com
[54] Lyrtech Inc. (2009). [Online]. Available: http://www.lyrtech.com
[55] Vanu Inc. Where SOFTWARE Meets the Spectrum. (2010). [Online]. Available:
http://www.vanu.com
[56] Huawei. Huawei Industry First with 3G/2G Software Defined Radio (SDR) Single
RAN Product Launch. (Sep. 2008). [Online]. Available: http://www.huawei.com/
news/view.do?id=5587&cid=42
[57] International Telecommunication Union, “Radio Regulations, Edition of 1998,” Tech.
Rep., 1998.
[58] Norwegian Post and Telecommunications Authority. Post- og teletilsynet. (Oct.
2006). [Online]. Available: http:///www.npt.no
[59] I. F. Akyildiz, W.-Y. Lee, M. C. Vuran, and S. Mohanty, “Next generation/dynamic
spectrum access/cognitive radio wireless networks: A survey,” Computer Networks,
vol. 50, no. 13, pp. 2127–2159, May 2006.
[60] Q. Zhao and B. Sadler, “A survey of dynamic spectrum access,” IEEE Signal Processing Magazine, vol. 24, no. 3, pp. 79–89, May 2007.
[61] J. Mitola, “Software radio architecture: A mathematical perspective,” IEEE Journal
on Selected Areas in Communications, vol. 17, no. 4, pp. 514–538, Apr. 1999.
[Online]. Available: ISI:000080068400002
[62] F. K. Jondral, “Software-defined radio – basics and evolution to cognitive radio,”
EURASIP Journal on Wireless Communications and Networking, vol. 2005, no. 3,
pp. 275–283, Apr. 2005.
[63] A. Shamsizadeh, “A survey of security issues in software defined radio and cognitive
radio,” Isfahan University of Technology, 2009, Graduate student research paper.
[Online]. Available: http://omidi.iut.ac.ir/SDR/2009/88 Ali%20Shamsizadeh SDR
Security%20in%20SDR%20and%20CR/index.htm
[64] J. Giacomoni and D. C. Sicker, “Difficulties in providing certification and assurance
for software defined radios,” in First IEEE International Symposium on New Frontiers
in Dynamic Spectrum Access Networks, 2005. DySPAN 2005., pp. 526–538, Nov.
2005.
[65] W. Lehr, F. Merino, and S. E. Gillett, “Software radio: Implications for wireless
services, industry structure, and public policy,” Massachusetts Institute of Technology
Program on Internet and Telecoms Convergence (http://itc.mit.edu), 2002.
[66] C. R. A. González, C. B. Dietrich, and J. H. Reed, “Understanding the software communications architecture,” IEEE Communications Magazine, vol. 47, no. 9, pp. 50–
57, Sep. 2009.
[67] Y. Zhang, S. Dyer, and N. Bulat, “Strategies and insights into SCA-compliant waveform application development,” in Military Communications Conference, 2006. MILCOM 2006., pp. 1 –7. IEEE, Oct. 2006.
[68] M. Cummings and T. Cooklev, “Tutorial: Software-defined radio technology,” in 25th
International Conference on Computer Design, 2007. ICCD 2007., pp. 103 –104, Oct.
2007.
[69] A. Tribble, “The software defined radio: Fact and fiction,” in 2008 IEEE Radio and
Wireless Symposium, pp. 5 –8, Jan. 2008.
[70] D. Pearson, “SDR (systems defined radio): How do we get there from here?” in
Military Communications Conference, 2001. MILCOM 2001. Communications for
Network-Centric Operations: Creating the Information Force., vol. 1, pp. 571 – 575
vol.1. IEEE, Oct. 2001.
[71] C. Svensson and S. Andersson, “Software defined radio - visions, challenges and
solutions,” in Radio Design in Nanometer Technologies, M. Ismail and D. R.
de Llera González, Eds. Springer, 2006, ch. 3.
[72] A. Rusu and M. Ismail, “Analog-to-digital conversion technologies for software defined radios,” in Radio Design in Nanometer Technologies, M. Ismail and D. R.
de Llera González, Eds. Springer, 2006, ch. 6.
[73] A. Haghighat, “A review on essentials and technical challenges of software defined
radio,” in Military Communications Conference, 2002. MILCOM 2002., vol. 1, pp.
377 – 382. IEEE, Oct. 2002.
[74] S. Bernier, C. Auger, J. Zapata, H. Latour, and M. Michaud-Rancourt, “SCA advanced features - optimizing boot time, memory usage, and middleware communications,” in SDR ’09 Technical Conference and Product Exposition, Dec. 2009.
[75] P. Balister, M. Robert, and J. Reed, “Impact of the use of CORBA for inter-component
communication in SCA based radio,” in SDR ’06 Technical Conference and Product
Exposition, Nov. 2006.
[76] S. Singh, M. Adrat, S. Couturier, M. Antweiler, M. Phisel, and S. Bernier, “SCA
based implementation of STANAG 4285 in a joint effort under the NATO RTO/IST
panel,” in SDR ’08 Technical Conference and Product Exposition, Oct. 2008.
[77] J. O. Neset, “Software defined radio - evaluation of software communications architecture components using open source SCA implementation - embedded,” NTNU Norwegian University of Science and Technology, Student report, Dec. 2007.
[78] Valgrind Developers. Valgrind. [Online]. Available: http://valgrind.org/
[79] G. Abgrall, F. Le Roy, J.-P. Delahaye, J.-F. Diguet, and G. Gogniat, “A comparative
study of two software defined radio platforms,” in SDR ’08 Technical Conference and
Product Exposition, Oct. 2008.
[80] J. Bertrand, J. W. Cruz, B. Majkrzak, and T. Rossano, “CORBA delays in a software-defined radio,” IEEE Communications Magazine, vol. 40, no. 2, pp. 152–155, Feb.
2002.
[81] T. Tsou, P. Balister, and J. Reed, “Latency profiling for SCA software radio,” in SDR
’07 Technical Conference and Product Exposition, Nov. 2007.
[82] P. Navarro, R. Villing, and R. Farrell, “Software defined radio architectures evaluation,” in SDR ’08 Technical Conference and Product Exposition, Oct. 2008.
[83] G. Abgrall, F. Le Roy, J.-P. Diguet, G. Gogniat, and J.-P. Delahaye, “Predictibility
of inter-component latency in a software communications architecture operating environment,” in IEEE International Symposium on Parallel Distributed Processing,
Workshops and PhD Forum (IPDPSW), 2010, pp. 1–8, Apr. 2010.
[84] R. Haines, “Quantification of spectrum use: Spectrum management tools for the
twenty-first century,” in IEEE International Symposium on Electromagnetic Compatibility, 1990, pp. 390–396, Aug. 1990.
[85] S. Ghaheri-Niri and P. Leaves, “Traffic control & dynamic spectrum allocation in
DRiVE,” Nov. 2000, presentation slides.
[86] P. Leaves, J. Huschke, and R. Tafazolli, “A summary of dynamic spectrum allocation
results from DRiVE,” in IST Mobile and Wireless Telecommunications, pp. 245–250,
June 2002.
[87] J. Huschke and R. Keller, “Dynamic frequency spectrum re-allocation US 7,436,788
b2,” 2008, United States Patent, (PCT filed 2002).
[88] M. Buddhikot, P. Kolodzy, S. Miller, K. Ryan, and J. Evans, “DIMSUMnet: New
directions in wireless networking using coordinated dynamic spectrum,” in World of
Wireless Mobile and Multimedia Networks, 2005. WoWMoM 2005., pp. 78–85, June
2005.
[89] V. Brik, E. Rozner, S. Banerjee, and P. Bahl, “DSAP: A protocol for coordinated spectrum access,” in First IEEE International Symposium on New Frontiers in Dynamic
Spectrum Access Networks, 2005. DySPAN 2005., pp. 611–614, Nov. 2005.
[90] O. Ileri, D. Samardzija, and N. Mandayam, “Demand responsive pricing and competitive spectrum allocation via a spectrum server,” in First IEEE International Symposium on New Frontiers in Dynamic Spectrum Access Networks, 2005. DySPAN 2005.,
pp. 194–202, Nov. 2005.
[91] H. Kim, Y. Lee, and S. Yun, “A dynamic spectrum allocation between network operators with priority-based sharing and negotiation,” in IEEE 16th International Symposium on Personal, Indoor and Mobile Radio Communications, 2005. PIMRC 2005.,
vol. 2, pp. 1004–1008, Sep. 2005.
[92] T. Kamakaris, M. M. Buddhikot, and R. Iyer, “A case for coordinated dynamic spectrum access in cellular networks,” in First IEEE International Symposium on New
Frontiers in Dynamic Spectrum Access Networks, 2005. DySPAN 2005., pp. 289–298,
Nov. 2005.
[93] A. P. Subramanian, H. Gupta, S. R. Das, and M. M. Buddhikot, “Fast spectrum allocation in coordinated dynamic spectrum access based cellular networks,” in 2nd IEEE
International Symposium on New Frontiers in Dynamic Spectrum Access Networks,
2007. DySPAN 2007., pp. 320–330, 17-20 April 2007.
[94] A. Subramanian, M. Al-Ayyoub, H. Gupta, S. Das, and M. Buddhikot, “Near-optimal
dynamic spectrum allocation in cellular networks,” in 3rd IEEE Symposium on New
Frontiers in Dynamic Spectrum Access Networks, 2008. DySPAN 2008., pp. 1–11,
Oct. 2008.
[95] L. Kovács and A. Vidács, “Spatio-temporal spectrum management model for dynamic spectrum access networks,” in TAPAS ’06: Proceedings of the first international workshop on Technology and policy for accessing spectrum, p. 10. New York, USA: ACM, Aug. 2006.
[96] L. Kovacs, A. Vidacs, and J. Tapolcai, “Spatio-temporal dynamic spectrum allocation
with interference handling,” in IEEE International Conference on Communications,
2007. ICC ’07., pp. 5575–5580, June 2007.
[97] Roke Manor Research Ltd et al, for Ofcom, “Study into Dynamic Spectrum Access
Summary Report,” Ofcom, Tech. Rep., Mar. 2007.
[98] F. Ge, R. Rangnekar, A. Radhakrishnan, S. Nair, Q. Chen, A. Fayez, Y. Wang, and
C. Bostian, “A cooperative sensing based spectrum broker for dynamic spectrum
access,” in Military Communications Conference, 2009. MILCOM 2009., pp. 1–7.
IEEE, Oct. 2009.
[99] C. Tran, R. Lu, A. Ramirez, C. Phillips, and S. Thai, “Dynamic Spectrum Access: Architectures and implications,” in Military Communications Conference, 2008. MILCOM 2008., pp. 1–7. IEEE, Nov. 2008.
[100] M. Nekovee, “Dynamic spectrum access - concepts and future architectures,” BT
Technology Journal, vol. 24, no. 2, pp. 111–116, Apr. 2006. [Online]. Available:
http://dx.doi.org/10.1007/s10550-006-0047-4
[101] ——, “Dynamic spectrum access with cognitive radios: Future architectures and
research challenges,” in 1st International Conference on Cognitive Radio Oriented
Wireless Networks and Communications, 2006. CROWNCOM 2006., pp. 1–5. IEEE,
June 2006.
[102] B. Chen, H. Huang, Z. Chao, X. Yang, and L. Lei, “Future Architecture
with Dynamic Spectrum Access and Cross-layer Design: An Overview,” 2008.
[Online]. Available: http://www.wireless-world-research.org/fileadmin/sites/default/
files/meetings/Past%20Meetings/2008/WWRF21/Presentations%20and%20Papers/
WG3/papers/WWRF21-WG3-Doc3-FutureArch-DSA.pdf
[103] I. F. Akyildiz, W.-Y. Lee, M. C. Vuran, and S. Mohanty, “A survey on spectrum
management in cognitive radio networks,” IEEE Communications Magazine, vol. 46,
no. 4, pp. 40–48, Apr. 2008.
[104] O. Holland, A. Attar, M. Sooriyabandara, T. Farnham, H. Aghvami, M. Muck,
V. Ivanov, and K. Nolte, “Architectures and protocols for dynamic spectrum sharing
in heterogeneous wireless access networks,” in Heterogeneous Wireless Access
Networks. Springer US, 2009, pp. 1–35, 10.1007/978-0-387-09777-0 3. [Online].
Available: http://dx.doi.org/10.1007/978-0-387-09777-0 3
[105] B. Wang and K. Liu, “Advances in Cognitive Radio Networks: A Survey,” to
appear in IEEE Journal of Selected Topics in Signal Processing. [Online]. Available:
http://terpconnect.umd.edu/~bebewang/Wang JSTSP 2010.pdf
[106] M. Bennis, J.-P. Kermoal, P. Ojanen, J. Lara, S. Abedi, R. Pintenet,
S. Thilakawardana, and R. Tafazolli, “Advanced spectrum functionalities for future
radio networks,” Wireless Personal Communications, vol. 48, pp. 175–191, Jan.
2009, 10.1007/s11277-007-9423-8. [Online]. Available: http://dx.doi.org/10.1007/
s11277-007-9423-8
[107] G. Salami, O. Durowoju, A. Attar, O. Holland, R. Tafazolli, and H. Aghvami, “A
comparison between the centralized and distributed approaches for spectrum management,” IEEE Communications Surveys & Tutorials, April 2010, available from IEEE
Xplore (early access), to appear in the second issue 2011 of IEEE Communications
Surveys & Tutorials.
[108] S. Haykin, “Cognitive Radio: Brain-Empowered Wireless Communications,” IEEE
Journal on Selected Areas in Communications, vol. 23, no. 2, pp. 201–220, Feb.
2005.
[109] J. R. Barry, D. G. Messerschmitt, and E. A. Lee, Digital Communication: Third
Edition. USA: Springer, 2003.
[110] W. Yu, W. Rhee, S. Boyd, and J. M. Cioffi, “Iterative water-filling for Gaussian vector
multiple access channels,” in IEEE International Symposium on Information Theory
(ISIT), p. 322, June 2001.
[111] J. L. Holsinger, “Digital communication over fixed time-continuous channels
with memory - with special application to telephone channels,” MIT Research
Laboratory of Electronics, Tech. Rep., Oct. 1964. [Online]. Available: http:
//hdl.handle.net/1721.1/4395
[112] S. Hayashi and Z.-Q. Luo, “Dynamic spectrum management: When is FDMA sum-rate optimal?” in IEEE International Conference on Acoustics, Speech and Signal
Processing, 2007. ICASSP 2007., vol. 3, pp. III-609–III-612, Apr. 2007.
[113] K. Shum, K.-K. Leung, and C. W. Sung, “Convergence of iterative waterfilling algorithm for Gaussian interference channels,” IEEE Journal on Selected Areas in Communications, vol. 25, no. 6, pp. 1091–1100, Aug. 2007.
[114] R. Zhang, Y.-C. Liang, and S. Cui, “Dynamic resource allocation in cognitive radio networks,” IEEE Signal Processing Magazine, vol. 27, no. 3, pp. 102–114, May
2010.
[115] R. Rajbanshi, A. Wyglinski, and G. Minden, “OFDM-Based cognitive radios
for dynamic spectrum access networks,” in Cognitive Wireless Communication
Networks, E. Hossain and V. Bhargava, Eds. Springer US, Oct. 2007, pp.
165–188, 10.1007/978-0-387-68832-9 6. [Online]. Available: http://dx.doi.org/10.
1007/978-0-387-68832-9 6
[116] omniORB. (Feb. 2008). [Online]. Available: http://omniorb.sourceforge.net
[117] Communications Research Canada. SCARI-Open Downloads. (Oct. 2010). [Online]. Available: http://www.crc.gc.ca/en/html/crc/home/research/satcom/rars/sdr/
products/scari open/downloads
[118] VirginiaTech. OSSIE SCA-based open source software defined radio. (Oct. 2010).
[Online]. Available: http://ossie.wireless.vt.edu
[119] OProfile - A System Profiler for Linux. (Feb. 2008). [Online]. Available:
http://oprofile.sourceforge.net
[120] SYSSTAT. (Feb. 2008). [Online]. Available: http://pagesperso-orange.fr/sebastien.
godard/
[121] C. Li, C. Ding, and K. Shen, “Quantifying the cost of context switch,” in ExpCS ’07:
Proceedings of the 2007 workshop on Experimental computer science, June 2007.
[122] Z. Jianfan, D. Levy, and A. Liu, “Evaluating overhead and predictability of a real-time
CORBA system,” in Proceedings of the 37th Annual Hawaii International Conference
on System Sciences - 2004, p. 8, Jan. 2004.
[123] Objective Interface Systems, Inc. ORBexpress RT. (2010). [Online]. Available:
http://www.ois.com/Products/orbexpress-rt-for-c.html
[124] J. P. Z. Zapata, “SCA ResourceFactory,” Oct. 2010, Email correspondence.
[125] D. Paniscotti and J. Bickle, “SDR signal processing distributive-development approaches,” in 2007 Software Defined Radio Technical Conference and Product Exposition, Nov. 2007.
[126] F. Casalino, G. Middioni, and D. Paniscotti, “Experience report on the use of CORBA
as the sole middleware solution in SCA-based SDR environments,” in SDR’08 Technical Conference and Product Exposition, Oct. 2008.
[127] L. Pucker and J. Holt, “Extending the SCA core framework inside the modem architecture of a software defined radio,” IEEE Communications Magazine, vol. 42, no. 3,
pp. 21–25, Mar. 2004.
[128] T. Ulversøy and T. Maseng, “Coordinated dynamic spectrum access employing peer-to-peer architectures,” Jan. 2009, unpublished (4 pages).
[129] C. Jones, “Bad days for software,” IEEE Spectrum, vol. 35, no. 9, pp. 47–52, Sep.
1998.
[130] T. Schütt, F. Schintke, and A. Reinefeld, “Range queries on structured
overlay networks,” Computer Communications, vol. 31, no. 2, pp. 280–
291, Feb. 2008. [Online]. Available: http://www.sciencedirect.com/science/article/
B6TYP-4PJCYCY-4/1/9c59d2509d6e7bb6da433e81b92c6c62
[131] P. Gu, J. Wang, and H. Cai, “ASAP: An advertisement-based search algorithm for unstructured peer-to-peer systems,” in International Conference on Parallel Processing,
2007.ICPP 2007., p. 8, Sep. 2007.
[132] D. Keil and D. Goldin, “Indirect interaction in environments for multi-agent systems,”
in Environments for Multi-Agent Systems II. Springer Berlin / Heidelberg, Feb. 2006,
pp. 68–87.
[133] Federal Communications Commission, “Office of Engineering and Technology Invites Proposals from Entities Seeking to be Designated TV Band Device Database
Managers,” Nov. 2009, ET Docket No. 04-186.
[134] T. Hoang, M. Skjegstad, T. Maseng, and T. Ulversøy, “FRP: The frequency resource protocol,” in IEEE International Conference on Communication Systems
(IEEE ICCSS 2010), Nov. 2010.
[135] A. B. MacKenzie and L. A. DaSilva, Game Theory for Wireless Engineers. USA: Morgan & Claypool, 2006.
[136] M. van der Schaar and F. Fu, “Spectrum access games and strategic learning in cognitive radio networks for delay-critical applications,” Proceedings of the IEEE, vol. 97,
no. 4, pp. 720–740, Apr. 2009.
[137] A. MacKenzie, J. Reed, P. Athanas, C. Bostian, R. Buehrer, L. DaSilva, S. Ellingson,
Y. Hou, M. Hsiao, J.-M. Park, C. Patterson, S. Raman, and C. da Silva, “Cognitive
radio and networking research at Virginia Tech,” Proceedings of the IEEE, vol. 97,
no. 4, pp. 660–688, Apr. 2009.
[138] S. Chen and A. M. Wyglinski, “Efficient spectrum utilization via cross-layer optimization in distributed cognitive radio networks,” Computer Communications,
vol. 32, no. 18, pp. 1931–1943, Dec. 2009.
[139] A. He, K. K. Bae, T. Newman, J. Gaeddert, K. Kim, R. Menon, L. Morales-Tirado,
J. Neel, Y. Zhao, J. Reed, and W. Tranter, “A survey of artificial intelligence for
cognitive radios,” IEEE Transactions on Vehicular Technology, vol. 59, no. 4, pp.
1578–1592, May 2010.
Part II
Included Papers
Paper A
Software Defined Radio: Challenges and Opportunities
Tore Ulversøy
IEEE Communications Surveys & Tutorials
Vol. 12, No. 4, 2010
(IEEE online journal)
Available online at: http://ieeexplore.ieee.org
Abstract
Software Defined Radio (SDR) may provide flexible, upgradeable and longer-lifetime radio equipment for the military and for civilian wireless communications infrastructure. SDR may also provide more flexible and possibly cheaper multi-standard terminals for end users. It is also important as a convenient base technology for the future context-sensitive, adaptive and learning radio units referred to as cognitive radios. However, SDR also poses many challenges, some of which have caused SDR to evolve more slowly than otherwise anticipated. Transceiver development challenges include size, weight and power issues, such as the required computing capacity, but also SW architectural challenges such as waveform application portability. SDR also has demanding
implications for regulators, security organizations and business developers.
© 2010 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.

DOI: 10.1109/SURV.2010.032910.00019
1 Introduction
The term ’Software Radio’ was coined by Joseph Mitola III to signal the shift from HW-design-dominated radio systems to systems where the major part of the functionality is
defined in software. He described the concept in [1].
Software Defined Radio (SDR) has no single, unified, globally recognized definition.
Slightly different interpretations exist among the actors in the field. Adding to this, a variety of related terms have been proposed and are used to varying degrees. These include Software Based Radio [2], Reconfigurable Radio and Flexible Architecture Radio [3]. The main parameter in the various interpretations and related terms is how flexibly the radio waveform can be changed by changing software (SW), without modifying the SDR Platform (the combination of hardware and operating environment on which the waveform application runs). The ideal, though unrealizable, goal is obviously to be able to communicate at any desired frequency, bandwidth, modulation and data rate by simply loading the appropriate SW. Usually SDR is given a more practical interpretation, implying simply that large parts of the waveform are defined in SW, giving the flexibility to change the waveform within certain bounds given by the actual system. The flexibility is commonly assumed
to extend at least to multi-band and multi-modulation. Examples of specific definitions are
those provided by the Software Defined Radio Forum (SDR Forum) [4] and by the Federal
Communications Commission (FCC) [5].
The evolution towards SDR systems has been driven in part by the evolution of the enabling technologies, first and foremost the DA and AD converters and the Digital Signal
Processors (DSPs), but also that of the General Purpose Processors (GPPs) and the Field
Programmable Gate Arrays (FPGAs). A major driving force has also been the demand
for more flexible and reconfigurable radio communication solutions, in particular from the
military sector. This demand has resulted in several major governmental development programmes, e.g. in the US the SpeakEasy I, the SpeakEasy II [6] and the ongoing Joint Tactical
Radio System (JTRS) [7–9] programme. Examples from Europe are the ”Software Radio
Architecture” Plan d’Etude Amont (PEA) [10, 11] in France, the Terminal Radio Software
(TERSO) programme [12] in Spain, the Finnish Software Radio Programme (FSRP) [13],
the Swedish Common Tactical Radio System (GTRS) [14, 15] and the European Secured
Software Defined Radio Referential (ESSOR) [16] programme. The latter is the result of
cooperation between several European countries under the banner of the European Defence
Agency (EDA).
There are many motivations for utilizing SDR solutions. For the military sector, where
communication systems need a longer service lifetime than in the commercial sector, SDR helps to protect investments by prolonging the useful service life of communication systems. This is facilitated by SDR allowing waveforms to be changed, and/or new waveforms to be loaded, on already acquired SDR equipment. It also allows SDR applications (waveforms) that are already invested in to be ported to new and more capable SDR
platforms. SDR furthermore provides the flexible asset suited for the changing environments
of coalition and Network Centric Operations (NCO).
A major motivation within the commercial communications arena is the rapid evolution of communications standards, which makes SW upgrades of base stations a more attractive solution than costly base station replacement.
Common to both the military and the commercial sector is that SDR opens up a range of possibilities by making existing types of radio applications easier to implement, and by
allowing new types of applications. In particular the computing capacity and the flexibility of the SDR may be exploited to develop Cognitive Radios (CR), context-sensitive and
adaptive units that may also learn from their adaptations. As an example, the SDR unit may
adapt to harsh interference and noise conditions by instantly changing parts of the waveform
processing through loading different SW modules, in order to still maintain adequate bit error rates. The cognitive functionality may also be used for improved spectrum utilization,
e.g. through the coexistence of cognitive systems with legacy systems.
SDR is also beneficial for space applications as it provides the flexibility that will allow deployed satellite communication equipment to be SW upgraded according to advances
in algorithms and communication standards. This will allow communication functionality
changes and multiple uses during the lifetime of the satellite [17].
The exploitation of SDR technology in actual products has evolved more slowly than was anticipated some years ago. As late as 2002 [2], it was predicted that by 2006 SDR in commercial mobile terminals would have reached ”widespread adoption and movement to SDR as baseline design”, something which certainly has not happened. The US government JTRS programme has also, at various times since its inception in 1997, ”experienced cost and schedule overruns and performance shortfalls” [7]. However,
lately there have also been several positive signs and accomplishments within SDR, which indicate that we are getting closer to larger-scale adoption of SDR in commercial products. Examples are Vanu Inc.’s ’Anywave’ base station approved by the FCC [18], the first
’SCA approved with waivers’ military communications product Thales AN/PRC-148 JEM,
and the first ’SCA approved with no waivers’ Harris Falcon III(TM) AN/PRC-152(C). Also
an increasing number of SDR research and prototyping platforms are being offered on the
market, along with SDR development tools.
This tutorial will review the fundamental challenges that SDR imposes on the various
actors within the field, i.e. developers, security organizations, regulators, business managers,
and users. It will further review important past and ongoing work, where available, that has contributed to addressing these challenges. The discussion of each topic includes a summary of the remaining open items and/or projections.
The tutorial will start with SDR SW architecture. As a background to this discussion,
Software Communications Architecture (SCA) is briefly reviewed. Then there is a review
of the challenges and existing/ongoing work within application portability, application development, the underlying middleware platform and alternative architectures.
A fundamental challenge of SDR is to provide the necessary computational capacity to process the waveform applications, in particular the complex and high-data-rate waveforms, and especially for units with strict power and size limitations. The computational requirements and the computing elements available to handle them will be reviewed.
Further, the implications for security, regulation and the radio manufacturer business structure will be discussed, and the remaining challenges and/or future projections will be commented on.
SDR also poses severe challenges in analogue RF hardware design and in the conversion between the analogue and digital domains, particularly in wideband implementations. In order to limit the scope of this tutorial, and as these topics are not unique to SDR alone, they are not discussed here. A recommended source of information on these topics is the recent work by Kenington [3]. The aspects of AD conversion are also excellently treated in [19], which appears in what is considered a landmark special issue on Software
Radio [20]. Additional recommended sources on SDR receiver front-end technology are
[17, 21–24].
2 SDR and the Software Communications Architecture
The most widely used software architecture for SDR is the Software Communications Architecture (SCA). SCA is published by the Joint Program Executive Office (JPEO) for JTRS,
with SDR Forum having assisted the JPEO with the development of SCA. SCA is considered a ’de facto’ standard for the military domain, and has been implemented within the
SDR industry, research organizations and at universities.
SCA together with available SCA-based tools allows designers to build component-based
SDR applications, as assemblies of components and logical devices. The components are
SW processing modules with input and output ports, context dependencies in the form of
processing element dependencies, and with settable properties. The logical devices are abstractions for HW modules that in some way process the information stream.
[Figure A.1: block diagram. Data Source -> FEC Encoder -> Interleaver -> Symbol Mapper and Scrambler -> Symbol to I/Q and Transmission Filter.]

FIGURE A.1: An example application component structure, the TX baseband processing of a relatively simple Stanag 4285 waveform. The processed data is sent in packets from the output port of one component to the input port of the next component.
The component-based approach makes the reuse of parts of applications easier as the
components have clearly defined inputs, outputs and context requirements, and are deployable units. The component-based approach also promotes a separation of roles in the development. Thus, a radio systems engineer could assemble an SDR application based on preprogrammed components without having to be a SW specialist, whereas a signal processing
implementation specialist could concentrate on the processing code of a component.
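As a purely illustrative sketch (the class, port and stage names below are invented for this example and are not taken from the SCA or the Stanag 4285 specification), the packet-by-packet flow between connected components described above can be modelled as a small push-style pipeline:

```python
class Component:
    """Illustrative processing component with one input and one output port
    (invented names; not the SCA component model itself)."""
    def __init__(self, process):
        self.process = process  # per-packet processing function
        self.next = None        # the next component's input, if connected

    def connect(self, next_component):
        self.next = next_component

    def push(self, packet):
        # Process the packet and forward it to the connected component.
        result = self.process(packet)
        return self.next.push(result) if self.next is not None else result

# Hypothetical stand-ins for some of the TX baseband stages.
fec = Component(lambda bits: bits + bits)                   # toy repetition "FEC"
interleaver = Component(lambda bits: bits[::2] + bits[1::2])
mapper = Component(lambda bits: [1 - 2 * b for b in bits])  # BPSK-style mapping

fec.connect(interleaver)
interleaver.connect(mapper)
symbols = fec.push([0, 1, 1, 0, 1, 0, 0, 1])  # 8 data bits in, 16 symbols out
```

In an actual SCA application the connections would be established by framework control code from the XML application description, and the packets would travel over CORBA rather than direct method calls.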
The SCA is a distributed systems architecture, allowing the various parts of applications to run on different processing elements, i.e. each component is deployed on one of
a set of available processors. The communication between the components, and between
components and devices, is based on using the Common Object Request Broker Architecture (CORBA) middleware. For communication with specialized processors, e.g. DSPs or
FPGAs, SCA advises the use of adapters between CORBA and these units.
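The adapter idea can be sketched as follows; the class name and the framing below are invented for illustration and do not reproduce MHAL or any vendor's actual message format:

```python
class DspAdapter:
    """Illustrative adapter between a CORBA-facing input port and a
    device-specific message transport towards a DSP or FPGA."""
    def __init__(self, transport_send):
        self.transport_send = transport_send  # e.g. a driver's write function

    def push_packet(self, payload):
        # Reframe the payload for the specialized processor; an MHAL-style
        # adapter would prepend addressing/routing information here.
        frame = bytes([len(payload) & 0xFF]) + bytes(payload)
        self.transport_send(frame)

# In place of a real driver, capture the emitted frames in a list:
sent_frames = []
adapter = DspAdapter(sent_frames.append)
adapter.push_packet([0x01, 0x02, 0x03])
```

The adapter thus hides the device-specific transport behind the same packet-push interface that the CORBA-connected components use.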
[Figure A.2: layer diagram. System and application components (each exposing ’FC’, ’FS’ and API interfaces, with application components additionally constrained by the ’AEP’) communicate over CORBA, which runs on top of the OS.]

FIGURE A.2: A visualization of the layers of the SCA. ’FC’ is an implementation of the Framework Control Interfaces, and ’FS’ an implementation of the Framework Services Interfaces. ’AEP’ is the Application Environment Profile, which limits the applications’ access to the Operating System (OS).
The SCA defines a protocol and an environment for the application components. SCA
does this by defining a set of interfaces and services, referred to as the Core Framework (CF)
[25], by specifying the information requirements and formats for the Extensible Markup
Language (XML) descriptions for components and applications, termed the ”Domain Profile”, and by specifying underlying middleware and standards. By providing a standard
for instantiation, management, connection of and communication between components, the
SCA contributes to providing portability and reusability of components.
The CF interfaces are grouped in four sets:
• The Base Application Interfaces provide management and control interfaces for the
application components, and they are implemented by these application components.
• The Base Device Interfaces allow management and control of hardware devices
through their software interface, and are implemented by the logical device components.
• The Framework Control Interfaces control the instantiation, management and removal
of software from the system, and these interfaces are implemented by software modules that form part of the system platform.
• The Framework Services Interfaces provide file functions, and are implemented by
software modules that form part of the system platform.
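To make the flavour of these interfaces concrete, the sketch below models a component implementing a few Base Application operations in the spirit of the SCA Resource interface; the real interfaces are defined in CORBA IDL, and the Python names and simplified signatures here are illustrative only:

```python
class Resource:
    """Illustrative analogue of an SCA application component's control
    interface; not the normative CORBA IDL definition."""
    def __init__(self, identifier):
        self.identifier = identifier  # component instance identifier
        self.properties = {}          # settable properties
        self.started = False

    def configure(self, props):
        # Set component properties, e.g. values taken from the Domain Profile.
        self.properties.update(props)

    def query(self, name):
        # Read back a configured property value.
        return self.properties[name]

    def start(self):
        self.started = True

    def stop(self):
        self.started = False

# Framework control code would instantiate, configure and start components:
modem = Resource("tx-modem-1")  # hypothetical identifier
modem.configure({"carrier_offset_hz": 1800})
modem.start()
```

The Framework Control software drives such components generically, without knowing anything about their internal signal processing, which is what makes standardized deployment and management possible.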
SCA is a general architecture, targeted for but not limited to SDR systems. SCA has similarities with other distributed component architectures, e.g. the CORBA Component Model
(CCM). A conceptual difference relative to CCM is the support for system components or
’devices’.
3 SW Architectural Challenges
Since the SCA is the dominant SDR architecture, the SCA-related challenges will be focused
on first. Then the more general SW architectural challenges and alternatives to SCA will be
discussed. The remaining open issues will be highlighted.
3.1 Portability of SDR SCA-Based Applications
A fundamental challenge of SDR is to provide an ideal platform-to-application separation,
such that waveform applications can be moved from one SDR platform to be rebuilt on
another one without having to change or rewrite the application. Such waveform portability is highly desirable, particularly in the military sector, for example in order to achieve
interoperability in coalitions by exchanging waveforms.
SCA contributes to such application portability by providing a standard for deployment
and management of SCA-based applications [25]. It also standardizes the interconnection
and intercommunication both between the components of the application, and between components and system devices. Using the SCA Application Environment Profile (AEP), SCA
also standardizes the minimum subset of operating system capabilities that must be available
for the applications, and hence the limited subset that applications may use.
The SCA compliance of an application is not sufficient to cover all aspects of portability.
Significant pieces that are not standardized by the SCA itself are the APIs to the services
and devices of the system platform (see Figure A.2). Since these are linked to the actual
implementation of the system platform, they are supposed to be standardized per system or
domain, as is clearly pointed out in the SCA 2.2.2 specification [25].
Within JTRS, a number of such APIs have been developed [8]. Although previously not
publicly accessible from the SCA website, 14 APIs were made available in April 2007 [26]
and as of February 2009, 18 were available. Presumably this is not a complete set of APIs.
Also, for some APIs there may be strategic reasons for not releasing them from a particular domain; examples are the security-related APIs.
In order for portability to extend across domains, the APIs to the services and devices
will need to be standardized across domains as well. With the JTRS APIs now being available, these may be one option for such standardization, particularly for military domains.
There are also several other initiatives in this area, including one from the Object Management Group (OMG) Software Based Communications Domain Task Force [27]. Another
example is the ESSOR project, which aims at giving ”European industry the capability to
develop interoperable SDR” [16]. It remains to be seen if ESSOR will develop standards to
be used by a European military domain only, or whether this initiative could also contribute
to providing inter-domain waveform portability.
An alternative and equivalent approach to that of standardizing APIs to system components is providing abstraction layers between the platform and the application components.
An example is a proposal for a ”Transceiver Facility Platform Independent Model” [28].
Another related portability issue is the various alternatives for transport mechanisms for
the communication with components deployed on DSPs and FPGAs. SCA 2.2.2 prescribes
adapters between CORBA and the DSP and FPGA components as the primary means of
communication with these elements. The JTRS has standardized a specific adapter referred
to as the Modem Hardware Abstraction layer (MHAL) for this purpose [8, 26]. Other similar solutions exist, e.g. Spectrum Signal Processing’s ’QuicComm’ [29]. In recent years,
Object Request Broker (ORB) implementations have also been made available on DSPs and
FPGAs, making CORBA communication possible also to these components [30]. The fact
that various messaging protocols are currently used implies, however, that communication
with DSPs and FPGAs will remain a portability issue until one standard or another has
become the de facto standard.
Furthermore there are some minor portability issues related to differences in ORB implementations [31].
Lastly, portability obviously requires that the component code is interpreted correctly on
the platform. This again has two aspects, language compatibility issues and target processor
functionality compatibility. Since SCA is based on CORBA which has support for several
programming languages, using different code languages will be possible as long as the appropriate compilers and libraries are available. However, different processing elements, in
particular different types of DSPs and FPGAs, support different functionalities and features.
One approach is to maintain several component implementations, one for each family of processing elements, at the cost of an overhead of work-hours. The other approach is to define the component functionality in a high-level language, which is compiled to create a correct code image for the actual processing element to be used. Such a compiler may obviously become very complicated, and the resulting target code or image may also be less optimal than target code written specifically for the target processor.
A. Software Defined Radio: Challenges and Opportunities
Further information on portability issues may be found in recent publications, including
API standardization [32], lessons learned from porting a soldier radio waveform [33], SCA
aspects in heterogeneous environments [34] and the trade-off between portability and energy
efficiency of the processing [35].
The portability issues with SCA-based applications are summarized in Table A.1. As is
evident from the table, important challenges remain in this area.
TABLE A.1: A Summary of Portability Issues for SCA-Based Applications

Portability aspect | Standardized through SCA?
Environment and protocol for the installation, instantiation, control, connection and inter-communication of application components | Yes. (But: SCA Security requirements not public.)
Defined allowed Operating System access | Yes, through AEP.
APIs to system units (devices) and system services | No. (SCA states this is to be handled per domain.)
Communication (message transport) with specialized processors (DSPs, FPGAs) | No. Multiple solutions available; JTRS has standardized on MHAL.
ORB | SCA specifies CORBA. There are however some minor differences between ORB implementations.
Programming language | No, but this merely presumes availability of compilers, libraries etc.
Target processor compatibility of the code | No. Different DSPs and FPGAs may support different features.

3.2 Challenges related to SCA Application Development
For the traditional communications equipment design engineer, with a background in communications or radio engineering rather than in distributed software systems, SCA may appear challenging to learn and understand. Even for embedded systems engineers without a CORBA and Object-Oriented Programming background, according to [36] ’it could take several months to fully understand SCA’.
SCA tools help abstract away some of the difficulties in SCA. Commercial tools are
available from various sources, e.g. Zeligsoft [37], PrismTech [38] and the Communications Research Centre Canada (CRC) [39], and there is also a university open-source initiative, the OSSIE Waveform Developer [40] from Virginia Tech. While defining SCA components manually is tedious work involving a lot of XML and CORBA details, the tools allow the SDR
designers to define the components through user-friendly tool interfaces. The tools also allow applications to be formed by making connections between the various components, and
between the components and the devices of the platform. Still, even with the tools being of significant help in the development process, experience from, for example, SCA-based development efforts within our own organization shows that detailed SCA knowledge is still needed, and that in a starting phase a lot of time is spent on non-signal-processing issues, particularly on a heterogeneous platform.
The tools typically generate the necessary XML and the SCA infrastructure part of the
components [41], while the functional processing code needs to be added by the designer, either coded manually or using her/his favourite tools. A more unified higher-level design approach could possibly improve productivity. An approach where the functional skeleton
code is imported into a Unified Modelling Language (UML) to allow higher abstraction level
modelling of the functional behaviour, is described in [41]. It is envisioned that SDR-design
will increasingly be performed at higher abstraction levels, eventually using fully integrated
Model-Driven Development (MDD) [42] tools with automatic transformation from model
level to any specific heterogeneous platform.
In summary, a further enhancement of the efficiency of designing SCA-based applications, as well as a general availability of MDD tools with fully automated conversion to code level for any given HW platform, are important remaining challenges.
3.3 CORBA Related Challenges
SCA mandates CORBA as its middleware platform. The use of CORBA, however, has
known challenges in the form of implications on communication throughput, latency and
latency variation, as well as an overhead of consumed computation and memory resources.
Another issue is that CORBA has lost its popularity in some application domains, which
naturally raises the question of whether an alternative middleware is also needed for SDR.
Throughput is a function of both CORBA and the underlying transport used by CORBA. In
[43] the throughput of CORBA one-way invocations has been measured using a TAO 1.2 ORB, TCP/IP and 100 Mbit/s Ethernet, and compared to using TCP/IP socket programming directly. The results show that the CORBA throughput is highly dependent on message size. With a message size of 64 bytes the CORBA TAO throughput was only a few Mbit/s, whereas with TCP/IP socket programming it was above 90 Mbit/s. For message sizes above 8K bytes the throughput came close to that of using socket programming directly. These results
show that avoiding too small packet sizes in SCA-based applications is important in order
to keep the throughput optimized.
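The shape of these measurements is consistent with a simple first-order model: each invocation pays a fixed software overhead (marshalling, demultiplexing) plus the wire time of the payload. The 100 µs per-invocation cost and 100 Mbit/s link rate below are illustrative assumptions, not the measured TAO figures:

```python
def throughput_mbit(msg_bytes, overhead_s, link_mbit=100.0):
    """Effective one-way invocation throughput in Mbit/s for a given
    message size: fixed per-invocation overhead plus wire time."""
    wire_s = msg_bytes * 8 / (link_mbit * 1e6)
    return msg_bytes * 8 / (overhead_s + wire_s) / 1e6

# Assumed fixed cost of 100 microseconds per invocation (illustrative only).
small = throughput_mbit(64, 100e-6)     # tiny messages: overhead dominates
large = throughput_mbit(8192, 100e-6)   # large messages: wire time dominates
print(f"64 B: {small:.1f} Mbit/s, 8 KB: {large:.1f} Mbit/s")
```

Even with these rough assumptions, the model reproduces the qualitative behaviour: a few Mbit/s at 64 bytes, approaching the link rate at 8 KB.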
Where the throughput is limited mainly by the underlying transport, other transport
mechanisms than TCP/IP may be used with CORBA. ”CORBA over RapidIO” [44] and
CORBA over a PCI bus [45] are recently described examples.
Latency implies processing delays in the SDR, and the tolerable level is application and
waveform dependent. As with throughput, latency is a function of both CORBA and the
underlying transport. Average latency tends to show a linear relation with message size
[44, 46], and with a significant non-zero latency for message size 0. As an example, latency
with CORBA over a RapidIO transport between GPPs [44] is measured at 114 µs for size 32
bytes and 180 µs for 4096 bytes. Latency variation may be reduced [47] by using Real-Time
CORBA features.
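The two published points are enough to parameterize the linear model, including the extrapolated fixed cost at message size zero; the fit below is a sketch, not a claim about other transports:

```python
def fitted_latency(size_bytes):
    """Linear latency model through the two published points for CORBA
    over RapidIO between GPPs: 114 us at 32 B and 180 us at 4096 B."""
    s1, l1 = 32, 114e-6
    s2, l2 = 4096, 180e-6
    slope = (l2 - l1) / (s2 - s1)       # seconds per byte
    return l1 + slope * (size_bytes - s1)

base = fitted_latency(0)  # extrapolated fixed per-transfer cost, just under 114 us
```

The significant non-zero intercept is the fixed per-transfer cost the text refers to; the per-byte slope is only about 16 ns/byte.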
Measurements of CORBA computation overhead for a GPP-system with the Ossie CF
[40] and OmniORB have been provided in [48]. Figure A.3 shows an example of how the computation overhead increases when an application is split into more and more components; the overhead in this case is dominated by a CORBA-related overhead [48]. The
application waveform was a Stanag 4285 TX base band waveform running at an increased
symbol rate of 25600 symbols / sec, with frames of 256 symbols being processed at a time.
[Figure A.3: three block diagrams of the Stanag 4285 TX waveform: implemented as a single SCA component (CPU: 5.5%); split into Data Source, FEC Encoder, Interleaver, Symbol Mapper & Scrambler, Symbol to I/Q & TX Filter and Float to Fixed Converter components feeding a Data Sink (CPU: 11.2%); and the same chain with four additional Forwarder components inserted (CPU: 16.6%).]
F IGURE A.3: Processor workload (user applications + system) in % when running a Stanag 4285 TX
waveform, at an increased symbol rate of 25600 symbols /sec, using the Ossie CF on a GPP, and with
having the waveform application implemented as one SCA component (top), 6 components (middle)
and 6 components with 4 additional (no-functionality) forwarding components (bottom). With the
useful signal processing being identical in all three cases, the processor workload is seen to increase
significantly as the processing is split into more components.
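A rough per-connection cost can be read out of these measurements, under the simplifying assumption that the extra load scales with the number of inter-component hops on the data path:

```python
# CPU load (percent, user + system) from [48] for identical signal
# processing split into 1, 6 and 6+4 SCA components.
cpu_load = {"1 component": 5.5, "6 components": 11.2, "10 components": 16.6}

# Going from 1 to 6 components introduces 5 inter-component connections;
# the 4 forwarders add 4 further hops on the data path.
per_hop_split = (cpu_load["6 components"] - cpu_load["1 component"]) / 5
per_hop_fwd = (cpu_load["10 components"] - cpu_load["6 components"]) / 4
print(round(per_hop_split, 2), round(per_hop_fwd, 2))
```

Both estimates land in the vicinity of 1 to 1.5 CPU percentage points per connection at this frame rate, i.e. each CORBA hop costs roughly a quarter of the useful signal processing in the single-component case.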
The memory footprint of a general full-feature ORB may be significant [49], but slim implementations have been made that need less than 100 kBytes of memory [44, 50].
In order to eliminate the CORBA latency, throughput and footprint implications, and in particular when the processing is done on DSPs and FPGAs, data transfers in many current SDR systems are carried out over dedicated high-speed connections with low-level formatting [50, 51] instead of through CORBA. A downside of this approach is that it makes portability more difficult, unless the high-speed connections and formatting are standardized. Adapters may be used for control and data from CORBA-enabled processors.
While CORBA used to be limited to GPP processors only, ORBs for specialized processing elements such as DSPs [50] and FPGAs [30, 44, 50, 52] have been developed in
recent years. The ORB on an FPGA may be put in an embedded microprocessor [52], or (as
a CORBA subset) directly implemented at native gate level [44, 50], where the latter has a
processing speed advantage. This theoretically facilitates CORBA communication between
all typical processing elements on an SDR platform, which is excellent for portability. The
downside of the approach is the amount of resources occupied by these ORBs on the processing elements, along with the latency and throughput implications. For FPGAs, latency numbers published in [44], and statements in [50] that an FPGA native level ORB may process a message in ’a few hundred nanoseconds’, indicate that FPGA native level ORBs can
now be made very effective in terms of performance, but further public domain results are
needed on this subject.
CORBA retains its popularity in embedded and real-time applications, but has become less popular in general business-oriented applications. This naturally raises the question of whether there are other likely candidates to take over from CORBA also in SCA and SDR
applications. In [53], CORBA has been compared to two main competitors in the business-oriented domain, Enterprise Java Beans (EJB) and Web Services. CORBA was found to be
the most mature and better performing technology, 7 times faster than Web Services in a
specific single-client evaluation test, but also by far the most complex one. Web Services
was the worst performer but the simplest one and the one that tackled Internet applications
best. Web Services is popular in this domain, which illustrates that CORBA’s popularity has diminished not because it is outcompeted on performance, but because other technologies meet a weighted set of requirements for this type of application better.
CORBA’s standing in this domain is thus not directly transferable to the SDR domain, as
the SDR domain has more focus on performance issues.
The Data Distribution Service (DDS) has been suggested as an alternative middleware in
an SCA context. DDS is an OMG-managed standard for publish-subscribe communication
for real-time and embedded systems [54]. DDS belongs to the group of Message-Oriented
Middleware (MOM). MOM provides a looser coupling between senders and receivers than CORBA does: messages are sent to an intermediary layer from which the message is received by the addressee [55].
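DDS itself has a rich API (topics, QoS policies, discovery); the following is only a toy sketch of the publish-subscribe decoupling itself, with made-up topic and field names, to contrast with a direct CORBA object invocation:

```python
class Broker:
    """Toy topic-based intermediary: publisher and subscriber share only
    a topic string, never a direct object reference."""

    def __init__(self):
        self._subs = {}

    def subscribe(self, topic, callback):
        self._subs.setdefault(topic, []).append(callback)

    def publish(self, topic, message):
        # Delivery goes via the intermediary layer, not sender-to-receiver.
        for callback in self._subs.get(topic, []):
            callback(message)

broker = Broker()
received = []
broker.subscribe("spectrum/occupancy", received.append)
broker.publish("spectrum/occupancy", {"channel": 7, "busy": True})
```

The publisher neither knows nor cares how many subscribers exist, which is the looser coupling referred to above.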
In summary, there are exciting ongoing CORBA activities, such as enabling ORBs to
work with fast transports and new ORBs for FPGAs. In the near term, it is anticipated that
this migration of CORBA onto specialized processors and faster transports will continue, but
that low-level non-CORBA data connections will still be used where they are advantageous.
So far there is no clear path for a middleware that is less complex yet better performing to potentially take over the role of CORBA in SDR.
3.4 SCA Challenges and Alternative Architectures
Several technical and complexity-related SCA challenges have been reviewed in the previous subsections. A further, political, argument against SCA is that it is not an open standard, as it is directly managed under the supervision of the JPEO. With these issues as a background, it is interesting to explore the alternatives.
A closely related alternative architecture specification for SDR, and derived from the
SCA, is OMG’s “PIM and PSM for Software Radio Components Specification” [56]. Its
current Platform Specific Model (PSM) utilizes CORBA-interfaces [56], but the division in
a Platform Independent Model (PIM) and a PSM makes it easier to substitute CORBA with
some other middleware, if more suitable middleware platforms were to emerge in the future.
OMG’s standards are open ones also in the sense that all members have an equal vote on the
final content of a standard. OMG’s specification is used by the WINTSEC project [57] in
Europe. It has been put forward as a promising candidate for future use in the commercial
domain [58].
Of particular interest for resource-constrained systems is NASA’s ’Space Telecommunication Radio System’ (STRS) [59–61] architecture. Electronic devices used in space require
radiation hardening [59], and processors are hence slower than terrestrial equivalents, which places further requirements for reduced resource consumption on the application and runtime
environment. STRS has many characteristics in common with SCA, such as the separation of
waveform applications from hardware, but there are also differences. No particular communication mechanism is described, i.e. CORBA is neither mandated nor precluded. Likewise,
an XML parser is not part of the STRS infrastructure; XML files may instead be pre-processed prior to deployment [59]. STRS does not have the notion of ports, but rather optional source and sink APIs [60]. The standard is a NASA-managed one, but it is influenced through collaboration with OMG and the SDR Forum [61]. An open-source implementation, “Open Space
Radio” [62], has been made available by Virginia Tech.
The GNU radio architecture [63] is an open-source initiative, where the signal processing is carried out on GPP computers. GNU radio is adapted to the Universal Software Radio
Peripheral (USRP) which converts between base band and RF signals. Radio applications
are formed as graphs where the vertices represent signal processing blocks and the edges
are the data flow, and where the blocks have input and output ports. The signal processing blocks are written in C++ and the graph is connected using the Python programming
language. A comparison between GNU radio and an OSSIE SCA-based system has been published recently [64].
Since part of the features of MOM fit well with SDR’s needs, there could be a potential for MOM-based architectures as future alternatives for SDR; however, this remains to be demonstrated.
With the evolution towards cognitive radios, which requires the radio to have reasoning capability and adaptivity, there will probably be a need for architectural features beyond the present SCA. As an example, with cognitive radios it is beneficial to have convenient framework functions for swapping a component into or out of a running application in close to real-time. Also, although the cognitive functionality itself, e.g. adaptation of the waveform application to external conditions, may be implemented as application components, it
may be beneficial to partly support this functionality through middleware. An example of a
middleware-based approach to system self-adaptation is provided in [65].
Concluding on the outlook for SDR architectures, it is expected that the SCA will remain a dominating architecture in the military segment, due to its momentum and the high
importance of portability in this domain. In the commercial civilian segment, where there
is less focus on portability and more on hardware cost and low power consumption, it is expected that a significant portion of designs will use dedicated and proprietary lighter-weight
architectures. In a longer time perspective, with decreasing hardware cost and increasing
performance, it is expected that open and standardized architectures such as the OMG one
will gain wider acceptance in this sector.
4 Challenges and Opportunities Related to Computational Requirements of SDR

4.1 Computational Requirements
A fundamental challenge with SDR is how to achieve sufficient computational capacity,
in particular for processing wide-band high bit rate waveforms, within acceptable size and
weight factors, within acceptable unit costs, and with acceptable power consumption.
This is particularly challenging for small handheld units, e.g. multi-mode terminals. The
power consumption must be below certain limits to keep the battery discharge time within
acceptable limits, and for the smallest handheld units there is also the issue of not letting the surface temperature of the device become unpleasantly high for the user.
For base stations like cellular network infrastructure stations, and for vehicle-mounted stations, the power, size and weight factors are easier to accommodate; however, performance versus these parameters and cost may still be challenging for complex high bit rate waveforms.
SDR applications perform processing of the various stages of receive and transmit signals, but they also perform protocol handling, application control activities, user interaction
and more. Conceptually, as an abstraction, we can consider SDR applications to consist of
two main groups of components, (1) Data Processing Components (DPCs) and (2) Event
Driven, Administrative and Control Components (EDACCs). DPCs typically have deterministic behaviour, in the form of processing a package of data according to its defined
algorithm. DPCs typically also have a high degree of inherent parallelism that may be
exploited, an example being a Finite Impulse Response (FIR) filter where a number of additions and multiplications may be carried out in parallel, and the deterministic behaviour also
allows low-level optimization of implementations, see, for example [66]. EDACCs depend
on events, on data content or user interaction, or perform various administrative and control
tasks and are less predictable in their path of execution. Also they typically have far less
inherent parallelism.
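The FIR example can make the DPC parallelism concrete. The scalar sketch below simply marks which operations are independent; on parallel hardware the marked multiplications could all issue in the same cycle:

```python
def fir_output(window, taps):
    """One output sample of an N-tap FIR filter over the most recent N
    input samples: y = sum over k of h[k] * x[n-k]."""
    # The N multiplications are mutually independent: on parallel
    # hardware they can all be issued simultaneously.
    products = [h * x for h, x in zip(taps, window)]
    # The final summation is a reduction, which parallel hardware can
    # evaluate as a log-depth adder tree rather than a serial chain.
    return sum(products)

# 4-tap moving average over the last four samples.
y = fir_output([1.0, 2.0, 3.0, 4.0], [0.25, 0.25, 0.25, 0.25])
```

The deterministic, data-independent control flow is equally important: the same multiply-accumulate pattern executes for every frame, which is what enables the low-level optimization referred to above.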
The SDR components may to a large degree run in parallel, e.g. a decimator component
may run in parallel with a filter component and a Fast Fourier Transform (FFT) component,
since they work on different stages of the processed data. The Software Communications
Architecture (SCA) facilitates this type of parallel processing as it is a distributed systems
architecture, where processes may run on several processing elements while exchanging
processed data. A good exploitation of this parallelism depends on a well devised component
structure of the waveform application, along with optimized deployment of the components
on the available processing elements.
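The benefit of this pipeline parallelism can be sketched with a simple throughput model; the per-frame stage times below are assumed for illustration, not measured:

```python
def pipeline_rate(stage_seconds):
    """Frames/s when each component runs on its own processing element:
    once the pipeline is full, the slowest stage sets the output rate."""
    return 1.0 / max(stage_seconds)

def serial_rate(stage_seconds):
    """Frames/s when the same components are time-sliced on one element."""
    return 1.0 / sum(stage_seconds)

# Assumed per-frame processing times for e.g. decimator, filter and FFT.
stages = [2e-3, 5e-3, 3e-3]
speedup = pipeline_rate(stages) / serial_rate(stages)   # 2x in this example
```

The example also shows why a well devised component structure matters: the speedup is limited by the slowest stage, so unevenly balanced components waste processing elements.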
With complex waveforms, the DPCs will be the components that require the most computing capacity, and the ones that drive the processing requirements of the SDR computing
platform. Table A.2 lists some computational complexity numbers for some key algorithms
of the 802.11a waveform at 24 Mbps, as provided in [67]. The calculated complexity numbers are in the form of gigacycles per second referenced to an Alpha GPP. The complexity
is seen to be overwhelming for a single such GPP, as the required cycle rate is far above achievable clock rates.
TABLE A.2: Computational complexity for some key algorithms of 802.11a

802.11a signal processing operation at 24 Mbps | Gigacycles per second, Alpha GPP
FFT | 15.6
FIR (Rx) | 6.08
Viterbi decoder | 35.0
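The scale of the mismatch is easy to quantify from these numbers; the 3 GHz single-GPP clock below is an assumed, optimistic figure:

```python
# Gigacycles per second (referenced to an Alpha GPP) for key 802.11a
# operations at 24 Mbps, from Table A.2.
gigacycles = {"FFT": 15.6, "FIR (Rx)": 6.08, "Viterbi decoder": 35.0}

total = sum(gigacycles.values())      # required cycle rate, ~56.7 Gcycles/s
gpp_clock_ghz = 3.0                   # assumed optimistic single-GPP clock
cores_needed = total / gpp_clock_ghz  # on the order of 19 such GPPs
```

Even ignoring everything outside these three blocks, a single GPP falls short by more than an order of magnitude, which is what motivates the parallel and specialized processing options reviewed next.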
4.2 Processing Options
There is a large variety of available processing elements, each with their associated strong
and weak points. For the DPCs, processing elements that are able to exploit regular structures, deterministic flows of instructions and internal parallelism will be beneficial from a
performance point of view. In the following some of the most important processing alternatives will be reviewed. The alternatives will be listed according to reconfiguration time,
starting with the least configurable options and proceeding to the real-time configurable,
as the ability of SDRs to be reconfigured or reloaded with new waveform components or
applications is one of its most essential properties. For some SDR applications, it will be
sufficient to be able to reconfigure the unit at a maintenance site, and it will not be a problem
if it takes a few minutes to load a new application. For other applications, reconfiguration
will need to happen while switching from one service, network or waveform standard to
another, e.g. while switching from GSM to WiFi, and the reconfiguration should then typically be done in less than a few tenths of a second. For yet other applications, e.g. a fully
context-adaptive SDR, reconfiguration will need to be done in real-time without disturbing
any operation of the radio system.
4.2.1 Static Processing Elements and Tailored Functional Arrays
In a non-SDR device, the computationally demanding and often highly parallel parts of an
algorithm would typically be implemented as logical circuitry in an Application-Specific
Integrated Circuit (ASIC). This is often regarded as the optimum solution for computation efficiency and power consumption, and typically serves as a reference against which the reconfigurable solutions may be compared. ASIC solutions may provide low unit
costs for very high production volumes, but with high development costs.
ASIC implementations are static and as such not usable in an SDR. SDR approaches
that focus on the advantages of ASIC implementations, and on the fact that waveforms tend
to have common modules, have however been suggested. It has been pointed out [68] that CMOS ASIC devices with more total logic than alternative Field Programmable Gate Arrays (FPGAs) have significantly lower quiescent and dynamic power consumption. Taking this into account, it is further suggested [68] that it is beneficial for any design to evaluate the waveforms and determine which functions are common across waveforms; these could then be hosted in an ASIC to allow a low-power implementation.
Related suggestions are also discussed in [69], where the processing platform is constructed of interconnected application-domain tailored functional units.
Along the same line, some commercial signal processors include coprocessors specifically tailored for particular functions [70].
The functional units may be parameterized to adapt to the specific waveform application. By changing parameters and switching functional units in and out, fast reconfiguration
within the solution space provided by the functional units is available.
It can be argued that such ASIC-hosted modules or tailored functional units will have
a negative impact on portability of waveform applications, limiting portability to platforms
that have the necessary ASIC implementations or functional units. Against this it can be argued that, by having alternative software implementations of the same functionality for more general processing elements, and by allowing the deployment manager to make intelligent decisions on whether to utilize the ASIC-hosted modules, the tailored functional units, or the more general processing elements, portability will still be achieved. Still, for part of the SDR
community, SDR platforms with ASIC-hosted processing modules will not be considered
true SDR ones.
4.2.2 Reconfigurable Processing Elements
The Field Programmable Gate Array (FPGA) is the reconfigurable alternative to the ASIC.
At the expense of higher power consumption and circuit area than the corresponding ASIC
solution, an FPGA can be field-programmed with the specific code needed for the specific
waveform application. Reconfiguration times may be as low as a fraction of a second, or even a few milliseconds, and hence may allow the SDR unit to be reconfigured to connect to a network via a different waveform standard, for example.
FPGAs have become computationally powerful circuits, and come in many variants. An
additional advantage of the FPGAs is the rich availability of toolsets. Further, the number of designs done for FPGAs, and the amount of know-how about this type of design in typical electronics development organizations, make FPGA designs easily planned development tasks with low or moderate risk. Also, compilers are being promoted that make it possible to generate FPGA code directly from Matlab or Simulink, or directly from C code. This type of compiler can be used for applications such as accelerating bottleneck pieces of C code running on a GPP or DSP, by converting them to FPGA code. An example is Altera’s Nios II C2H compiler, which is described in [71].
4.2.3 Fast Reconfigurable Units
Configurable Computing Machines (CCM) offer shorter reconfiguration times than FPGAs,
and for some types close-to real-time reconfiguration. CCMs are ’customizable FPGAs with
a coarser granularity in its fundamental composition that is better suited for signal processing
or wireless applications’ [72]. CCMs have application-domain tailored processing units,
connected via a highly flexible and fast reconfigurable fabric.
There is a huge variety of proposed CCMs, both academic initiatives and commercial
products. [72] and [3] provide overviews of different CCMs.
CCMs may seem ideally suited for SDRs that need high performance and fast reconfiguration. A disadvantage, however, is the diversity of approaches, which makes using them very much a unique effort for each type. This also reduces the availability of SW tools for programming them.
4.2.4 Real-Time Reconfigurable Units
Microprocessor systems are processing alternatives that provide full real-time programmability. Year after year, processors have shown remarkable performance increases. Whereas
clock rate increases have become more difficult due to technological barriers, and also less desirable because of power consumption and power density concerns, the average number
of instructions processed per clock cycle has increased by various means. This is often the
result of having more features operating in parallel within the processor. This has advanced
into multiple-core processors, e.g. multi-core GPPs, multi-core DSPs and Massively Parallel Processor Arrays. Notably, the trend towards parallel processing within microprocessor
technology fits well with the characteristics of DPC SDR components.
While GPPs are designed for good average performance for a wide variety of program
sources, DSPs have features specifically targeted for digital signal processing, e.g. combined
multiply-accumulate operations, and including features to exploit parallelism [73]. There
are DSPs that are optimized for performance, and others that are optimized for low power
consumption, e.g. for battery-driven applications.
Most multi-core processors may be classified as Single Instruction Multiple Data
(SIMD), Multiple Instruction Multiple Data (MIMD), or as a combination of the two.
SIMD units have a single instruction stream, i.e. they execute a single program and each
processor is constrained to executing the same instruction. They operate on multiple data
streams at a time, i.e. each processor may operate on a different set of data. They are very
well suited for algorithms where there is high data parallelism, i.e. a number of identical
operations that can or need to be performed at the same point in time, on multiple data sets.
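A scalar emulation makes the SIMD execution model concrete: one instruction stream, with the same operation applied lane-wise to multiple data elements (a toy sketch, not any particular processor's instruction set):

```python
def simd_step(acc_lanes, data_lanes, coeff):
    """One SIMD instruction: the same multiply-accumulate, with the same
    coefficient, applied to every lane's data element."""
    return [acc + coeff * x for acc, x in zip(acc_lanes, data_lanes)]

# Four lanes process four data streams under a single instruction stream.
acc = [0.0, 0.0, 0.0, 0.0]
acc = simd_step(acc, [1.0, 2.0, 3.0, 4.0], 0.5)
acc = simd_step(acc, [4.0, 3.0, 2.0, 1.0], 0.25)
```

On real SIMD hardware each call would be a single vector instruction; the constraint, as stated above, is that every lane must execute the same operation in each step.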
An example of a unit that has several SIMD processing elements is the Cell processor
[74]. The peak processing power is quoted at an impressive > 256 GFlops [75]; however, the corresponding power consumption is not stated, and in a chart comparison with other processors [67] it is located at approximately 50 W. Although lower-power versions have been released [76, 77], these presumably still have a somewhat high power consumption when considering battery-powered applications.
Examples of units that have SIMD processing elements and that are targeted for low-power units are the NXP Embedded Vector Processor (EVP) [78, 79], the Sandbridge SB3011 [80] and the Icera Livanto [81]. An example university approach is the SODA architecture [67].
It is argued that a solution to the performance/power challenge of the fourth-generation communication standards is an increased number of cores, with each core including a very wide SIMD processor, hence exploiting the parallelism of the algorithms [82].
MIMD units have multiple instruction streams, and operate on multiple data streams at
a time. This is hence a more general and flexible architecture than SIMD, allowing separate
programs to be executed on each core. This allows problems that exhibit some parallelism,
but where the parallelism does not have a regular structure in the form mentioned for SIMD
above, to be sped up through parallel execution.
Graphical Processing Units (GPUs) may also be used for accelerating or running signal
processing algorithms [83]. Recent GPUs for the gaming market are powerful computing
units with a number of parallel processors, e.g. the nVidia 8800 GTX has 128 floating
point units running at 1.35 GHz, and an observed 330 GFlop/sec [83]. As these processors are targeted specifically at graphics-related processing, they have a higher user threshold for general-purpose or signal-related processing than a GPP. However, due to the attractive prices and high computational capacity of GPUs, they may become attractive in low-budget-type PC-based SDR solutions, as accelerators for processing-intensive blocks. GPUs may also have importance for SDR as a high-volume massively parallel computation technology that may be specifically tailored to SDR applications.
4.2.5 Processing Elements, Concluding Comments
With the wide variety of types of processing elements available, how do they compare and
which ones should be preferred?
The answer depends on the individual weighting of a number of factors, and hence no
easy answer is available. In addition to the processing performance, the factors of reconfigurability time, power consumption, size, weight, cost, suitability for the actual processing load, and probably many more, need to be taken into account to make proper choices.
Unfortunately there is no unified scale by which the processing capacity of the different
types of processing elements can be judged. For the various processing elements there
are different figures of merit that are promoted, examples being MIPS, MOPS, MMACS,
MFLOPS and so on. An interesting approach is described in [84] where various elements
(ASIC, FPGA, DSP, GPP) are evaluated and comparative charts provided. A specific FFT
is used as a benchmark algorithm and Real-Time Bandwidth (RTBW) as a scale, defined as
the maximum equivalent analogue bandwidth that the unit is able to process with the given
algorithm, without losing any input information. Still, these types of comparisons are only valid for the particular benchmark algorithm. Also, they do not indicate the capacity of the unit for running other algorithms in parallel, e.g. in the FPGA case.
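One possible reading of the RTBW definition, with assumed example numbers (not taken from [84]), is the following:

```python
def rtbw_hz(fft_size, exec_time_s):
    """Real-Time Bandwidth for an FFT benchmark: the element sustains
    fft_size / exec_time samples per second without losing input, which
    corresponds to an equivalent analogue (Nyquist) bandwidth of half
    the sample rate."""
    sample_rate = fft_size / exec_time_s
    return sample_rate / 2.0

# Assumed example: a 1024-point FFT completed in 10 microseconds.
bw = rtbw_hz(1024, 10e-6)   # 51.2 MHz equivalent analogue bandwidth
```

The appeal of the scale is that it is directly meaningful for radio design: it answers how wide a signal the element could process in real time with the benchmark algorithm.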
Guidance may also be obtained from commercially available benchmark analysis reports
from Berkeley Design Technology, Inc. (BDTI) [85].
In order to give an indication of what may be achieved with the different types of elements, Table A.3 provides some examples of implementation results from various published
work [79, 80, 86, 87].
4.3 Requirements versus Capacity, the Way Ahead
With the continuous improvement of processing elements, will having adequate processing
power in handheld SDR terminals soon be a non-issue? To make some projections, and
inspired by [88], the data rate evolution of mobile cellular systems [89] has been plotted
against the DSP performance evolution of Texas Instruments (TI) DSPs [70, 90], see Figure
A.4. Estimates of the single channel processing requirements of 2G/3G mobile systems [2],
and for a particular 4G case (conditions in [91]) are also plotted.
For this example, the download throughput data rate of mobile cellular terminals increases at a higher exponential rate than the DSP processing capacity in MIPS. The required processing rate increases at an even higher pace, due to the algorithmic complexity increasing with each generation.
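The effect of two diverging exponential trends can be illustrated with a few lines of code. The growth rates and starting values below are hypothetical round numbers chosen for illustration, not the measured series of Figure A.4:

```python
# Illustrative compound-growth comparison (figures are hypothetical,
# not the measured series of Figure A.4): if terminal data rates grow
# ~55% per year while DSP MIPS grow ~40% per year, the processing
# headroom (MIPS available per kbps) shrinks every year.

def project(initial: float, annual_growth: float, years: int) -> float:
    """Project a quantity forward under constant exponential growth."""
    return initial * (1.0 + annual_growth) ** years

rate_2000, mips_2000 = 384.0, 4800.0   # kbps and MIPS at year 0 (hypothetical)
for year in (0, 5, 10):
    r = project(rate_2000, 0.55, year)
    m = project(mips_2000, 0.40, year)
    print(f"year {2000 + year}: {r:9.0f} kbps vs {m:9.0f} MIPS "
          f"(MIPS per kbps: {m / r:5.2f})")
```

Even though both series grow exponentially, the ratio of MIPS to kbps falls steadily, which is the quantitative content of the claim above.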
While the progress of processing element capacity continuously makes it easier to meet the capacity requirements of today's existing waveforms, the rapid system evolution, particularly in the civilian mobile communications sector, indicates that providing adequate processing power at target power consumption will remain a challenge in the years to come, and that there will be an increasing need for data processing elements that further exploit parallelism.
5 Security Related Challenges
The flexibility benefits of SDR at the same time cause challenges in the security area, both for developers and for security certification organizations. In the following, the most important of these security-related challenges are reviewed, along with important research contributions in these areas and a summary of the remaining difficulties.
A. SOFTWARE DEFINED RADIO: CHALLENGES AND OPPORTUNITIES
TABLE A.3: Waveform Implementation Examples for Different Processing Elements (references, see text)

Proc. element class | Source type | Implementation(s) and result(s) | Pub. year
FPGAs and CPUs | 2x Xilinx XC2V4000/6000/8000; 2x CPUs (unspecified), 430 MIPS | 802.11a, W-CDMA | 2004
SIMD | Sandbridge Sandblaster SB3011 | WiMAX 2.9 Mbps: 80% utilization; 802.11b 11 Mbps: 55% utilization; W-CDMA 2.4 Mbps: 87% utilization @ 0.5 W | 2007
SIMD | NXP EVP | Estimated approx. 45% utilization for HSDPA @ a few hundred milliwatts | 2005
CCM | Stallion (Virginia Tech) | Benchmark suite of W-CDMA algorithms; computations/sec: 7118 @ 0.7 W | 2004
32-bit VLIW DSP | TI C6201 | Benchmark suite of W-CDMA algorithms; computations/sec: 10293 @ 1.94 W | 2004

5.1 Software Load and Protection against Unauthorized SW
A major security challenge is introduced through the possibility of loading and installing new SW on an SDR unit [92], possibly also over-the-air [23, 92–94] or via a fixed network [94] connection, and the consequent threat of having unauthorized and potentially malicious SW installed on the platform. This problem domain is very similar to that of maintaining SW installations on personal computers and avoiding the installation of unintended or malicious functions. With SDRs, the consequences of unauthorized code can be even more far-reaching, ranging from threats to the user's assets, e.g. his confidential items, via threats to the communication ability of the equipment, to threats to other users and networks, e.g. by the SDR jamming other radio activity [95]. In the USA, SDRs are required by the FCC to have the means to avoid unauthorized SW [96]; the specifics of these means are however left to the manufacturer.
If the SW is downloaded over the air, this also exposes the system to someone illegally obtaining the SW (privacy violation) or altering the SW while in transit (integrity violation).
[Figure A.4 chart: "Data Rate and MIPS Requirements for Mobile Cellular Systems vs DSP MIPS Evolution". Logarithmic y-axis in kbps and MIPS (1 to 10,000,000); x-axis years 1980 to 2015; series: Data Rate (kbps), TI DSP (MIPS), Comp. req. (MIPS), Exp. Trend DSP; the 2G, 3G and 4G generations are marked.]
FIGURE A.4: Data rate in kbps of the download channel of mobile cellular systems, and approximate MIPS requirements, plotted against the performance evolution of TI DSPs.
Several publications describe the prevention of unauthorized code by using digital signatures [97–104]. The manufacturer (or any other party authorizing the code) computes a one-way hash of the code module, then encrypts this hash using the private key of a private-public asymmetric key pair. This encrypted hash is the digital signature, which is added to the code module before it is sent to the SDR platform. A verification application on the SDR platform then verifies the signature by decrypting it using the manufacturer's public key, and checks that the decrypted signature equals the one-way hash of the code module. A Digital Certificate is a way of assuring that a public key actually belongs to the correct source. The Digital Certificate is digitally signed by a trusted third party, which in turn is verified through a chain of trust to a root certificate on the platform.
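The signing and verification flow described above may be sketched as follows. The sketch uses a tiny textbook RSA key pair purely for illustration of the hash-sign-verify sequence; a real SDR platform would use a vetted cryptographic library with proper key sizes and padding:

```python
import hashlib

# Toy illustration of the digital-signature flow. The RSA parameters
# are tiny textbook values (n = 61 * 53) for demonstration only and
# provide no real security.
N, E, D = 3233, 17, 2753          # modulus, public exponent, private exponent

def sign(code: bytes) -> list[int]:
    """Manufacturer side: hash the code module, then 'encrypt' the
    hash byte by byte with the private key."""
    digest = hashlib.sha256(code).digest()
    return [pow(b, D, N) for b in digest]

def verify(code: bytes, signature: list[int]) -> bool:
    """SDR platform side: decrypt the signature with the public key
    and compare against a freshly computed hash of the code."""
    digest = hashlib.sha256(code).digest()
    recovered = bytes(pow(s, E, N) for s in signature)
    return recovered == digest

module = b"waveform component v1.0"
sig = sign(module)
assert verify(module, sig)                    # authentic code accepted
assert not verify(module + b"x", sig)         # modified code rejected
```

Note that the platform only needs the public exponent and modulus; the private key never leaves the signing party, which is what makes the chain-of-trust arrangement above workable.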
A critical issue with the above approach is that root certificates must be distributed to all terminals through a secure out-of-band channel. With a new platform this is easy, as root certificates may be factory installed or installed through physically delivered SW; the issue becomes important, however, when a certificate expires while the terminal is in the field. A further issue is that of revocation of certificates. A certificate can be revoked at any point in time, and in order for the terminal to know whether this is the case, it needs to check against a revocation registry for each and every download operation. It is well known from general computing that this pattern of action is not always obeyed.
Another published way of providing code authorization relies on a secret shared between the SDR platform and the manufacturer. The manufacturer may then compute a one-way hash of the SW code and the shared secret and send this hash to the SDR platform together with the SW code [102]. The SDR platform may then verify that the code is from the manufacturer by computing the same one-way hash and comparing. A negative implication of this approach is that if the secret is a unique key for each SDR platform, the manufacturer has a high number of keys to administer. On the other hand, if it is a single, widely distributed key, it is more susceptible to compromise.
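The shared-secret scheme corresponds to a keyed one-way hash, for which HMAC is the standard construction. A minimal sketch, with illustrative names and an illustrative secret:

```python
import hmac
import hashlib

# Sketch of the shared-secret authorization scheme described above.
# The secret value and function names are illustrative only.
SHARED_SECRET = b"per-platform-secret"   # provisioned at manufacture

def tag_for(code: bytes, secret: bytes) -> bytes:
    """Manufacturer side: keyed one-way hash sent alongside the SW code."""
    return hmac.new(secret, code, hashlib.sha256).digest()

def platform_accepts(code: bytes, tag: bytes, secret: bytes) -> bool:
    """SDR platform side: recompute the keyed hash and compare in
    constant time to avoid timing side channels."""
    return hmac.compare_digest(tag_for(code, secret), tag)

code = b"waveform update"
t = tag_for(code, SHARED_SECRET)
assert platform_accepts(code, t, SHARED_SECRET)          # genuine code
assert not platform_accepts(code + b"!", t, SHARED_SECRET)  # tampered code
```

Unlike the digital-signature scheme, both parties here hold the same secret, which is exactly why the key-management trade-off discussed above arises.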
Trusted Computing (TC) functionality [95] is also an optional way to address threats against an SDR platform and against downloaded software. A trusted platform subsystem has a 'trusted component' integrated into the platform which is immutable, i.e. its replacement or modification is under the control of the platform manufacturer. The trusted part may be used for integrity measurements of a program, and for creating certified asymmetric key pairs for the software downloading [95]. TC is further commented on in Section 5.2.
A suggested further barrier against potentially malicious code is the pre-running of the new SDR component in a "sandbox" [99, 104], a sheltered environment where it can be evaluated without posing threats to the actual system. Ordinary personal computer protection means such as virus protection [92, 103] and memory surveillance [103] may be further barriers, as may radio emission monitoring [99]. The effectiveness of the sandbox pre-running is debatable, as there is no guarantee that malicious code will expose its behaviour in such a test.
The authorization schemes described above also provide integrity protection of the code
while in transit. Privacy protection, i.e. protecting the code in transit from being disclosed
to a third party, may be achieved through encrypting the code [98, 105, 106] and including the digital signature.
An SDR ideally should have exchangeable cryptographic algorithms too. A motivation for exchangeability of cryptographic components is that even if a current security evaluation does not reveal any weaknesses of some cryptographic approach, cryptanalysis techniques developed later may render it insecure [98, 103, 105]. In [98, 103, 105] the cryptographic components are viewed as a matrix, with columns for hash algorithm, digital signature primitive, crypto cipher, secret key and public key, and with rows for alternative cryptographic components. It is assumed that there are a minimum of two alternatives for each crypto component, such that even if one is compromised, a secure one remains that can be used in the downloading process. A replacement for any weak cryptographic component, e.g. a crypto algorithm, is then downloaded in an automatic manner, using trusted crypto components from the matrix.
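The matrix selection logic may be sketched as follows. The component names and the compromised set are invented for illustration and are not taken from the cited papers:

```python
# Sketch of the "crypto matrix" idea: keys are component roles
# (columns), values are ordered lists of alternative algorithms
# (rows). The download process picks, per role, the first
# alternative not marked as compromised. All names are illustrative.

CRYPTO_MATRIX = {
    "hash":      ["sha1", "sha256"],
    "signature": ["rsa1024", "rsa3072"],
    "cipher":    ["des", "aes256"],
}
COMPROMISED = {"sha1", "des", "rsa1024"}  # e.g. broken by later cryptanalysis

def select_components(matrix: dict, compromised: set) -> dict:
    """Return one trusted alternative per role; fail loudly if some
    role has no secure alternative left."""
    chosen = {}
    for role, alternatives in matrix.items():
        secure = [a for a in alternatives if a not in compromised]
        if not secure:
            raise RuntimeError(f"no secure {role} alternative available")
        chosen[role] = secure[0]
    return chosen

print(select_components(CRYPTO_MATRIX, COMPROMISED))
```

The minimum-of-two-alternatives assumption in the cited papers is exactly what guarantees that `select_components` never hits the error branch for a single compromised entry per role.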
A solution utilizing Altera Stratix III FPGAs has been described [107]. The FPGA
configuration bit stream is transferred to the FPGA in Advanced Encryption Standard (AES)
encrypted form, with the FPGA containing a crypto key and a decryption module that allows
it to decrypt the configuration bitstream when it is loaded into the circuit. Once inside the
circuit, the configuration file cannot be read back [107].
In summary, many research contributions have been made in the area of software download, providing a menu of technological options that can be used. Still, there is greater potential for security threats to SDR systems than to non-reconfigurable ones. With the anticipated creativity of attackers, designing solutions for specific SDR systems, and ensuring these resist potential threats, will remain a challenging area for both developers and security organizations. Any allowed downloading of security components will be particularly challenging. Also, the question of who will authorize the SW remains: should it be the hardware manufacturer, the SW company, a third-party certifier, a government institution, or all of the above? This remains an issue that needs to be further matured.
5.2 Trusted and High-Assurance Systems
Many communication systems, in particular military ones, have high-assurance security requirements. Demonstrating such high-assurance security on a fully flexible and general computing platform is a very difficult task. This contrasts with the fact that the fully flexible platform, where all functionality is defined in SW applications only, is the ideal computing platform for an SDR in terms of portability.
The high-assurance SDR system will have certain assets, such as crypto keys, the user's plain-text messages and his/her personal information, that need to be protected, e.g. for confidentiality and integrity. Hence practical strong security solutions typically employ
combinations of hardware solutions and software solutions [103]. Examples of modules that
have impact on the hardware structure are the protected storage for crypto keys, the protected
storage for the crypto-algorithm, and the separation between black (encrypted) data and red
(plain-text) data (Figure A.5). Such architectures are typically custom and dedicated.
[Figure A.5 diagram: a black-side application (black, i.e. encrypted, data) and a red-side application (red, i.e. plain-text, data) separated by a crypto module; a security control module connects to the applications and the crypto module through control and API interfaces.]
FIGURE A.5: An illustration of a red (plain-text) to black separation barrier, and the data and control interfaces to the security related modules.
As mentioned, TC, standardized by the Trusted Computing Group [108], incorporates dedicated immutable hardware elements. The immutable elements are in this case the Root of Trust for Measurement (RTM), Root of Trust for Storage (RTS) and the Root of Trust for Reporting (RTR) [109]. These enable measurements of SW on the platform to be made so that changes can be detected, provide confidentiality protection of data, and allow an outside challenger to assess whether the platform is trustworthy. TC also facilitates isolation between different SW components on the platform.
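The measurement idea may be illustrated with a simplified hash-chain "extend" operation; the actual TPM register semantics are more detailed, and the component names below are illustrative:

```python
import hashlib

# Conceptual sketch of TC measurement: each loaded SW component is
# hashed and folded into a measurement register via an extend
# operation, so a change in any component changes the final value
# reported to an outside challenger. (Simplified illustration only.)

def extend(register: bytes, component: bytes) -> bytes:
    """Fold a component's hash into the running measurement."""
    return hashlib.sha256(register + hashlib.sha256(component).digest()).digest()

def measure(components: list[bytes]) -> bytes:
    register = bytes(32)             # register starts at all zeros
    for sw in components:
        register = extend(register, sw)
    return register

baseline = measure([b"bootloader", b"os-image", b"waveform-app"])
tampered = measure([b"bootloader", b"os-image", b"waveform-app-TAMPERED"])
assert tampered != baseline          # any modification is detectable
```

Because the register value depends on every component in order, a challenger who knows the expected baseline can detect modification of any measured component.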
While originally developed for general computing platforms and PCs, TC has also been pointed to as beneficial on SDR platforms [95], and a TCG Mobile Phone Work Group exists [108]. For SDR, TC can provide authenticated download, ensure that SW is accessed only by the intended recipient, detect malicious or accidental modification or removal, verify the state that the platform boots into, and isolate security-critical software [95]. TC functionality does not ensure the integrity of security-critical software while in storage, and does not prevent denial-of-service attacks in the form of deletion of the downloaded SDR software [95]. While TC has many beneficial properties for use in SDR, it does not cover every relevant aspect of SDR security, and hence is not in itself sufficient for a high-assurance SDR system.
Modern military information and communication systems often need to handle information at different security classification levels simultaneously. With conventional security
architectures, this requires multiple sets of security HW and processors, which implies both
high cost and high power consumption. Examples [110] have also shown that with the conventional security architectures it is demanding to certify the SW security core to the highest
assurance levels, as this part of the SW and hence the certification task often grows too large
to handle at the highest certification levels.
The Multiple Independent Levels of Security (MILS) architecture offers solutions to
these issues. According to [111], the earliest references to MILS are NSA internal papers
by Vanfleet and others from 1996. At the basis of the architecture is a Separation Kernel
(SK), as outlined by John Rushby in 1981 [112]. The SK allows several independent partitions, e.g. partitions handling different security levels (Figure A.6), to run on the same
microprocessor, through separating them in space and time [110]. The SK provides data
separation, in ensuring that the different partitions operate on physically separated memory
areas. It also provides authorized communication channels between the partitions. Further
it provides for sanitization, i.e. the cleaning of shared resources (e.g. registers) before a new
partition can use them. Finally it provides damage limitation, in that an error in one partition is not able to affect the processes in the other partitions. The SK schedules processing
resources to each partition in such a way that the partition will always be able to process its
tasks, independently of any behaviour in the other partitions. The SK is the only piece of SW in the architecture that is allowed to run in supervisor mode; all the partitions run in user mode, which means that under no circumstances can they alter the SK.
[Figure A.6 diagram: three partitions (Secret, Confidential, Restricted), each containing an application on top of middleware, running together with the PCS on top of the Separation Kernel (SK).]
FIGURE A.6: The basic building blocks of the MILS architecture.
Since the SK only includes the limited functionality described above, it can be made fairly small; a size of about 4K lines of code has been quoted [113]. This makes it manageable to certify the SK to the highest Evaluation Assurance Levels (EAL) in the Common Criteria [114], EAL 6 and EAL 7. Green Hills Inc recently announced the completion of the certification of their Integrity-178B SK-based OS as the world's first certified at EAL6+ [115]. It should also be noted that real-time OSes using SK principles, probably with somewhat more functionality than the minimum-size SK, have already been used for some years in the aviation industry, in other embedded applications and in SDR applications.
The SK requires a certain amount of support from specific HW [116], the most important function being the Memory Management Unit (MMU), needed for physical memory separation. A notable advantage of the architecture is that the specific HW support needed is already available in many commercial microprocessors [116].
The individual partitions include application code and middleware. Middleware in MILS has a wider interpretation than the conventional one: it includes traditional communication-oriented middleware such as CORBA, and may also include more OS-related functions [110]. Some partitions may be Single-Level Secure (handling data at one security level) while others may be Multi-Level Secure (for example a module that downgrades information from one security level to a lower one, through filtering or encryption) [116].
Application and middleware in a partition are to be security evaluated at the appropriate
level for that partition, and without needing to take into account the other partitions in the
system. This separates the problem of evaluation and certification, and ensures that no part of the system needs to be certified at a higher level than is needed for the security level of the information it handles.
Obviously the control of the information flow between the partitions is an important
part of the security in a MILS system. The allowed information flow may be planned as
a directed graph [110], specifying which partitions are allowed to exchange information in
which direction. Within a single processor the information flow is moderated by the SK. In a distributed system with multiple processors involved, the information flow moderation is facilitated through the Partitioning Communication System (PCS) [113].
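The planned information flow can be expressed as a set of directed edges that the SK or PCS conceptually consults before permitting a message to pass. The partition names and edges below are illustrative only:

```python
# Illustrative sketch of a MILS information-flow policy as a directed
# graph: each edge (source, destination) is an explicitly authorized
# flow between partitions. Names are invented for illustration.

ALLOWED_FLOWS = {
    ("restricted", "confidential"),            # flow to higher level allowed
    ("confidential", "secret"),
    ("secret", "downgrader"),                  # downgrade only via a trusted
    ("downgrader", "confidential"),            # MLS filtering/encryption module
}

def flow_permitted(src: str, dst: str) -> bool:
    """A flow is allowed only if it is an explicit edge of the graph;
    everything not explicitly authorized is denied."""
    return (src, dst) in ALLOWED_FLOWS

assert flow_permitted("restricted", "confidential")
assert not flow_permitted("secret", "restricted")   # direct downgrade blocked
```

The default-deny character of the check, where absence of an edge means no flow, is what lets the planned graph serve directly as the security policy.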
A vision for MILS, and one that potentially could significantly reduce the amount of
time spent on evaluations, is that of a ”compositional approach to assurance, evaluation
and certification” [117], enabling security evaluation of a system to be based on previous
evaluations of the components that it is composed of.
The disadvantages of the MILS architecture include higher memory consumption due to the memory areas allocated to partitions, less dynamic exploitation of processing performance due to the need to guarantee processing resources to all partitions, and a higher cost of context switches due to the increased number of separate processes and the sanitization operations needed. With the advances in processors and memory devices, these disadvantages have become less important over time.
MILS is a very promising security architecture for SDR, offering a security-domain flexibility that goes hand-in-hand with the waveform application flexibility desired for SDR.
MILS for SDR is still a work-in-progress, for example in terms of certifications of components, and it will be very interesting to follow the advances in this area.
In summary, it is challenging for an SDR system to obtain the best possible compromise
between high-assurance security and having a computing platform that is as flexible and
general as possible. The MILS architecture is very promising in this context, and additionally offers cost-effective handling of multiple security levels and a compositional approach
to certification. The further availability of certified MILS components will strengthen its
position.
5.3 Portability of Security Related Modules
As a way to achieve interoperability between secure radio systems, for example in military
coalition operations, portability of security SW modules is a highly desired feature.
Code portability between platforms with conventional security architectures requires that the APIs to the security related devices and services are standardized. In many cases each user domain (e.g. a nation) will be reluctant to disclose security features and APIs, for fear that the information may be useful to organizations that wish to develop threats against that type of SDR platform. The issue of whether security features should be disclosed or kept
secret ("security by obscurity") is however a debated one. The FCC stated in a final rule [118] that "manufacturers should not intentionally make the distinctive elements that implement that manufacturer's particular security measures in a software defined radio public...". The SDR Forum in their response [119] pointed out that "History repeatedly has shown that 'security through obscurity' often fails, typically because it precludes a broad and rigorous review that would uncover its flaws".
A possible way ahead is to design the security features and the security APIs in such a
way that making the security APIs public does not increase the vulnerability of the platform.
Another potential solution is having dual security APIs, an intra-domain API and an inter-domain API, where only the inter-domain API is disclosed outside of its own domain.
With the current (2.2.2) version of the SCA [25], neither the security requirements nor the security APIs have been openly published, making portability between different development domains difficult. The ESSOR project aims at providing what it terms a 'common security basis to increase interoperability between European forces as well as with the United States' [16]. It remains to be seen, though, whether ESSOR will define the needed security parts for the whole of the European domain or only a part of it, and whether this will contribute in any way to portability with US platforms.
MILS-type architectures are the most promising developments for providing drastic reductions in the technical obstacles to portability of security code. Since the MILS-specific HW requirements are already met by many commercial microprocessors, different platforms can provide compatible environments for MILS-type security code. It should be noted, though, that implementation-specific additional bindings to non-standard devices would raise concerns similar to those discussed earlier.
In summary, the lack of interdomain security APIs and security feature documentation is presently a major challenge and obstacle for SDR application portability. Ongoing initiatives, e.g. ESSOR, are likely to improve this situation by providing complementary standards. MILS-type security architectures have the potential of greatly reducing the technical obstacles to portability of security code, such that the dominant issue will be that of trust between organizations and the willingness to share crypto algorithms and security related code, or to have coalition algorithms available (such as the "Suite B" initiative [120]). Thus MILS potentially forms an important part of the solution for exchanging secure operational waveforms between nations and thereby achieving multination interoperability in the battlefield.
6 Regulatory and Certification Issues
In the following, certification challenges with SDR equipment are reviewed. The remaining
issues are pointed out.
6.1 SDR Certification
Traditionally, radio equipment has been approved for specific frequencies, bandwidths and modulations, and with specific, fixed versions of functionality. This certification regime is challenged when the future application waveforms of the equipment are not known at the time of shipment, and it must be expected that a specific radio platform is updated with
new SW versions several times during its lifetime, or even, in future systems, reconfigured
dynamically according to communications needs.
6.1.1 SDR Certification in the USA
In the USA, significant steps have already been taken in changing certification rules to accommodate SDR equipment. The FCC adopted rule changes on 13 September 2001 [96] that defined SDRs as a new class of equipment. Under the previous rules, any changes to output power, frequency or type of modulation meant that a new application form and a new approval would be needed and the equipment would have to be re-labelled with a new identification number. Under the changed rules, updates to the software affecting the output power, frequency or type of modulation could be handled in a more streamlined approval process referred to as a "Class III permissive change", provided the equipment had originally been approved as an SDR. An important requirement introduced was that "manufacturers must take steps to prevent unauthorized software changes". The concept of electronic labelling of equipment was also introduced.
The certification rules were further updated on 10 March 2005 [5]. Under these rules,
radio equipment that has SW that affects the RF operating parameters, and where this SW
is designed to or expected to be modified by a party other than the manufacturer, is required
to be certified as an SDR. One of the reasons for this change was a fear that third-party SW
modifiable radio equipment could otherwise be declared non-SDR, and hence would not be
required to have protection against unauthorized software.
These updated rules also define an SDR as a radio where "the circumstances under which the transmitter operates in accordance with Commission rules, can be altered by making a change in software", which points to the conditional use of spectrum, modulation or output power, as given in FCC regulations.
Certification is required to be carried out at FCC labs; no self-certification or certification by Telecommunications Certification Bodies (TCBs) is allowed.
Further information on SDR certification may be found in [121]. A related standardization effort is IEEE 1900.3 [122].
6.1.2 SDR Certification in Europe
In Europe, steps have also been taken that allow faster certification of reconfigured radio equipment. Whereas previous processes demanded independent type approval in test houses, the Radio and Telecommunications Terminal Equipment Directive (R&TTE), which came into force in April 2000, allows self-certification for telecommunications equipment, yielding faster update cycles when reconfiguration of equipment is needed [123]. Work on making specific adaptations to the R&TTE directive to accommodate SDR products has been carried out by the TCAM Group on SDR (TGS), where TCAM is the Telecommunications Conformity Assessment and Market Surveillance Committee of the European Union. A final report was presented by TGS in 2004. Based on this and further discussion in TCAM, the European Commission has drawn current preliminary conclusions [124]. Further conclusions have been expected from TCAM, but as of November 2008, none have been drawn up. It is expected that this process will continue. It has been suggested that a specific harmonized standard for SDR in Europe be developed [124].
The TGS report identifies 'responsibility for the product' as a key issue, for example when third-party SW is installed on the equipment. It concludes that a more flexible, e.g. digital, marking is needed. The need for safeguarding the equipment against unauthorized SW is a discussion point, but currently, unlike in the USA, the manufacturer is not responsible for unauthorized code installation [124].
Further perspectives on regulatory aspects in Europe may be found in [123–125].
6.1.3 Remaining Issues and Projected Evolution of SDR Regulatory Certification
It is clear from the above that the regulatory certification aspects of SDR need to be further
matured, both in the USA and in Europe.
The FCC lab certification approach in the USA is likely to become an overwhelming task as the number of products increases and as the complexity and amount of functionality of SDRs grow. Even the Class III procedure for SW updates is likely to saturate with future highly reconfigurable equipment with a vast number of possible SW combinations. Self-certification in the form of a manufacturer Declaration of Conformity, combined with process and organizational certification of the manufacturers to ensure their capability of self-certification, would both keep the tasks manageable and give a shorter time to market.
In Europe, more final conclusions on SDR certification need to be drawn and standards
updated. The concept of a single party being responsible for the equipment is likely to
become increasingly difficult and will need reconsideration.
In both the USA and Europe, since future dynamically reconfigurable equipment is likely to have a very large number of possible software application combinations, it is unlikely that each and every combination of software components can be tested on each and every platform type. An alternative is to establish trust by testing the components themselves.
A way of creating a further barrier against non-conformity with applicable regulations
is to include in each product a mandatory regulation policy enforcement SW module that
defines the opportunity window of frequencies, modulations and output power. Additionally,
a policy monitor component may be included that periodically monitors the spectrum emitted by the SDR and issues alarms or closes down transmission when a policy breach is detected (see Figure A.7).
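The monitor's conformance check might conceptually look as follows; the parameter names and the example band and limits are illustrative only, not regulatory values:

```python
from dataclasses import dataclass

# Hypothetical sketch of the policy-enforcement barrier described
# above: a policy object defines the regulatory opportunity window,
# and a monitor function checks observed transmission parameters
# against it. All names and numeric values are illustrative.

@dataclass
class Policy:
    freq_min_hz: float
    freq_max_hz: float
    max_power_dbm: float

def conforms(policy: Policy, freq_hz: float, power_dbm: float) -> bool:
    """Policy monitor check: is the observed emission inside the
    allowed frequency window and below the power limit?"""
    return (policy.freq_min_hz <= freq_hz <= policy.freq_max_hz
            and power_dbm <= policy.max_power_dbm)

# Example policy for part of the 2.4 GHz ISM band (illustrative limits).
ism = Policy(2.400e9, 2.4835e9, 20.0)

assert conforms(ism, 2.45e9, 15.0)       # inside window: OK
assert not conforms(ism, 2.50e9, 15.0)   # out of band: alarm / shut down
assert not conforms(ism, 2.45e9, 30.0)   # over power limit: alarm
```

In the architecture of Figure A.7, a negative result of such a check is what would trigger the monitor to alarm or instruct the enforcement module to shut down the transmitter.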
6.2 SCA Compliance and Domain Certification
In market domains where the SCA specification is used, certification of compliance with the SCA specification is likely to be a market demand, and important for application portability. The same applies to other features that are particular to the domain, for example domain APIs. It is a challenge to establish such certification outside the JTRS domain as well.
JPEO states that it is the Certification Authority for the SCA, and that it will assign one
or more test organizations as the Test and Evaluation Authority [25]. An overview of the
model for certification of JTRS products is provided in [126].
There is a need for architectural certification authorities also in domains other than JTRS, and in parts of the world other than the USA. For example, since it is likely that there will be some differences between European standards and the US one, Europe
[Figure A.7 diagram: data input feeds the waveform application, which drives the RF output; a policy enforcement module receives policy input and controls the waveform application; a policy monitor observes the RF output.]
FIGURE A.7: A way of creating a further barrier against non-conformance to regulations: a policy enforcement module communicates to the waveform application which regulatory policies apply at the present location and time, e.g. which frequency range and radiated power are allowed. A policy monitor periodically checks the actual generated waveform for non-conformance, in which case it will instruct the policy enforcement module to shut down the transmitter.
will need its own certification facilities. Platform and waveform certification needs as defined in the WINTSEC project in Europe are discussed in [57]. Referring to the view of
the Finnish Software Radio Programme, a ’European certification network must be operational about 2013-2014’ [127]. How this will happen and which organizations will have the
responsibility are open issues.
7 Opportunities Related to Business Models and Military and Commercial Markets
SDR provides new product and market opportunities, and has the potential of changing the
business models in the radio communication industry. Here, these opportunities are reviewed
along with the present status in pursuing these in the military and commercial domains, and
with references to recent publications on this subject. Lastly, projections for the further
development in this area are provided.
7.1 Opportunities in the Military Domain

7.1.1 SW Upgradeable and Reconfigurable Military Radio Communications Equipment
SDR provides opportunities for having military radio communications equipment which is
SW upgradeable and reconfigurable, possibly even field reconfigurable and reconfigurable
in space deployment [128]. This represents both benefits for users and a considerable market
opportunity for manufacturers.
Since the military domain is characterized by long-lifetime acquisitions, while missions and technical requirements vary on a faster timescale, SW upgradability and reconfigurability are very much in demand. Additionally, the possibility of contracting SW updates from third-party providers may provide more competition and contribute to reduced lifetime costs for the military.
The military SDR domain in the USA is dominated by the JTRS programme [7–9, 129]. JTRS has a strong focus on standardization and portability, with the JPEO managing the SCA. The first production contracts for JTRS 'interim' products were awarded in June 2007 [130]; these are variants that meet some basic JTRS requirements but not all requirements laid out in [126]. According to [131], low-rate production starts of 'Handheld, Manpack and Small Form Fit' and 'Ground Mobile' radios are scheduled for 2010 and 2011 respectively.
JTRS has evolved to also be a programme that delivers tactical wireless networking [9]
for the US military and thus is an important part of the NCO transformation.
In recent reports [7, 132] the United States Government Accountability Office (GAO)
raises its concern about JTRS, pointing to the risks due to the technology challenges [7, 132]
and the cost of each unit being significantly higher (up to 10x) than the legacy units they
replace [132].
In the military domain in Europe, as in the USA, there is support for and focus on the SCA. The domain is dominated by several national projects [12, 13, 15] and demonstration platform developments, as well as cooperative projects [16, 133, 134]. Sweden has received its first vehicle-mounted SCA-based GTRS units [14]. Apart from this, it is expected that major development efforts for volume products will await the architectural outcomes of the ESSOR project [134].
A discussion on SDR processing in satellites is provided in [128].
In summary, the opportunities provided by SDR in the military domain are starting to be exploited in the form of deployed SDR units. Some interim radio types meeting basic JTRS requirements have been contracted, and production and deliveries have started. In Europe, several projects are ongoing. Still, taking into consideration the challenges discussed elsewhere in this paper and the concerns raised by the GAO, the pace at which fixed military radios are replaced by SCA-certified high-flexibility SDR ones is expected to be slow over the next few years.
7.1.2 Waveform Library
SDR also provides a possibility for building up libraries of waveform applications. In this
way, SDR platforms may be loaded with the specific applications needed in the scenarios
and operations they are to be deployed in. Libraries can be national ones or coalitional ones.
Library waveforms represent market opportunities for SW companies, as well as units that
will be tradable between organizations.
JTRS is building up a repository of waveform applications for porting onto the various platforms. In 2006 it was reported that JTRS code with Government Purpose Rights had accumulated to 3.5 million lines [8].
In Europe, the political and business issues of building up a waveform inventory are difficult, as manufacturers are in some cases the owners of the waveforms and as there are also many national interests. NATO's Industry Advisory Group (NIAG) has investigated "the dynamics and Business Models behind Industrial Contribution of Waveform Standards and how these may and could change with the advent of SDR technology" [133]. NIAG has issued its report, but it is unfortunately not publicly available.
Availability of advanced communication waveforms for the exchange of video and data in coalitions is a critical issue, hampered both by the above-mentioned political and business issues and by the JTRS waveforms being classified as "US ONLY" [131]. A way of getting around such political and business issues for coalition waveforms is to develop new waveforms through collaborative contributions from nations. COALWNW is an example of such a multinational cooperative effort [131].
Due to the reasons presented and discussed in Section III, porting efforts are likely to be
required for putting library software onto a specific platform.
It is projected that the trend towards building up libraries of waveform applications for
national and coalitional use will remain an active one, and that this will be a growing market
opportunity for third-party SW companies.
7.1.3 Military Cognitive Radio
SDR, as a base implementation technology, provides opportunities for realizing future military versions of Cognitive Radio (CR).
Cognitive radio in the military domain is a highly active research field that generates considerable interest, both as a means of reconfiguring waveforms according to sensed electromagnetic conditions and as a means of increasing spectrum utilization through dynamic use of spectrum [135]. It is also foreseen that, as more and more radios in battlefield environments gain cognition, and as cognitive abilities are likely to be used for both offensive and defensive purposes, cognitive abilities in military radio equipment will become mandatory. CR will thus be another major driver for the transition to SDR in the military domain.
The literature on CR and the potential use of SDR for CR purposes is overwhelming; a few recommended sources of information are [135–140].
It is expected that the replacement of military radio systems with smarter CR ones will
represent a continued SDR opportunity for many years forward.
7.2 Opportunities in the Commercial Domain

7.2.1 Multiprotocol Multiband Base Stations
SDR provides an opportunity to switch from conventionally designed cellular base stations
to Software Defined Multi-Protocol Multi-Band (MPMB) base stations [58].
The reconfiguration possibilities provided by SDR MPMBs accommodate future cellular
base station needs, for example:
• the possibility to dynamically add services
• the rapid introduction of new communications standards [141]
• the trend that new communications standards are put into service in a less mature state than previous standards, implying an increased risk that post-deployment changes will be needed [141]
• context-related reconfigurability and the accommodation of future cognitive terminals [125, 142]
SDR MPMBs also allow standardization of hardware platforms, which reduces the amount of capital tied up in hardware inventory. Since the total lifetime cost of the system is more important than the initial cost, the SDR solution may be preferred even if the initial cost of the SDR platform is higher. In base stations, increased power consumption relative to a conventional design can also be tolerated.
At present, cellular base stations are dominated by the traditional non-SDR ones. The
VANU [18] base stations, as well as some recent announcements from Huawei [143] and
ZTE [144], are SDR examples.
Further background on SDR base station opportunities is provided in [58, 142].
The share of SDR base stations in the overall number is predicted to grow significantly from 2010, reaching an approximately equal share of the overall number of base stations in 2016 and continuing to rise [145].
7.2.2 Mobile Multi-standard Terminals
Mobile Multi-standard Terminals (MMTs) represent another large market opportunity for SDR. As the number of standards the MMT needs to serve [141] grows, SDR will at some point provide a cost advantage relative to a conventionally designed MMT. Further, it provides opportunities for future mobile wireless users to change and personalize their units by installing additional pieces of waveform software, and to upgrade their units as new standards emerge or as standards are updated. More importantly, with future reconfigurable and cognitive radio networks it will be a necessity for units to be able to add waveform applications or components dynamically.
MMTs still almost exclusively use traditional non-SDR designs with waveform-standard-specific integrated HW [58], even when the terminals serve a high number of waveform standards (e.g. GSM, EDGE, W-CDMA, HSDPA, Bluetooth, WiFi). A multimode mobile phone with a 'software-defined modem' processing up to 2.8 Mbps has been demonstrated [146]. This is possibly an important milestone in the SDR direction; technical details about its SW flexibility and power consumption are, however, unknown.
MMTs are presently also characterized by a relatively small number of dominant manufacturers with a high degree of vertical integration and proprietary solutions, e.g. being responsible both for the hardware platform and the waveform software, and with an interest in maintaining this business model. There are, however, some signs of interfaces being opened up and of value-chain restructuring. An obvious observation is the trend of employing third-party operating systems (e.g. the Symbian OS) that allow third-party user applications to be loaded. This has the effect of making end users accustomed to adding SW applications to their units.
So far, the user demand for field upgrading of waveform application software on mobile handsets has been limited, simply because handsets are frequently replaced (the 'handset replacement' model [58]) and because the market is currently also driven by many factors other than the waveform standards, e.g. improved platform devices such as cameras and displays. Several authors, however, predict a change to the 'handset service upgrade' [58] and 'personalization' [147] models, where a 'naked handset' [58] is loaded with software to suit the user's needs.
The mainstream MMT evolution into SDR-based design depends on several factors, the most important being power consumption and cost, with the cost trade-off being highly dependent on the number of waveform standards that the terminal is intended to serve. Because these factors are more significant for MMTs than for base stations, MMTs are predicted to follow base stations in the transition to SDR design. However, since the MMT market has high price pressure, as soon as the SDR approach gives a cost advantage, and assuming acceptable power consumption, there will be a very significant drive in this direction.
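This cost trade-off can be illustrated with a simple break-even sketch. All cost figures below are purely illustrative assumptions, not market data; the point is only that a conventional design's hardware cost grows with each supported standard, while an SDR design carries a higher fixed platform cost but a lower incremental cost per standard:

```python
# Hypothetical break-even sketch: at what number of supported waveform
# standards does an SDR terminal become cheaper than a conventional design?
# All cost figures are illustrative assumptions, not market data.

DEDICATED_BASE = 10.0      # fixed cost of a conventional terminal platform
DEDICATED_PER_STD = 6.0    # extra dedicated-HW cost per waveform standard
SDR_BASE = 30.0            # higher fixed cost of the SDR processing platform
SDR_PER_STD = 2.0          # incremental SW/porting cost per waveform standard

def dedicated_cost(n: int) -> float:
    return DEDICATED_BASE + DEDICATED_PER_STD * n

def sdr_cost(n: int) -> float:
    return SDR_BASE + SDR_PER_STD * n

def break_even(max_n: int = 100):
    """Smallest number of standards for which the SDR design is cheaper."""
    for n in range(1, max_n + 1):
        if sdr_cost(n) < dedicated_cost(n):
            return n
    return None

print(break_even())  # with these assumed figures: 6
```

With these assumed figures the SDR design wins from six standards onward; the actual crossover depends entirely on the real cost parameters, which is why the transition point differs between base stations and handsets.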
7.2.3 Cognitive Radio
The projected evolution into CR-capable MPMBs and MMTs represents a large future market opportunity and driver for SDR technology.
CRs may both provide context-aware services for the user [148] and improve spectrum
utilization through dynamic spectrum access [135, 137, 149]. In order to continuously take
advantage of spectrum opportunities and adapt to the specific context, CR requires platforms
that have fast dynamic reconfiguration abilities. Recommended sources of information on
CR and the application of SDR in CR systems are [136, 138].
While the first commercial-domain CR standard has already been drafted [150], more advanced CRs are viewed as being further in the future.
7.2.4 Other Commercial Domain Opportunities
Commercial satellite communications has already been mentioned as a segment that will benefit from SDR, since SDR enables remote upgrades and possibly multiple uses during the lifetime of a satellite [17]. Equipment to be located in remote and poorly accessible locations on Earth is another, similar opportunity.
The Femtocell or Home Base Station has been put forward as another market segment
with great opportunities for SDR [58, 151]. The reconfigurability and flexibility provided by
SDR support the multiple bands, multiple standards and simultaneous ’sniffing’ functionality
needed in the Femtocells [151]. Other mentioned market opportunities include devices for
laptops, automobiles, home entertainment and the medical and public safety segments [58].
8 Conclusions
Although SDR technology has evolved more slowly than anticipated some years ago, there are now many positive signs, the clearest being SDR products entering the market. Several major initiatives, at national level and in cooperation between nations and industry, are paving the way for SDR.
The increasing availability of SCA SW tools and development platforms is contributing to lowering the learning threshold of the SCA and to increasing the productivity of SDR development. Developments within Model Driven Design may further increase this productivity.
The SCA eases portability by providing a standard for deploying and managing applications. Even so, the portability of SCA-based applications between different platforms is not straightforward. One major issue is the standardization of the APIs between the application and the system devices and services. Although a subset of the needed APIs has been published on the JTRS website, parts of the APIs will be difficult to standardize across domains, for example the security-related APIs. Another major issue is instruction-code compatibility between different processing elements, which at present requires porting efforts in the form of rewriting code to fit the processing elements of the target platform. It is expected that, in the long term, design in higher-abstraction languages will reduce this type of porting effort.
Alternatives to the SCA include OMG's specification, NASA's STRS architecture and the GNU Radio architecture. MOM-based architectures have the potential to become alternatives, but the maturity and acceptance of these specifications have yet to be demonstrated. For cognitive radio systems, additions to the SCA, such as middleware that supports adaptation, will be beneficial. Such middleware will increase productivity and standardize solutions when building adaptive and cognitive systems.
It is expected that the SCA will remain the dominant architecture in the military sector, where waveform application portability and reuse are major priorities, especially through cooperative programmes. On the other hand, a significant portion of designs for the civilian commercial market, where hardware cost is a major factor, are likely to utilize dedicated and proprietary lighter-weight architectures. In a longer (∼10 years) perspective, as hardware cost progressively becomes a smaller part of the total system cost, standardized open architectures are likely to become more popular on the civilian commercial market as well.
A fundamental challenge for SDR designs is providing sufficient computational performance for the signal processing tasks within the relevant size, weight and power requirements. This is particularly challenging for small handheld units and for ubiquitous units. Parallel computation enhancements and the rapid evolution of DSP and FPGA performance help to provide this computational performance. Processing units having multiple SIMD processing elements appear very promising for low-power SDR units. Also, as waveforms typically have many common functions, it may be sensible to implement parameterized, low-power dedicated hardware blocks for these common functions, and to run alternative source code on a more general processing element when such blocks do not exist.
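The dedicated-block-with-fallback idea can be sketched as a simple dispatch mechanism. The registry, the function name "fir" and the software FIR fallback below are illustrative assumptions, not an actual SDR framework API:

```python
# Sketch of the "common function" idea: prefer a parameterized, dedicated
# low-power hardware block when one is available for the requested function,
# otherwise fall back to generic code on a general-purpose processing element.
# The registry and function names are illustrative, not a real framework API.

from typing import Callable, Dict

class ProcessingFabric:
    def __init__(self) -> None:
        # Dedicated HW blocks (modeled here as callables), keyed by name
        self.hw_blocks: Dict[str, Callable] = {}

    def register_hw_block(self, name: str, block: Callable) -> None:
        self.hw_blocks[name] = block

    def run(self, name: str, sw_fallback: Callable, *args):
        """Dispatch to the dedicated block if present, else the SW fallback."""
        impl = self.hw_blocks.get(name, sw_fallback)
        return impl(*args)

# Software fallback for a common function: direct-form FIR filtering
def fir_sw(taps, samples):
    out = []
    for n in range(len(samples)):
        acc = 0.0
        for k, h in enumerate(taps):
            if n - k >= 0:
                acc += h * samples[n - k]
        out.append(acc)
    return out

fabric = ProcessingFabric()
# No dedicated 'fir' block is registered, so this runs the SW fallback:
print(fabric.run("fir", fir_sw, [0.5, 0.5], [1.0, 1.0, 1.0]))  # → [0.5, 1.0, 1.0]
```

In a real platform the registry lookup would correspond to the resource allocation performed at waveform deployment time, with the general processing element absorbing whatever functions lack a dedicated block.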
The reconfigurability of SDR systems has security challenges as a side effect. One such challenge is that the system must be protected from loading unauthorized and/or malicious code. Also, the rigidity of conventional security architectures in many ways contrasts with the flexibility and portability ideally required for SDR. The MILS architecture provides greater flexibility and easier portability of security-related modules, while offering multiple security levels without the need for multiple sets of HW.
SDR has forced regulators to rethink the certification of radio equipment. While traditional equipment has a fixed number of functional modes and a more or less fixed design that can be fully characterized, SDRs may be SW-loaded to function in a large variety of modes and hence may not be tested in every possible mode at the time of initial certification. Changes in certification rules to deal with SDRs have taken place, but it is likely that as more SDR products approach the market, these rules will evolve further.
The multitude of waveform standards and their rapid progress make it beneficial and
economical to be able to easily update wireless network infrastructure equipment, such as
cellular base stations. Also, base stations are less sensitive to the power consumption of the
SDR processing platforms than the mobile devices. Thus SDR has promising potential in
commercial wireless network infrastructure equipment.
SDR has the potential to increase the productivity of radio communication development
and lower the lifecycle costs of radio communication. This will partly come through a
change in the business models in the radio communication industry, allowing a separation
into SDR platform providers and third-party SW providers. This again will provide volume
benefits for the platforms and lower the threshold for companies entering the market as SW
providers, and hence provide further competition in the SDR SW applications area.
SDR will have continued focus as a highly flexible platform to meet the demands of military organizations facing the requirements of network-centric and coalitional operations. SDR will also have continued focus as a convenient platform for future cognitive radio networks, enabling more information capacity for a given amount of spectrum and the ability to adapt on demand to waveform standards.
Acknowledgment
The author would like to thank Christian Serra at Thales Communications, Marc Adrat at
FGAN (the Research Establishment for Applied Science), Jon Olavsson Neset and Stewart
Clark at NTNU (Norwegian University of Science and Technology), Audun Jøsang at UniK
(University Graduate Center at Kjeller), Frank Eliassen at UiO (University of Oslo) and
Torleiv Maseng, Tor Gjertsen, Asgeir Nysæter and Synnøve Eifring at FFI (the Norwegian
Defence Research Establishment) for their valuable input and advice. The author would also
like to thank the anonymous reviewers for their helpful comments.
References
[1] J. Mitola III, “Software radios - survey, critical evaluation and future directions,” in National Telesystems Conference, 1992. NTC-92, pp. 13/15–13/23, May 1992.
[2] W. Tuttlebee, Software Defined Radio: Enabling Technologies. Chichester: Wiley, 2002.
[3] P. B. Kenington, RF and Baseband Techniques for Software Defined Radio. Boston: Artech House, 2005.
[4] Software Defined Radio Forum. SDR Forum. (Oct. 2007). [Online]. Available: http://www.sdrforum.org
[5] Federal Communications Commission, “In the Matter of Facilitating Opportunities for Flexible, Efficient, and Reliable Spectrum Use Employing Cognitive Radio Technologies, Report and Order,” Mar. 2005, FCC 05-57, ET Docket No. 03-108.
[6] P. G. Cook and W. Bonser, “Architectural overview of the SPEAKeasy system,” IEEE Journal on Selected Areas in Communications, vol. 17, no. 4, pp. 650–661, 1999.
[7] United States Government Accountability Office, “Restructured JTRS Program Reduces Risk, but Significant Challenges Remain,” Sep. 2006.
[8] D. R. Stephens, B. Salisbury, and K. Richardson, “JTRS infrastructure architecture and standards,” in Military Communications Conference, 2006. MILCOM 2006, pp. 1–5. IEEE, Oct. 2006.
[9] R. North, N. Browne, and L. Schiavone, “Joint tactical radio system - connecting the GIG to the tactical edge,” in Military Communications Conference, 2006. MILCOM 2006, pp. 1–6. IEEE, Oct. 2006.
[10] G. Gailliard, E. Nicollet, M. Sarlotte, and F. Verdier, “Transaction level modelling of SCA compliant software defined radio waveforms and platforms PIM/PSM,” in Design, Automation & Test in Europe Conference & Exhibition, 2007. DATE ’07, pp. 1–6, Apr. 2007.
[11] C. Serra, B. Sourdillat, and E. Nicollet, “SDR key factors - advanced studies & results
related to the implementations of the SCA,” in 2006 Software Defined Radio Technical
Conference and Product Exposition, Nov. 2006.
[12] H. S. Kenyon. Spanish Research Platform Ready For Service. (Sep.
2007). [Online]. Available:
http://www.afcea.org/signal/articles/templates/
SIGNAL Article Template.asp?articleid=1383&zoneid=47
[13] T. Tuukkanen, A. Pouttu, and P. Leppänen. Finnish Software Radio Programme.
(June 2007). [Online]. Available: http://www.mil.fi/paaesikunta/materiaaliosasto/
liitteet/finnish software radio programme.pdf
[14] Swedish Defence Materiel Organization. A New Generation Radio. (Apr. 2008).
[Online]. Available: http://www.fmv.se/WmTemplates/news.aspx?id=4104
[15] A. Baddeley. Sweden Seeks Military Communications Flexibility. (May
2006). [Online]. Available:
http://www.afcea.org/signal/articles/templates/
SIGNAL Article Template.asp?articleid=1129&zoneid=7
[16] European Defence Agency, “Background on Software Defined Radio,” Nov. 2007.
[17] F. Daneshgaran and M. Laddomada, “Transceiver front-end technology for software
radio implementation of wideband satellite communication systems,” Wireless Personal Communications, vol. 24, no. 2, pp. 99–121, 2003.
[18] Vanu Inc. Vanu’s Software Radio. (2007). [Online]. Available: http://www.vanu.com
[19] R. H. Walden, “Analog-to-digital converter survey and analysis,” IEEE Journal on
Selected Areas in Communications, vol. 17, no. 4, pp. 539–550, Apr. 1999.
[20] J. Mitola, V. Bose, B. M. Leiner, T. Turletti, and D. Tennenhouse, “Special issue on
software radio,” IEEE Journal on Selected Areas in Communications, vol. 17, no. 4,
Apr. 1999.
[21] A. A. Abidi, “The path to the software-defined radio receiver,” IEEE Journal of Solid-State Circuits, vol. 42, no. 5, pp. 954–966, May 2007.
[22] M. Laddomada, F. Daneshgaran, M. Mondin, and R. M. Hickling, “A PC-based software receiver using a novel front-end technology,” IEEE Communications Magazine,
vol. 39, no. 8, pp. 136–145, Aug. 2001.
[23] E. Buracchini, “The software radio concept,” IEEE Communications Magazine,
vol. 38, no. 9, pp. 138–143, Sep. 2000.
[24] M. Laddomada, “Generalized comb decimation filters for sigma-delta A/D converters: Analysis and design,” IEEE Transactions on Circuits and Systems I: Regular
Papers, vol. 54, no. 5, pp. 994–1005, May 2007.
[25] JTRS Standards Joint Program Executive Office (JPEO), Joint Tactical Radio System (JTRS), “Software communications architecture specification,” May 2007.
[26] JTRS Standards Joint Program Executive Office (JPEO) Joint Tactical Radio System
(JTRS). Software Communications Architecture SCA Document Downloads. (Sep.
2007). [Online]. Available: http://sca.jpeojtrs.mil/downloads.asp?ID=2.2.2
[27] Object Management Group Inc. The Software-Based Communication (SBC) Domain
Task Force (DTF). (Sep. 2007). [Online]. Available: http://sbc.omg.org
[28] E. Nicollet, “The transceiver facility platform independent model (PIM): Definition,
content and usage,” in 2006 Software Defined Radio Technical Conference and Product Exposition, Nov. 2006.
[29] Vecima Networks Inc. Spectrum Signal Processing. (2007). [Online]. Available:
http://www.spectrumsignal.com
[30] F. Humcke, “Making FPGAs “first class” SCA citizens,” in 2006 Software Defined
Radio Technical Conference and Product Exposition, Nov. 2006.
[31] C. McHale. CORBA explained simply. (Feb. 2007). [Online]. Available: http://www.ciaranmchale.com/
[32] C. Magsombol, C. Jimenez, and D. R. Stephens, “Joint tactical radio system - application programming interfaces,” in Military Communications Conference, 2007.
MILCOM 2007., pp. 1–7. IEEE, Oct. 2007.
[33] A. Blair, T. Brown, J. Cromwell, S. Kim, and R. Milne, “Porting lessons learned
from soldier radio waveform (SRW),” in Military Communications Conference, 2007.
MILCOM 2007., pp. 1–6. IEEE, Oct. 2007.
[34] S. Bernier and J. P. Z. Zapata, “The deployment of software components into heterogeneous SCA platforms,” in SDR’08 Technical Conference and Product Exposition,
Oct. 2008.
[35] T. Kempf, E. M. Witte, V. Ramakrishnan, G. Ascheid, M. Adrat, and M. Antweiler,
“A practical view on SDR baseband processing portability,” in SDR’08 Technical
Conference and Product Exposition, Oct. 2008.
[36] Y. Zhang, S. Dyer, and N. Bulat, “Strategies and insights into SCA-compliant waveform application development,” in Military Communications Conference, 2006. MILCOM 2006, pp. 1–7. IEEE, Oct. 2006.
[37] Zeligsoft Inc. Zeligsoft. (2007). [Online]. Available: http://www.zeligsoft.com
[38] PrismTech. PrismTech Productivity Tools and Middleware. (2007). [Online].
Available: http://www.prismtech.com/
[39] Communications Research Canada. SCARI Software Suite. (Aug. 2008). [Online]. Available: http://www.crc.gc.ca/en/html/crc/home/research/satcom/rars/sdr/
products/scari suite/scari suite
[40] VirginiaTech. OSSIE development site for software-defined radio. (Dec. 2007).
[Online]. Available: http://ossie.wireless.vt.edu/trac
[41] S.-P. Lee, M. Hermeling, and C.-M. Poh, “Experience report: Rapid model-driven
waveform development with UML,” in SDR’08 Technical Conference and Product
Exposition, Oct. 2008.
[42] B. Hailpern and P. Tarr, “Model-driven development: The good, the bad, and the
ugly,” IBM Systems Journal, vol. 45, no. 3, pp. 451–461, July 2006.
[43] Z. Jianfan, D. Levy, and A. Liu, “Evaluating overhead and predictability of a real-time
CORBA system,” in Proceedings of the 37th Annual Hawaii International Conference
on System Sciences - 2004, p. 8, Jan. 2004.
[44] F. Casalino, G. Middioni, and D. Paniscotti, “Experience report on the use of CORBA
as the sole middleware solution in SCA-based SDR environments,” in SDR’08 Technical Conference and Product Exposition, Oct. 2008.
[45] J. Kim, S. Hyeon, and S. Choi, “Design and implementation of high-speed data transfer protocol in CORBA environment,” in SDR’08 Technical Conference and Product
Exposition, Oct. 2008.
[46] A. S. Gokhale and D. C. Schmidt, “Evaluating CORBA latency and scalability over
high-speed ATM networks,” in Proceedings of the 17th International Conference on
Distributed Computing Systems, 1997, pp. 401–410, May 1997.
[47] J. Bertrand, J. W. Cruz, B. Majkrzak, and T. Rossano, “CORBA delays in a softwaredefined radio,” IEEE Communications Magazine, vol. 40, no. 2, pp. 152–155, Feb.
2002.
[48] T. Ulversøy and J. O. Neset, “On workload in an SCA-based system, with varying
component and data packet sizes,” in RTO-MP-IST-083 Military Communications
with a Special Focus on Tactical Communications for Network Centric Operations,
Apr. 2008.
[49] P. J. Balister, C. Dietrich, and J. H. Reed, “Memory usage of a software communication architecture waveform,” in 2007 Software Defined Radio Technical Conference
and Product Exposition, Nov. 2007.
[50] D. Paniscotti and J. Bickle, “SDR signal processing distributive-development approaches,” in 2007 Software Defined Radio Technical Conference and Product Exposition, Nov. 2007.
[51] D. R. Stephens, C. Magsombol, and C. Jimenez, “Design patterns of the JTRS infrastructure,” in Military Communications Conference, 2007. MILCOM 2007., pp. 1–5.
IEEE, Oct. 2007.
[52] C. Lee, J. Kim, S. Hyeon, and S. Choi, “FPGA design to support a Corba component,”
in SDR’08 Technical Conference and Product Exposition, Oct. 2008.
[53] D. Vassilopoulos, T. Pilioura, and A. Tsalgatidou, “Distributed technologies CORBA,
Enterprise JavaBeans, Web services: A comparative presentation,” in 14th Euromicro
International Conference on Parallel, Distributed, and Network-Based Processing,
2006. PDP 2006., p. 5, Feb. 2006.
[54] Object Management Group Inc. The Object Management Group (OMG). (Oct.
2006). [Online]. Available: http://www.omg.org
[55] Q. H. Mahmoud, Middleware for Communications.
Chichester: Wiley, 2004.
[56] OMG. Documents associated with PIM and PSM for Software Radio Components
(SDRP), v1.0. (Mar. 2007). [Online]. Available: http://www.omg.org/spec/SDRP/1.0/
[57] S. Nagel, V. Blaschke, J. Elsner, F. K. Jondral, and D. Symeonidis, “Certification
of SDRs in new public and governmental security systems,” in SDR’08 Technical
Conference and Product Exposition, Oct. 2008.
[58] A. Kaul, “Software defined radio: The transition from defense to commercial markets,” in 2007 Software Defined Radio Technical Conference and Product Exposition,
Nov. 2007.
[59] T. Quinn and T. Kacpura, “Strategic adaptation of SCA for STRS,” in 2006 Software
Defined Radio Technical Conference and Product Exposition, Nov. 2006.
[60] T. J. Kacpura, L. M. Handler, J. C. Briones, and C. S. Hall, “Updates to the NASA
space telecommunications radio system (STRS) architecture,” in 2007 Software Defined Radio Technical Conference and Product Exposition, Nov. 2007.
[61] J. C. Briones, L. M. Handler, C. S. Hall, R. C. Reinhart, and T. J. Kacpura, “Case
study: Using the OMG SWRadio profile and SDR Forum input for NASA’s space
telecommunications radio system,” in SDR’08 Technical Conference and Product Exposition, Oct. 2008.
[62] S. B. Raghunandan, D. Kumaraswamy, L. Le, C. B. Dietrich, and J. H. Reed, “Open
space radio: An open source implementation of STRS 1.01,” in SDR’08 Technical
Conference and Product Exposition, Oct. 2008.
[63] Free Software Foundation Inc. GNU Radio - The GNU Software Radio. (Mar. 2007).
[Online]. Available: http://www.gnu.org/software/gnuradio/index.html
[64] G. Abgrall, F. L. Roy, J.-P. Delahaye, J.-P. Diguet, and G. Gogniat, “A comparative
study of two software defined radio platforms,” in SDR’08 Technical Conference and
Product Exposition, Oct. 2008.
[65] E. Gjorven, F. Eliassen, K. Lund, V. S. W. Eide, and R. Staehli, “Self-adaptive systems: A middleware managed approach,” Self-Managed Networks, Systems, and Services, Proceedings, vol. 3996, pp. 15–27, 2006.
[66] A. P. Vinod and E. M. K. Lai, “Low power and high-speed implementation of FIR
filters for software defined radio receivers,” IEEE Transactions on Wireless Communications, vol. 5, no. 7, pp. 1669–1675, 2006.
[67] L. Yuan, L. Hyunseok, M. Woh, Y. Harel, S. Mahlke, T. Mudge, C. Chakrabarti, and
K. Flautner, “SODA: A low-power architecture for software radio,” in 33rd International Symposium on Computer Architecture, 2006. ISCA ’06., pp. 89–101, 2006.
[68] J. L. Shanton III and H. Wang, “Design considerations for size, weight and power
(SWAP) constrained radios,” in 2006 Software Defined Radio Technical Conference
and Product Exposition, Nov. 2006.
Paper B
On Workload in an SCA-based System, with Varying Component and Data Packet
Sizes
Tore Ulversøy and Jon Olavsson Neset
Presented at
NATO Research and Technology Organization (RTO) Information Systems Technology
Panel (IST) Symposium on Military Communications With a Special Focus on Tactical
Communications for Network Centric Operations
Prague, Czech Republic
April 21-22, 2008
Published online at:
http://www.rta.nato.int/Pubs/RDP.asp?RDP=RTO-MP-IST-083
Summary of this candidate’s contribution to this paper:
This candidate has contributed the major part (> 90%) of the article text, as well as the computer measurements, the underlying basis for and development of the models, and the graphs, along with parts of the computer programming (other parts by the co-author, J. O. Neset).
Abstract
An SCA-based SDR application has components which exchange data through ports. This enables scalability, through the possibility of deployment on multiple processors, and makes it easier both to reuse code in other SDR applications and to port applications. There is wide freedom as regards the granularity of the component structure, e.g. few components with a large amount of processing in each component, or a higher number of components with less processing in each component. While a fine structure further enhances scalability, reuse and portability, it has side effects in the form of increased workload overheads. Here, the effects of varying this granularity on the workload of the total system are examined. The analysis is done for a varying number of components, while splitting the functional processing work such that the total functional processing work remains the same. Also, the effects of varying the size of the data packets between the components are studied. The analysis is done both by model calculations and experimental measurements. The analysis is important for efficient utilization of the processing elements in an SDR system.
1 Introduction
The Software Communications Architecture (SCA) is the dominant architecture for Software Defined Radio (SDR) in the military domain. SCA defines a run-time environment
for applications and allows applications to be built as compositions of components. SCA
also defines a distributed architecture, allowing the application components to be deployed
on several processing elements. The communication between the components is enabled
through port interfaces, and uses CORBA as the underlying communications middleware
between the various parts of the application running on the various processors. On processing elements where CORBA is not available, SCA prescribes the use of CORBA-adapters,
of which several solutions exist.
The splitting of SDR applications into components, with their defined input and output interfaces and their defined requirements on their runtime environment, enables scalable systems, in that the various components may be deployed on additional processors as needed.
Components also make reuse of parts of an application easier, e.g. if two standards have the
same common interleaver, it makes sense to define this interleaver as one component that
can be used in the waveform applications for both standards. Generally, an application
composition with many small components increases the probability of being able to reuse
the components in other waveform applications.
The component approach, however, has side effects in the form of processor workload
overhead, where we here define the processor workload implied by a task or a group of tasks
as the fraction of available processor cycles occupied over a time period. The component
approach adds CORBA overhead through the processing of the CORBA invocations and
format conversions in the system. Also, the approach increases the number of separate processes and threads, which increases context switching in the cases where several components
are deployed on the same processor.
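As a minimal numeric sketch of the workload definition above (the function and its inputs are ours, for illustration only, not from the paper):

```c
/* Sketch (illustrative): the workload implied by a group of tasks is
 * the fraction of the available processor time they occupy over a
 * measurement period. */
double workload_fraction(const double busy_seconds[], int n_tasks,
                         double period_seconds) {
    double busy = 0.0;
    for (int i = 0; i < n_tasks; i++)   /* sum CPU time of all tasks */
        busy += busy_seconds[i];
    return busy / period_seconds;       /* e.g. 0.25 means 25% CPU  */
}
```

With two tasks occupying 0,5 s each over a 4 s period, the workload is 0,25, i.e. 25% of the processor.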
In the following we will consider such a case where the application granularity is so fine
that several components are deployed on the same processor. We examine this by using a
scenario where we have one CORBA-capable General-Purpose Processor (GPP) which runs
the same functional application work, but implemented as a variable number of components.
As part of the experiment, we also vary the workload of components, the size of the data
packets transferred between the components, and the data rate in the system.
2 Analysis Approach
Our aim in this work is to quantify and understand the effects of application granularity of
SCA-based applications, when the granularity is such that several components will need to
be deployed on the same CORBA-capable GPP processor. We will be strictly interested in
granularity effects on workload, i.e. we will not consider other aspects that may also be
affected, such as latency, throughput, and statistical variation of latency and throughput.
In analysing processor workload, or processor system performance in general, a wide range of analysis approaches is available, as illustrated in Figure B.1. Simple analytical system models may provide good insight and be easy to understand; however, for most practical model sizes such models will only provide a coarse representation of the system, and will not capture the system behaviour in fine detail. At the other end of the scale, workload measurements on the actual system will provide good accuracy in determining the actual
workload; however, the level of detail in the total system may result in a less clear picture of the dominant causes of the measured system performance. By analysing and structuring the measured information, however, we may still gain an understanding of the dominant underlying mechanisms.
An approach that makes it easier to capture more detail, and hence to achieve higher accuracy than an analytical model, is for example a Petri-net model [1]. This type of model is, however, also more complex to understand and interpret. A further refinement is a simulation model, which is even more complex to build and understand, but which has the potential of capturing finer details. If we did not have a system to perform measurements on, a further alternative would be to build a testbed that captured the specific characteristics we were interested in.
We have chosen in this work to use a combination of empirical analysis and simple analytical models. Empirical analysis, in the form of workload measurements on sample applications, provides quantification of granularity effects with good accuracy, and also allows us to do detailed analysis where appropriate. The simple analytical models are intended to help provide better insight into the effects we are observing.
FIGURE B.1: Illustration of system performance analysis methods, ordered from increasing clarity/simplicity to increasing accuracy: analytical model, concurrency model (e.g. Petri net), simulation model, measurements on a testbed, and measurements on the actual system.
3 Workload Assessment Through Empirical Analysis

3.1 General
We have chosen to perform the empirical analysis using the OSSIE Core Framework (CF) from Virginia Tech [2], which uses omniORB [3]. Since this is an open-source CF, it gives us the advantage of having the full source code available for our analysis. The system runs on Linux on an x86 hardware architecture. A Linux system is convenient due to both good availability of public documentation and a rich variety of publicly available analysis tools. Specifics of software and hardware are listed in Table B.1.
We have further chosen to do the empirical analysis on the basis of two groups of waveform applications. The first group is based on a sample waveform application with a low computational load (relative to the capacity of the processor): the transmitter (TX) part of Stanag 4285. This waveform is being used in a study in the RTO-IST-080 RTG-038 on SDR, and the base code has been provided by Telefunken Racoms. For the purpose of the study here, the waveform processing is configured into applications with a variable number of components, as described in Section 3.2.
TABLE B.1: HW and SW used in the experiments

                    Experiment 3.2        Experiment 3.3
OS                  Linux 2.6.9-34.EL     Linux 2.6.9-34.EL
OSSIE revision      0.6.0                 0.6.2
Processor           Pentium M 1.86GHz     Pentium M 1.86GHz
RAM                 1,5 GByte             1,5 GByte
Cache specifics     L1i=32kB, L1d=32kB,   L1i=32kB, L1d=32kB,
                    L2=2MB                L2=2MB

(L1i = first-level instruction cache, L1d = first-level data cache, L2 = second-level cache)
The second group of applications is a synthetic workload, also configured into applications with a variable number of components, as described more specifically in Section 3.3. A synthetic workload allows us to more easily vary the computational load of the components, as well as the size of the data packets transmitted between components, which will help us understand the system behaviour.
The system workload, and how the workload is distributed among various processes and
modules in the system, may be measured in a variety of ways, ranging from instrumentation
(time measurements in the code) to various profiling and monitoring tools. Here we have chosen to use the tools 'OProfile' [4] and 'SYSSTAT sar' [5]. OProfile is a statistical profiler
that samples the complete system including the kernel, shared libraries, and executables.
This makes it possible to assess the workload distribution over the total system, including
e.g. what portion of the CPU time was spent in the various modules of the applications and
what portion of it was used with CORBA. SYSSTAT sar is a performance monitoring tool
that works through collecting operating system information at intervals specified by the user.
SYSSTAT sar will be used to monitor the CPU utilization during execution of the different
sample applications.
3.2 Empirical Workload Assessment Using a Sample Waveform Application With Low Computational Complexity: Stanag 4285 TX
In this experiment, we compare 2-component, 7-component and 11-component implementations of the Stanag 4285 TX waveform, as illustrated in Figure B.2, in terms of CPU workload. Each of the implementations contains the same functional processing code, such that the amount of processing work that is carried out is the same in all three cases. Additionally, we compare these application configurations against a non-SCA-based version, which also does the same processing work.
For regulating the number of data packets processed per second (PR) in the application
chain, the data sink contains a packet rate regulator in the form of SW instructions that
perform a blocking read of a number of samples from an audio card. PR can then be set as
the number of samples read relative to the audio card sample rate.
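The relation between the read size and PR can be sketched as follows (a hypothetical helper; the concrete card sample rate and read size are not stated here):

```c
/* Sketch: with the data sink doing a blocking read of a fixed number
 * of samples from an audio card, each read takes
 * samples_per_read / rate seconds, so the packet rate is
 * PR = rate / samples_per_read packets per second.
 * Parameter values are illustrative assumptions. */
double packet_rate(double card_sample_rate_hz, unsigned samples_per_read) {
    return card_sample_rate_hz / (double)samples_per_read;
}
```

For example, at a 48 kHz card sample rate, a blocking read of 4800 samples per packet would give PR = 10 packets per second.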
The CPU workload is quantified using SYSSTAT sar (sar -u 40 5), and quantified as the
CPU % in the user application space, and as the sum of the CPU % in the user application
space and in the system space.
FIGURE B.2: Stanag 4285 TX implemented in three different application configurations: as two SCA components (top: Stanag 4285 TX, Data Sink), as 7 SCA components (middle: Data Source, FEC Encoder, Interleaver, Symbol Mapper & Scrambler, Symbol to I/Q & TX Filter, Float to fixed converter, Data Sink) and as 11 components (bottom: the 7-component chain with four Forwarder components inserted). The data sink contains a packet rate regulator.
Figure B.3 shows the user CPU % results from the experiment, when running the application at 2395 symbols/second (which corresponds to 9,36 data packets per second), and then at 25600 and 64000 symbols/second. 2400 symbols/sec is the rate of the actual waveform; the reason for additionally measuring at the two higher speeds is to get more significant CPU % readings.

FIGURE B.3: User applications CPU workload in %, measured by SYSSTAT sar, for Stanag 4285 TX, for four different application configurations (non-SCA, single component + sink, 6 components + sink, 10 components + sink), each having the same processing functionality. The processing symbol rates are also varied, by varying the number of packets processed per second (the original Stanag 4285 TX symbol rate is 2400 symb/sec).
We see from Figure B.3 that the user CPU workload increases more than 2,5 times going from the non-SCA base code to the 11-component implementation, and roughly 2,5 times from the two-component implementation to the 11-component one. When considering user+system CPU %, Figure B.4, we see that this difference is even larger. Hence we may conclude, for this processing code, that the CPU workload increases significantly as the processing code is split into more components.
FIGURE B.4: Same as previous figure, but showing the User+System CPU workload in %.
3.3 Empirical Workload Assessment Using a Synthetic Load
In this experiment we compare a 2-component (W2), 3-component (W3), 5-component (W5) and 11-component (W11) application; see Figure B.5. The computational load in each
case consists of 9 FIR-filter loads, each of B · N multiply-and-addition operations (plus the
additional necessary load and store operations). Here B is the packet (block) size in number
of floats (1 float=4 bytes), and N is the number of filter taps.
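The per-component load can be sketched as a plain block FIR filter; this is our reconstruction of the described load, not the paper's code:

```c
/* Sketch of one FIR-filter load: B output samples and N taps give
 * B*N multiply-and-add operations (plus the necessary loads and
 * stores).  in[] must hold B+N-1 input samples. */
void fir_block(const float *in, const float *taps, float *out,
               int B, int N) {
    for (int i = 0; i < B; i++) {        /* B output samples      */
        float acc = 0.0f;
        for (int k = 0; k < N; k++)      /* N multiply-and-adds   */
            acc += taps[k] * in[i + k];
        out[i] = acc;
    }
}
```

The synthetic workload runs nine such filter loads per data packet, giving the 9 · B · N multiply-and-add operations described above.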
In this setup, we may easily vary B and the total functional load (≈ const · 9 · B · N), which we term WL_FUNC. In the same way as in 3.2, we vary PR by using the audio card as a packet rate regulator. For W2 the regulator is in the FTOT component, and for W3...W11 in the SRC component.
Due to subtle details in how the functional code runs on the processor, the proportionality of WL_FUNC to 9 · B · N is not exact for all B and N. In order for this not to confuse our interpretation of measurements, we include reference measurements of WL_FUNC where appropriate, defining WL_FUNC as the measured workload when running the functional code as a standalone C program.
As an initial experiment, we want to verify that we see CPU workload differences between the various configurations also for the synthetic load. Since we have the liberty of changing the load in the components, we do this for two different loads, N*B=20000 and 100000, keeping the packet size B at 2000 floats. The results are provided in Figure B.6.
For N*B=20000, we observe a CPU (U+S) workload ratio between the standalone program
and the 11-component SCA-based configuration of 1,94, hence we clearly observe workload
overhead effects. When N*B is increased 5 times, we still observe differences between the
configurations, but the same (U+S) workload ratio is now decreased to 1,17. Hence, under
these conditions, we see a clear workload ratio dependency on N*B.
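The shrinking ratio is what one would expect if the per-configuration overhead is roughly independent of the functional load; a sketch with illustrative numbers (not measurements from the paper):

```c
/* Sketch: if the overhead workload w_ovh stays (roughly) fixed while
 * the functional workload w_func grows, the measured workload ratio
 * (w_func + w_ovh) / w_func approaches 1. */
double workload_ratio(double w_func, double w_ovh) {
    return (w_func + w_ovh) / w_func;
}
```

With a fixed overhead of, say, 10 workload units, the ratio is 2,0 at w_func = 10 but only 1,2 at w_func = 50 — the same qualitative trend as the measured decrease from 1,94 to 1,17.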
Next we want to investigate the effects on workload of varying the packet size, while keeping WL_FUNC approximately constant, Figure B.7. Since the component functional workload, as mentioned previously, is not ideally proportional to N*B, we have selected combinations of N and B that result in WL_FUNC = 10 ± 0.3% as a reference. We see again that dividing the processing into more components leads to increased processor
FIGURE B.5: The synthetic load on the system, in four different application configurations (W2: FTOT, SNK; W3: SRC, F1TO9, SNK; W5: SRC, F123, F456, F789, SNK; W11: SRC, F1-F9, SNK). In all configurations, the functional load is that of 9 FIR-filters, each with B x N multiplications and additions.
workload. We also see a clear processor workload dependency on B, with the workload generally increasing with increasing B.

4 Workload Assessment through Low-Complexity Analytical Models

4.1 A Simple Lower Bound Model

Model
This model is an optimistic one, and will serve as a lower bound on the workload in the SCA-based system. The model will tell us how much of the additional workload on the processor,
due to increased granularity, can be explained by accounting for the added workload due to
the CORBA-based communication, i.e. the CORBA client, data format conversions, ORB
activities, data transport, and CORBA servant.
In this model we will assume no loss due to context switches. We will assume that all
the functional code in the components, and all other activities on the processor, execute in sequence without any performance losses due to there being several processes competing for the resources, and we will assume that the processor goes into idle state when it has processed one data packet through the chain of components. Specifically, we here assume that separating the functional code into more components does not lead to an increased number of cache misses, and hence does not lead to a lower average memory fetch/store speed. In the model, we have chosen to separate out the data conversion to and from the 'FloatSequence' type that is used for the port data packet communication, as we found these conversions to be important workload contributors.
FIGURE B.6: User and system CPU workload in %, measured by SYSSTAT sar, for the four synthetic waveform application configurations, and compared to the standalone C implementation, FUNC. The packet size is 2000 floats, and N=10 for the 5 leftmost measurements, N=50 for the other 5 measurements. PR=40. (Annotated ratios: N=10: user 1.67, user+system 1.94; N=50: user 1.10, user+system 1.17.)

Under these conditions we may write the CPU workload in % for our synthetic application with M components (including SRC and SNK) as
WLi% = 100 · PR · [9·tCL + tSRC + tSNK + (M−1)·tpacket + (M−1)·tTS + (M−2)·tTF] / PCR    (B.1)

where tCL is the number of processor cycles to process one unit component load CL, tSRC is the number of processor cycles to process the functionality in the SRC component, tSNK is the number of processor cycles to process the functionality in the SNK component, tTS is the number of processor cycles to convert array-of-floats data to its FloatSequence representation, tTF is the number of processor cycles to convert the FloatSequence representation back to array-of-floats data, tpacket is the number of processor cycles to transfer one data packet between components, and PCR is the cycle rate of the processor. (The (M−2) is due to there not being any conversion from the FloatSequence type in the SNK component in our specific case.)
Estimation of Parameters

Although it is possible to calculate estimates of the above parameters based on the source code of our system and processor specifics, it is a lengthy exercise and it is also difficult to get good accuracy. For our purpose of describing the workload overheads in the system, it suffices to instead measure the specific parameters in (B.1), as follows:

• 9·tCL is measured, using OProfile, as a function of N and B, by running the functional code as a standalone C program
• tSRC, tSNK, tTS, tTF and tpacket: measured using both SYSSTAT sar and OProfile, with a test application consisting of merely one source and one sink component, see Figure B.8.
FIGURE B.7: User (U) and User+System (U+S) CPU workload in %, measured by SYSSTAT sar, as a function of packet size in # of floats, for the four synthetic waveform application configurations and with the standalone C program as a reference (FUNC). FUNC has been adjusted, by selecting combinations of B and N, to approximately 10% workload (see FUNC graph).
– tSRC, tSNK: measured using OProfile, with no conversion to/from FloatSequence taking place in the source and sink components, as a function of B, and excluding all ORB-related processor cycles
– tTS, tTF: measured with OProfile and sar, with a large number of conversions to FloatSequence taking place in the source component and/or a large number of conversions from FloatSequence taking place in the sink component
– tpacket: measured with OProfile as the sum of the processor cycles judged to be ORB-related; estimated with sar with no conversion to/from FloatSequence and assuming tSRC, tSNK ≈ 0.
FIGURE B.8: Test application (CORBATEST: CORBATSTSRC → CORBATSTSINK) for measurements of the parameters in equation (B.1)
Comparison with Measured WL

Figure B.9 shows a comparison of measured WL (user %) for the W11 configuration as a function of B (same data as in Figure B.7), and WLi%, when using the OProfile tSRC, tSNK, tTS, tTF and tpacket estimates from Table B.2, and setting M=11 in the model. As expected, WLi% underestimates the real workload, in particular for high B's, especially as we have not accounted for any context switching. Notably, though, WLi% predicts the major part of the observed WL overhead, particularly in the low-B region.
TABLE B.2: The parameters in model (B.1) measured with OProfile and SYSSTAT sar, using the test application of Figure B.8.

         Estimate based on        tSRC        tSNK        tTS(B)    tTF(B)    tpacket(B)
User     'sar' measurements       Assumed 0   Assumed 0   14.5·B    10.6·B    79200 + 5.5·B
User     OProfile measurements    650         200         14.6·B    10.6·B    80700 + 5.5·B
System   'sar' measurements       Assumed 0   Assumed 0   ≈0        ≈0        160000 + 23.6·B
System   OProfile measurements    ≈0          ≈0          ≈0        ≈0        150000 + 23.0·B
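The lower-bound model of equation (B.1) can be sketched numerically. The following is a minimal sketch, not the authors' code: the tSRC, tSNK, tTS, tTF and tpacket values are the user-mode OProfile estimates from Table B.2, while the processor cycle rate PCR (taken as 1.86 GHz, consistent with 5 µs ≈ 9300 cycles used later for tCSD) and the functional load t_func are illustrative assumptions.

```python
# Numerical sketch of the lower-bound workload model of equation (B.1).
# Per-packet cycle costs are the user-mode OProfile estimates from
# Table B.2; PCR and t_func are illustrative assumptions.

PCR = 1.86e9  # assumed processor cycle rate [cycles/s]

def wl_i_percent(M, B, PR, t_func):
    """Equation (B.1): lower-bound CPU workload in % for a chain of
    M components (including SRC and SNK) at packet rate PR [packets/s]."""
    t_src, t_snk = 650.0, 200.0        # cycles (Table B.2, OProfile, user)
    t_ts = 14.6 * B                    # array of floats -> FloatSequence
    t_tf = 10.6 * B                    # FloatSequence -> array of floats
    t_packet = 80700.0 + 5.5 * B       # per-packet transfer cost
    cycles = (t_func + t_src + t_snk
              + (M - 1) * t_packet + (M - 1) * t_ts + (M - 2) * t_tf)
    return 100.0 * PR * cycles / PCR

# Assume a functional load (9*t_CL) tuned to ~10% standalone at PR=40:
t_func = 0.10 * PCR / 40
for M in (2, 3, 5, 11):
    print(f"W{M}: {wl_i_percent(M, 2000, 40, t_func):.1f} %")
```

As in the measurements, the predicted workload grows both with the number of components M and with the packet size B.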
4.2 A Simple Model Including Context Switching
Model
A context switch refers to the switching of the CPU's execution from one process or thread to another.
Each SCA component will run as a process of its own. The switching of which process is the actively executing one is determined by the scheduling algorithm of the OS, in our case by the Linux 2.6.9 scheduler.
When a context switch occurs, this has a workload cost for the processor. We refer to the direct cost of the context switch as the cost directly associated with almost every context switch, such as the saving and restoring of processor registers, the execution of the OS scheduler, the reloading of TLB entries and the flushing of the processor pipeline [6]. The indirect cost is related to the cache sharing between processes [6], which may cause parts of the cache to be written back to lower-level cache or memory, and new values to be read into the cache, i.e. a 're-warming' of the caches for the process or thread being executed.
The costs due to context switching are an added workload on top of that of equation (B.1); hence we write the workload including context switching, WLCS%, as

WLCS% = WLi% + [CSR · (tCSD + tCSI) / PCR] · 100    (B.2)

where CSR is the rate of context switches, tCSD is the direct cost, in processor cycles, of a context switch, and tCSI is the indirect cost, in processor cycles, of the context switch.
Estimation of Parameters
FIGURE B.9: Comparison of measured WL (user %) for W11 (upper graph), and WLi% (lower graph), when using the OProfile tSRC, tSNK, tTS, tTF and tpacket estimates from Table B.2 when calculating WLi% as a function of the packet size. FUNC has been adjusted to approximately 10% for all B.
As our interest here is mainly that of verifying that (B.2) provides useful explanations of our problem, we will here only use coarse estimates of the parameters in (B.2):

• CSR is a function of both the scheduling algorithm and the processor load. For the comparison with the W11 measured data in Figure B.9, we here measure CSR directly, using 'vmstat' in Linux. We use a constant CSR of 1300 as a coarse average of the observed CSRs for the various B's in Figure B.9.

• tCSD is assumed to be a constant, and can be measured through methods described in the literature. In [6], tCSD is measured to be 3.8 µs for a dual 2.0 GHz Intel Xeon system. We will use 5 µs, or 9300 cycles, as a coarse estimate for our system.
• tCSI depends on the L1 and L2 caches of the processor. In a simplified view, if the sum of the data and instruction code areas being addressed is beyond the size of the L2 cache, the system will increasingly need to write data buffers to memory, and read data buffers and instruction code from memory, at context switches. If this is the case, we assume that we will need to store the changed data of the past process to memory, and then read in the data and instructions of the next process that is to execute. We make a coarse estimate of tCSI for the W11 case as a fraction c multiplied by the cycle count for writing one component data buffer of size B floats to memory, reading two buffers of size B from memory and reading 10 kBytes of instructions, for the case where the sum of the buffers and code of all 11 components exceeds the L2 size.
The blue line in Figure B.10 is WLCS% for W11, under the same general conditions as in Figure B.9, when accounting only for CSR and tCSD. We see that this provides better agreement with the measured data for W11 than using only the simple lower bound model, in particular for the left part of the curve (low B).

When using a tCSI based on c=1 (full buffers written to and read from memory at each context switch), we vastly overestimate the processor workload. When setting c=0.02 we get the dashed black curve in Figure B.10, which, given our many assumptions, should be regarded as an example of CS model output only. Interestingly, however, we see that exhaustion of the L2 cache can explain the deviation of the measured data from the Simple Lower Bound model for high B's.
FIGURE B.10: Comparison of measured WL (user %) for W11 (red diamond graph), and WLCS% accounting for direct cost only (blue solid line), with the same conditions as in Figure B.9. The black dashed graph is an example of a WLCS% graph including both direct and indirect CS cost under certain assumptions; see the specific conditions in the text.
5 Conclusions
We have used empirical analysis and simple analytical models to understand the effects of component granularity in an SCA-based system, when the granularity is such that several components are deployed on the same CORBA-capable processor. For the empirical analysis, we have used the OSSIE CF from VirginiaTech, and omniORB. We have used implementations, with a varying number of components, of a real TX waveform processing chain, the Stanag 4285 TX, and we have also used a synthetic waveform where we have been able to vary both the component workloads and the data packet sizes.
When executing the same total functional processing work, but with a varying number of SCA-based components, we observe that the processor workload increases as the number of components increases, and increasingly so for decreasing total functional processing work as well as for increasing data packet sizes. Hence the scalability and reusability benefits that result from implementing the SDR application with a high number of components must be balanced against the processing efficiency loss that occurs when having to run several components on the same processor.
We have proposed two simple models that explain the major effects of the processor workload overheads in an SCA-based system, and that, with proper determination of their parameters, may be used to predict actual workload in a system.
6 Acknowledgements
The first author would like to thank Torleiv Maseng at FFI (the Norwegian Defence Research
Establishment) for his encouragement and for valuable inputs and advice, and would also
like to thank Sarvpreet Singh at FGAN for his inputs on how to design the packet rate
regulator.
References
[1] P. J. Fortier and H. E. Michel, Computer Systems Performance Evaluation and Prediction. Amsterdam: Digital Press, 2003.
[2] VirginiaTech. OSSIE development site for software-defined radio. (Dec. 2007).
[Online]. Available: http://ossie.wireless.vt.edu/trac
[3] omniORB. (Feb. 2008). [Online]. Available: http://omniorb.sourceforge.net
[4] OProfile - A System Profiler for Linux. (Feb. 2008). [Online]. Available:
http://oprofile.sourceforge.net
[5] SYSSTAT. (Feb. 2008). [Online]. Available: http://pagesperso-orange.fr/sebastien.godard/
[6] C. Li, C. Ding, and K. Shen, “Quantifying the cost of context switch,” in ExpCS ’07:
Proceedings of the 2007 workshop on Experimental computer science, June 2007.
Paper C
Dynamic Frequency Broker and Cognitive Radio
Torleiv Maseng and Tore Ulversøy
Presented at
The IET Seminar on Cognitive Radio and Software Defined Radios: Technologies and
Techniques
London, UK
Sept. 18, 2008
Published online at http://ieeexplore.ieee.org
Summary of this candidate’s contribution to this paper:
This candidate has contributed the major parts of
Section 1 Introduction
Section 2.1 Traditional Frequency Allocation and Assignment
Section 2.2 Cognitive Radio
Section 2.3 IEEE 802.22
and has contributed parts of
Section 2.4 Coordinated Dynamic Spectrum Access
Section 3 The Dynamic Frequency Broker.
The concept idea is by T. Maseng.
Abstract
Traditional frequency allocation and assignment are relatively slow processes that are poorly suited to satisfying and exploiting the dynamic spectrum needs of various communication and broadcast systems. This leads to poor average utilization of the spectrum, yet the demand for assigned spectrum in some bands and locations exceeds availability. Cognitive Radio (CR) attempts to increase spectrum utilization by sensing and reusing available spectrum bands; e.g. in the 802.22 standard the CR system attempts to reuse spectrum bands allocated to TV transmitters. A fundamental problem, however, is that even though a CR does not detect any transmitter in a specific band, e.g. due to how the CR is located in the terrain, it may still create disturbance for a receiver, commonly known as the hidden node problem. To allow improved utilization of the spectrum resources, while avoiding that this improvement comes at the expense of interfering with hidden nodes, we describe the Dynamic Frequency Broker (DFB) system. The DFB acts as a local computerized frequency coordination authority. It keeps a complete list of frequency assignments within an area and maintains an updated terrain propagation path loss model of its area. It assigns frequencies on a temporary basis based upon transmitted signal power, spectral density, receive and transmit antenna properties and the required received signal-to-interference level. It receives, by Internet and radio, information about the active transmitters, and revokes permissions to transmit if they are idle over a certain period.
© 2008 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained
for all other uses, in any current or future media, including reprinting/republishing this material for
advertising or promotional purposes, creating new collective works, for resale or redistribution to
servers or lists, or reuse of any copyrighted component of this work in other works.
Link to Paper: Dynamic Frequency Broker and Cognitive Radio
1 Introduction
The increased demand for wireless communications has caused scarcity of available spectrum in some frequency bands and geographical regions. At the same time, studies show that the actual utilization of the same frequency bands, measured over frequency and time, on average tends to be low. [1] refers to a study for frequencies below 3 GHz and concludes that on average only about 5.2% of the spectrum in the US is actually in use in any given location and at any given time. Traditional frequency allocation and assignment typically assign spectrum bands for long periods of time (e.g. years), and hence do not adapt to fast- and medium-term dynamics and geographical variations of spectrum demands.
Cognitive Radio (CR) has been proposed as a means to increase the utilization of the
spectrum. The CR may sense its environment, adapt its communication properties and spectrum usage, and learn from its decisions.
A fundamental problem, however, is that even if the CR, using its own sensing or distributed sensing employing other CRs, detects a certain part of the spectrum as unused, its transmission may still create disturbance for some receivers, depending on the geographical locations of the transmitters, the receivers and the CRs.
In this paper, as a proposal for how to deal with the mentioned problem areas, we outline
the Dynamic Frequency Broker (DFB) system.
2 Background

2.1 Traditional Frequency Allocation and Assignment
Traditional frequency assignment is often referred to as ”Command and Control Frequency
Assignment”. An assignment (of a radio frequency or a radio frequency channel) is defined
as an ”Authorization given by an administration for a radio station to use a radio frequency
or radio frequency channel under specific conditions” [2]. Each country will have one authority, or alternatively several coordinating authorities, for assigning frequencies.
The authority determines the availability of the desired frequency or spectrum sub-band, checks the consistency with regional and international agreements, and verifies that the assignment will have a low probability of excessive interference with other users. The frequencies or spectrum sub-bands are typically assigned for a significant amount of time. As an example, in Norway this varies from 5 to 20 years for some services, or the assignment applies as long as the frequencies are needed or until the authority determines that they are no longer available for the particular user or communications provider [3].
In border areas, or with long-ranging communication, the authority will additionally verify that there will not be interference with neighbour-country systems, through inter-authority coordination. Guidelines for such frequency coordination at border areas exist; as an example see [4].
Frequency assignments in particular shall not create harmful interference for assignments in neighbour states that are in accordance with ”The Table of Frequency Allocations”
as published in [2] by the International Telecommunication Union (ITU) and incorporating
the decisions from the World Radiocommunication Conferences. An allocation (of a frequency band) is defined as an entry in this table [2]. The table lists frequency intervals and
allocated services in these intervals in each of ITU's geographical regions.
The traditional type of frequency assignment is beneficial for the licensed spectrum users
in that it provides a high availability of their assigned spectrum resources and a low level of
interference.
Some frequency management authorities have to some extent responded to the higher
degree of dynamics of telecommunication, e.g. by providing spectrum licenses instead of
more specific system licenses, by allocating unlicensed bands where several user groups
may coexist, by having spectrum auctions, and by allowing sub-leasing of spectrum.
The overall picture of frequency assignment is still a very static one: assignments are long-term and in accordance with allocations that are very long-term. Spectrum is not released when not being used for short- or medium-term periods. Also, since the assignment process involves manual administration and decision processes, the applicant often experiences a delay in obtaining the specific frequency assignment.
2.2 Cognitive Radio
A CR has the ability to sense and to some extent reason about its environment and adapt its
communication properties such as protocols, modulation, transmission power and spectrum
usage to its context, where the context includes the spectral environment it observes. A CR
may also learn from the results of its adaptations. CRs may have different levels of cognitive
capabilities; e.g. [5] defines cognition tasks in terms of nine levels of capability. CRs may also have different levels of adaptability of their communication properties. The term Cognitive Radio is attributed to Joseph Mitola III, and is described in [6].
CRs are seen as a means to increase spectral utilization, in that CRs may detect, or
collect information about, unused or underutilized spectrum and may use this spectrum on
an ad hoc basis, possibly co-existing with other services in the same spectrum band.
A main CR technical problem is that of hidden nodes: even if a CR, here termed A, detects that a certain portion of the spectrum is available, some radio station B may still receive interference from A. This may, e.g., be due to B being undetectable because of its transmission level, or due to B being a receiver with a clear transmission path to its transmitter companion C, where C is not detectable from A. The hidden node problem may be somewhat compensated for, though not eliminated completely, by having additional spectral sensing equipment at elevated positions, or by exchanging spectral sensing information in a network of CRs, also known as distributed spectrum sensing [7].
Significant reasoning capability is required by the CR for making intelligent autonomous
decisions in locations where the CR coexists with legacy systems and networks of other CRs,
and where at the same time the topography is complicated, implying a high probability of hidden nodes.
2.3 IEEE 802.22
The IEEE 802.22 is referred to as the first wireless standard based on CR [1, 8]. The IEEE
802.22 Working Group develops Physical (PHY) and Medium Access Control (MAC) layers
for a CR-based Wireless Regional Area Network (WRAN). These 802.22 devices are to coexist, as unlicensed units, in spectrum bands allocated to a broadcasting (television) service,
wireless microphones and some other Private Land and Commercial Mobile Radio Services
(PLMRS/CMRS). The 802.22 devices form point-to-multipoint networks, where one base
station (BS) manages associated Consumer Premise Equipment (CPEs).
The 802.22 system takes great care to avoid disturbing other licensed users, achieved through e.g. the combination of measurements at both the BS and the CPEs to obtain reliable spectrum occupancy figures. However, there is still a small risk of disturbing undetected hidden nodes, e.g. wireless microphones.
2.4 Coordinated Dynamic Spectrum Access
As an alternative to the CR's largely uncoordinated, opportunistic approach to spectrum sharing, several approaches for Coordinated Dynamic Spectrum Access (CDSA) have been proposed. These have similarities to the DFB concept proposed in this paper.
In [9] a protocol named Dynamic Spectrum Access Protocol (DSAP) is proposed, based upon time-bound frequency leases, and commands for requesting frequencies, discovering possibilities and offering frequencies are suggested. The DSAP commands are contained in messages that are interchanged between DSAP clients and DSAP servers, possibly through DSAP relays for range extension. These ideas are implemented in an experimental network using WLAN, in which all terminals have an IP access link with MAC addresses as unique identifiers. While DSAP is targeted at spectrum management in networks in limited geographic areas and with the possibility of short-term leases, it may operate with, and be part of, a larger spectrum management system.
In [10] a position statement for the concept Coordinated Dynamic Spectrum Access Networks is presented. The frequency spectrum is managed by a Spectrum Broker that grants time-bound leases of spectrum within a Coordinated Access Band (CAB) to individual networks or users in a geographical region. The proposed implementation architecture for statistically multiplexed coordinated access to the CAB is named DIMSUMnet. The paper contains an interesting discussion on how to release spectrum already allocated to an operator, in which cost, sticky allocations, access fairness and user needs are key words.
In [11] algorithms for spectrum allocation are presented. These are based upon maximizing the service provided while considering the mutual interference between the terminals.
The European Union has taken an initiative to define a new spectrum policy, WAPECS (Wireless Access Policy for Electronic Communications Services), to enable the flexible implementation of new concepts like CDSA. In the following we will describe our CDSA approach.
3 The Dynamic Frequency Broker

3.1 Description
The Dynamic Frequency Broker (DFB) is a computer-automated service which is responsible for assigning frequencies to radio nodes within its geographical area. In order to take into account consequences for radio nodes in other geographical areas, and consequences in its own area due to outside radio nodes, DFBs coordinate their assignments by forwarding and dealing with frequency requests as needed. The DFBs are organized in a hierarchy [10], with the national frequency regulatory authority and finally the ITU at the top. Hence the assignments issued by a DFB must be coordinated with other DFBs at the same level, and any conflicts shall be resolved by pushing the request one level higher in the hierarchy. Assignments must be in accordance with policies that are enforced from the top-level DFB.
FIGURE C.1: The DFBs form a hierarchy of nodes, nationally in Norway, but also internationally.
A radio node in this context has a wide definition, ranging all the way from end-user radio equipment, through base stations and broadcast transmitters, to any high-level aggregation of related radio devices which acts as an administrator of those devices. Each radio node will need a way to identify itself to the DFB, e.g. by an ID number.
The network of DFBs forms a distributed computer system, which responds in a coordinated manner to frequency requests. The frequency requests, and corresponding responses back to radio nodes, are proposed to be communicated through the Internet in the cases where the nodes have a direct Internet connection, and through dedicated radio frequencies using DFB slave stations that relay the messages into corresponding Internet communication. A radio node that needs a transmit frequency or range of frequencies to operate in shall issue a request to the DFB. The request, or ChannelRequest [9], will need to contain all the information that the DFB needs in order to deal with the request, including some kind of radio node identity information, the desired frequency of operation, modulation and bit rate capabilities, and all other parameters needed in order to evaluate interference with other systems. Based on this information, together with the accumulated information in the DFB's database of surrounding receivers and their susceptibility to interference, and together with any responses from neighbour DFBs, the DFB will evaluate the request based on normal frequency coordination procedures and using a terrain propagation model or Radio Map [9]. The propagation model should be stochastic, enabling interference assessments at defined confidence levels. The DFB will respond with the request granted or rejected, or it will respond with a modified permission, e.g. a narrower portion of the spectrum than requested. The permissions are given as leases [9, 10], i.e. they expire after a defined time or when they are reported unused by the radio nodes.
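As an illustration of the kind of information a ChannelRequest might carry, the following sketch builds a hypothetical XML payload. All element names and the example values are our own assumptions; the paper does not define a concrete message schema.

```python
# Hypothetical sketch of a ChannelRequest payload for the proposed DFB
# Web Service interface. Element names and values are illustrative only.
import xml.etree.ElementTree as ET

def build_channel_request(node_id, f_center_hz, bw_hz, eirp_dbm, lease_s):
    req = ET.Element("ChannelRequest")
    ET.SubElement(req, "NodeId").text = node_id             # node identity
    ET.SubElement(req, "CenterFrequencyHz").text = str(f_center_hz)
    ET.SubElement(req, "BandwidthHz").text = str(bw_hz)
    ET.SubElement(req, "EirpDbm").text = str(eirp_dbm)      # tx power
    ET.SubElement(req, "LeaseSeconds").text = str(lease_s)  # lease period
    return ET.tostring(req, encoding="unicode")

xml_msg = build_channel_request("node-0042", 225_000_000, 25_000, 30, 3600)
print(xml_msg)
```

A real request would also carry modulation and antenna parameters, as listed above, so that the DFB can evaluate interference with other systems.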
The DFB shall have the possibility of polling radio nodes within its geographical control
area. The radio nodes shall then report their measured interference levels, which the DFB
may use as learning input to correct its propagation models for the area, and also the DFB
FIGURE C.2: An example DFB scenario. Line colours: green: fixed-line Internet access (continuous); red: Internet access via radio service channels; black: prior service negotiation via the Internet.
may use the measurements to detect unauthorized interference. For the same two purposes
the DFB may also have its own spectrum monitoring stations within its geographical control
area.
The service interface of the DFB is proposed to be implemented as a Web Service, using Extensible Markup Language (XML)-based messages over SOAP (a message protocol) and the Hypertext Transfer Protocol (HTTP). Although this will make the messages longer than with many other technologies, the main advantage of Web Services over HTTP is that they simplify communication through firewalls. Web Services will also allow a radio node to easily discover its nearest DFB and get a description of its service from a registry, e.g. from a Universal Description, Discovery and Integration (UDDI) service. Furthermore, the loose coupling of Web Services means that DFBs can easily be added and removed without extensive reconfiguration of the rest of the system.
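The proposed XML-over-SOAP/HTTP transport can be sketched as follows; this is a minimal, hand-rolled SOAP 1.1 envelope for illustration only, whereas a real stack would generate the messaging code from the service's WSDL description.

```python
# Hedged sketch: wrapping a DFB payload in a minimal SOAP 1.1 envelope.
# Namespace handling is simplified; the payload element is illustrative.
SOAP_NS = "http://schemas.xmlsoap.org/soap/envelope/"

def soap_envelope(body_xml):
    return (f'<soap:Envelope xmlns:soap="{SOAP_NS}">'
            f'<soap:Body>{body_xml}</soap:Body>'
            f'</soap:Envelope>')

print(soap_envelope("<ChannelRequest/>"))
```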
Correspondingly, the polling interface of the radio nodes can also be implemented using Web Services. Since Web Service interfaces are described through a WSDL (Web Service Description Language) file, implementers of radio nodes only need to download the corresponding WSDL file and generate the Web Service interface from this description. This approach both simplifies the development of the radio nodes and ensures standardized functionality of the polling interface.
The DFB assumes that a radio node has a certain capability of online communication with the DFB, in order to be able to request renewal of spectrum leases and in order to correctly respond to requests and responses from the DFB. For units lacking this online communication capability, the user who is responsible for the node will need to perform the spectrum lease request manually. Examples of this may be a wireless microphone, a TV set (not connected to a media computer) or a cordless baby monitor consisting of an inexpensive receiver and transmitter connected to a microphone. When these services are needed, the user must make a request for frequencies. He will need to provide information to the DFB such as: the hardware capabilities of the electronics (power, receiver noise figure, antenna details, frequency range, etc.), the time period that the service is needed for (the lease period [9]) and the
C. DYNAMIC FREQUENCY BROKER AND COGNITIVE RADIO
position of the receiver and transmitter. It is practical to provide a software client to be run on the computer making the request; this software client is tailored to the electronics and sold with it. When a frequency assignment is issued by the DFB, it must be implemented on the electronics. This may be done automatically by briefly connecting the electronics to the computer or to the Internet, rather than through an operator-assisted procedure.
Since the hierarchy of DFBs will have a full overview of frequency leases at any one time, it may also form part of an infrastructure for invoicing spectrum use. Having payments for frequency leases will create an incentive for handing back unused frequencies, and thereby contribute to improved actual utilization of the spectrum.
3.2 Analogy with address shortage of IP version 4
The DFB approach is analogous to, and has some of its inspiration from, the solutions for the address shortage in Internet Protocol (IP) version 4. For example, in the same way that a local Dynamic Host Configuration Protocol (DHCP) server issues local IP addresses to be used in a local domain only, the DFB will issue frequencies to be used in a local geographical region only. The DHCP server issues the addresses as time-limited leases that need to be renewed, and the DFB likewise issues time-limited leases of frequencies [9].
Similarly, just as an Internet user may receive a session-dependent dynamic IP address from his Internet Service Provider (ISP), which applies for that particular session only, and will probably get a new address for the next session, the DFB will respond with a particular frequency or spectrum band as a time-limited lease, and the radio node may get the same one or a different one at the next requested frequency lease. Once a new frequency is assigned because the old lease expired, the DFB will, upon request from a receiver node, inform the node about the new assignment based upon the radio node's ID number, similar to the Domain Name System (DNS).
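The DHCP-like lease behaviour described above can be sketched as follows. The class, its assignment policy and all names are illustrative assumptions for the analogy, not a specification of the DFB:

```python
# Illustrative sketch of the DHCP analogy: a DFB hands out time-limited
# frequency leases that must be renewed, expired leases return to the
# pool, and a DNS-like lookup maps a node ID to its current frequency.
# All names and the lease policy are assumptions for illustration only.

class DynamicFrequencyBroker:
    def __init__(self, frequencies_hz, lease_seconds):
        self.free = list(frequencies_hz)   # frequencies available in this region
        self.leases = {}                   # node_id -> (frequency, expiry time)
        self.lease_seconds = lease_seconds

    def request_lease(self, node_id, now):
        """Issue a time-limited frequency lease, like a DHCP address lease."""
        self._expire(now)
        if node_id in self.leases:          # renewal keeps the same frequency
            freq, _ = self.leases[node_id]
        else:
            freq = self.free.pop(0)         # any free frequency may be assigned
        self.leases[node_id] = (freq, now + self.lease_seconds)
        return freq

    def lookup(self, node_id, now):
        """DNS-like lookup: on which frequency is this node transmitting now?"""
        self._expire(now)
        lease = self.leases.get(node_id)
        return lease[0] if lease else None

    def _expire(self, now):
        for node_id, (freq, expiry) in list(self.leases.items()):
            if expiry <= now:               # expired leases return to the pool
                del self.leases[node_id]
                self.free.append(freq)
```

For example, a node that obtains a lease and then fails to renew it will, after expiry, no longer be resolvable by a receiver's lookup, mirroring a lapsed DHCP lease.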
3.3 DFB in relation to CR
The DFB concept will benefit from the radio nodes having CR features, such as sensing, reasoning and adaptive capabilities. As an example, if the node the DFB receives a request from is a transmitter, it will be advantageous if the node knows how much spectrum it desires, how much it needs as a minimum, and its preferences as to what part of the spectrum it can use. The node will also need to be able to interpret and adapt to the DFB's responses.
The DFB, however, offloads the radio nodes from the burden of autonomously deciding which spectrum bands to utilize; hence the radio nodes may be less complex than full CRs. Radio nodes will need the additional functionality required to send Web Service requests to, and receive responses from, the DFB.
When using CR to increase spectrum utilization, changes in frequency allocation policies
need to be propagated all the way to the radio nodes, in order for the nodes to make the
correct autonomous decisions in accordance with these policies. With the DFB approach,
the policies need only be updated at the top level of the DFB hierarchy, and will subsequently
propagate down into the whole system of DFBs.
3.4 DFB Challenges
The improvement of spectrum utilization achieved by the DFB relies on radio nodes asking only for the spectrum that they need, only in the geographical regions where they need it, and only for the time that they need it, and on them handing back permissions whenever these are not needed; additionally, the DFB may get information on unused permissions from spontaneous or requested spectrum measurements from other radio nodes. The level of improved utilization that can be achieved depends on the adherence to the above and on the granularity of the lease time. Obviously, the DFB concept does not allow utilization of unused frequencies that are within a lease period where the frequencies have not been handed back to the DFB or where the permissions have not been detected as unused, i.e. the most rapid fluctuations in spectrum use are not exploited.
In addition to the legitimate users of the spectrum, and the interference that the DFB can predict from its propagation models, there will be additional sources of interference. Examples of such sources are unauthorized users of the spectrum, man-made radio-frequency noise, natural noise and interference from distant users due to particular propagation conditions. The DFB may receive input on such interference through spontaneous reports from radio nodes, by polling radio nodes and by polling its own spectrum monitoring stations. The DFB may subsequently carry out corrective actions in the form of marking frequencies as temporarily blocked and moving affected radio nodes to other, undisturbed parts of the spectrum. Such conditions and corrective actions will, however, be very challenging for the DFB's reasoning capability and its ability to reschedule the usage of the spectrum. It is expected that complicated cases may require manual intervention and be reported upwards in the hierarchy, where actions may be taken to identify and locate the interference, just like today.
The DFB has some obvious security-related challenges, analogous to those of other types of distributed network systems. We will not discuss solutions to these challenges here, but only point out their existence and note that they need to be addressed. Some of these challenges are:
• computer attacks on the DFB by overwhelming it with requests (denial-of-service
attack)
• attacks draining the DFB of available spectrum
• radio node ID and authentication issues
• even though the DFB will provide no more data than necessary to a radio node upon request, systematic computer-generated requests may reveal spectral and radio node ID information which may be used to mount an attack on a specific radio service.
The DFB also has obvious implementation-related challenges, e.g. the determination of the optimum lease period and the amount of computational capacity required at the DFBs. We have chosen to keep the description at an overview level; the system has not been prototyped, and we have not attempted to describe solutions to the detailed implementation issues.
4 Conclusions
We have benefited from the ideas of CR and the work of IEEE 802.22, and have tried to generalize the frequency allocation procedure to be useful for any radio service which needs a transmit frequency. We hope that these ideas may prove useful for future standardization efforts to make frequency bands available for new and exciting services, and to make transmit terminals more civilized and empathic, while respecting that everybody has a right to speak as long as they do not bother others.
References
[1] C. Cordeiro, K. Challapali, D. Birru, and S. Shankar, “IEEE 802.22: The first worldwide wireless standard based on cognitive radios,” in First IEEE International Symposium on New Frontiers in Dynamic Spectrum Access Networks, 2005. DySPAN 2005.,
pp. 328–337, Nov. 2005.
[2] International Telecommunication Union, “Radio Regulations, Edition of 1998,” Tech.
Rep., 1998.
[3] Norwegian Post and Telecommunications Authority. Post- og teletilsynet. (Oct. 2006).
[Online]. Available: http://www.npt.no
[4] The Asia-Pacific Telecommunity, “APT Recommendation on Guidelines for the Frequency Coordination for the Terrestrial Services at the Border Areas between Administrations,” Tech. Rep., Feb. 2007.
[5] J. Mitola III, “Cognitive Radio An Integrated Agent Architecture for Software Defined
Radio,” Doctoral dissertation, Royal Institute of Technology (KTH), SE-164 40 Kista,
Sweden, May 2000.
[6] J. Mitola III and G. Q. Maguire, Jr., “Cognitive radio: Making software radios more
personal,” IEEE Personal Communications, vol. 6, no. 4, pp. 13–18, Aug. 1999.
[7] Q. Peng, K. Zeng, J. Wang, and S. Li, “A distributed spectrum sensing scheme based on
credibility and evidence theory in cognitive radio context,” in IEEE 17th International
Symposium on Personal, Indoor and Mobile Radio Communications, 2006, pp. 1–5,
Sep. 2006.
[8] C. Cordeiro, K. Challapali, D. Birru, and S. Shankar, “IEEE 802.22: An Introduction to
the First Wireless Standard based on Cognitive Radios,” Journal of Communications,
vol. 1, no. 1, pp. 38–47, Apr. 2006.
[9] V. Brik, E. Rozner, S. Banerjee, and P. Bahl, “DSAP: A protocol for coordinated spectrum access,” in First IEEE International Symposium on New Frontiers in Dynamic
Spectrum Access Networks, 2005. DySPAN 2005., pp. 611–614, Nov. 2005.
[10] M. Buddhikot, P. Kolodzy, S. Miller, K. Ryan, and J. Evans, “DIMSUMnet: New
directions in wireless networking using coordinated dynamic spectrum,” in World of
Wireless Mobile and Multimedia Networks, 2005. WoWMoM 2005., pp. 78–85, June
2005.
[11] A. P. Subramanian, H. Gupta, S. R. Das, and M. M. Buddhikot, “Fast spectrum allocation in coordinated dynamic spectrum access based cellular networks,” in 2nd IEEE
International Symposium on New Frontiers in Dynamic Spectrum Access Networks,
2007. DySPAN 2007., pp. 320–330, Apr. 2007.
Paper D
On Spectrum Sharing in Autonomous and Coordinated Dynamic Spectrum Access
Systems: A Case Study
Tore Ulversøy, Torleiv Maseng and Jørn Kårstad
Presented at
Wireless VITAE ’09
Ålborg, Denmark
May 17-20, 2009
Published online at http://ieeexplore.ieee.org
Summary of the candidate’s contribution to this paper:
The candidate has contributed the major parts (> 95%) of the article text. The candidate has also contributed the conceptual ideas for how to avoid the starvation of links, and has developed the calculation model and programmed it in Matlab. Further, the candidate has contributed the graphs/figures except the radio deployment ones. (The radio deployment figures (D.3, D.7 and D.11) and the FEFAS static radio link calculations (hij-parameters) have been made by J. Kårstad.)
Abstract
As an introduction, different Dynamic Spectrum Access architectural concepts are reviewed and three idealized simplifications are described: the autonomous, distributed and centralized architectural models. Based on an information-theoretic n-link interference model, the Global Optimum (GO) and Competitive Optimum (CO) (selfish) power density assignments are discussed. A detailed case study is conducted for a numerical example with 2 links and 2 spectrum segments, comparing the GO and CO solutions for a low interference, high interference and unsymmetrical interference deployment. It is shown that for conflicting links, with high symmetrical or unsymmetrical interference, the CO solution may become significantly worse than the GO one. We relate these results to the autonomous, distributed and centralized architectural models, and make suggestions for practical policies, information exchange and how to improve the CO solution in the direction of the GO one.
© 2009 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.
DOI: 10.1109/WIRELESSVITAE.2009.5172523
1 Introduction
As frequently pointed out by many authors, the present 'static' long-term spectrum assignments by regulatory authorities lead to low efficiency in the utilization of the spectrum. While most of the spectrum is assigned to users, measurements show that many frequency bands 'are not in use or are only used part of the time' [1]. At the same time, the demand for communication capacity, in particular for mobile two-way data communication, continues to grow. This is also true in the military domain, where capacity demand is growing, for example, for full-motion video from Unmanned Aerial Vehicles (UAVs) and for high-capacity ad-hoc networks in network-centric operations.
A more dynamic approach to spectrum access has been proposed as a solution to the above-mentioned issues, and a number of such Dynamic Spectrum Access (DSA) approaches have been suggested, with Cognitive Radio (CR) [2, 3] being the approach that is attracting the most attention.
In this article, we provide a taxonomy of the various DSA architectural concepts. We then discuss optimum information capacity based on a simple n-link interference model. In a simple example with two radio links and two frequency segments, we analyze in detail how various interference and background noise cases will be handled, and analyze the difference between a centrally computed Global Optimum (GO) solution and a selfishly computed Competitive Optimum (CO) solution. These results are then related to the idealized autonomous, distributed and centralized architectural models.
2 Background: DSA Concepts
2.1 DSA Architectures
A classification of DSA concepts according to fundamentals of their architectural principles
is provided in Fig. D.1.
In the 'Autonomous' approach, commonly referred to as Opportunistic Spectrum Access, a CR link or a group of CRs forming a network autonomously decides on spectrum use, based on spectral sensing and restricted and governed by predefined policies. The CRs do not interact with other groups of CRs, other than by adjusting their own use of spectrum in response to the sensed information on the other CR groups' use of spectrum. Note that virtually all CR concepts still require a coordination channel between receivers and transmitters, and between groups of CRs that are to form a network, such that spectrum decisions can be executed.
FIGURE D.1: A classification of DSA architectural concepts
D. ON SPECTRUM SHARING IN AUTONOMOUS AND COORDINATED DYNAMIC SPECTRUM ACCESS SYSTEMS: A CASE STUDY
In coordinated concepts, a coordination functionality allows spectrum decisions to be de-conflicted with other users of the spectrum. We have chosen to logically separate the coordinated concepts into ones requiring infrastructure for the coordination functionality, and ones not depending on infrastructure. Note that we distinguish between requiring or not requiring infrastructure for the coordination functionality itself, and that in both cases communication infrastructure may be used.
In infrastructure-dependent concepts, the coordination takes the form of spectrum servers [4, 5] or frequency brokers [6], from which the radio nodes obtain their spectrum leases: permissions to use a part of the spectrum for some time period. These approaches are often referred to as centralized coordination, since, as viewed from each radio node, it needs to ask or negotiate with a central entity for permission to use the spectrum. The centralized term can be misleading, however, as practical implementations need to have a distributed system of such servers.
Infrastructure-independent concepts include ones where nearby groups of CRs, whose border nodes have physical-layer communication with each other, negotiate and coordinate their spectrum use [7]. Other alternatives are ones where the physical medium itself is used as a means of coordination, e.g. by reserving a frequency band that has pilot tones mapping to the payload spectrum bands [8]. When making basic assumptions about data network connectivity for the coordination traffic, a spectrum broker functionality may also be realized as a peer-to-peer network, with the peer broker functionality residing in the CRs themselves.
To serve as characteristic models for discussing the results in the last part of this paper, and as viewed from a radio node or logical group of radio nodes, we formulate from the above the following three idealized simplifications:
• The autonomous model: There is no coordination between nodes; each node is only aware of its sensed spectrum and its transmitted power density.
• The distributed model: In addition to its awareness of sensed spectrum and transmitted power density, each radio node coordinates with nearby peer nodes. The coordination will possibly be imperfect.
• The centralized model: Each radio node coordinates with a central entity which has ideally updated and global knowledge and makes ideal spectrum decisions.
2.2 Spectrum Sharing Principles
Spectrum sharing principles can be broadly categorized [9] under three models. In the Hierarchical Access Model, primary users are spectrum owners and have prioritized rights, while secondary users may use the spectrum as long as they do not interfere with, or degrade below a certain level, the primary user communication. The Dynamic Exclusive Use Model deals with property rights or dynamic provision of larger chunks of spectrum. In this paper the Open Sharing Model is assumed, in which the spectrum is shared between equal-rights peer users [9].
3 Near-Optimum Spectrum Sharing Study
3.1 Calculation model
FIGURE D.2: An n-link Interference Model
We base the analysis on a model with n transmitters and n receivers, where neither transmitters nor receivers cooperate [10]. The system may be viewed as n interfering radio links,
see Fig. D.2.
The spectrum band of width B centered on f0 is considered for spectrum sharing. This band is divided into M segments, each with centre frequency
\[
f_k = f_0 + \Big(k - \frac{M-1}{2}\Big)\,\Delta f \qquad \text{(D.1)}
\]
where
\[
\Delta f = \frac{B}{M} \qquad \text{(D.2)}
\]
The transfer function from transmitter i to receiver j, in each segment k, is denoted Hij(fk), see Fig. D.2. The power spectrum of transmitter i, Si(fk), is assumed constant over each segment, and is subject to the restriction
\[
2 \cdot \Delta f \cdot \sum_{k=0}^{M-1} S_i(f_k) \le P_i \qquad \text{(D.3)}
\]
where we have accounted for positive and negative frequencies. We also assume that over each segment the noise spectrum at each receiver, SNi(fk), can be regarded as constant, and that there is no interference cancellation in the receivers. Ri may then, deriving from [3, 10, 11], be written as
\[
R_i \approx \Delta f \cdot \sum_{k=0}^{M-1} \log_2\!\left(1 + \frac{S_i(f_k)}{N_i(f_k) + \sum_{j \ne i} \alpha_j S_j(f_k)}\right) \qquad \text{(D.4)}
\]
where
\[
\alpha_j(f_k) = \frac{\Gamma \cdot |H_{ji}(f_k)|^2}{|H_{ii}(f_k)|^2} \qquad \text{(D.5)}
\]
and
\[
N_i(f_k) = \frac{\Gamma \cdot S_{N_i}(f_k)}{|H_{ii}(f_k)|^2} \qquad \text{(D.6)}
\]
where Γ is the SNR gap between a practical coding and modulation scheme and the theoretical channel capacity. When M → ∞,
\[
R_i = \int_{f_0 - B/2}^{f_0 + B/2} \log_2\!\left(1 + \frac{S_i(f)}{N_i(f) + \sum_{j \ne i} \alpha_j S_j(f)}\right) df \qquad \text{(D.7)}
\]
where
\[
2 \cdot \int_{f_0 - B/2}^{f_0 + B/2} S_i(f)\, df \le P_i \qquad \text{(D.8)}
\]
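The discrete rate expression (D.4), with the transmitter-referenced quantities (D.5) and (D.6), can be evaluated numerically as in the sketch below. Python is used here for illustration (the paper's own computations were done in Matlab), and the function and parameter names are our own assumptions:

```python
# Illustrative numerical evaluation of the link rate per (D.4)-(D.6).
# All names are our own; indexing convention: H[j][i][k] holds H_ji(f_k),
# the transfer function from transmitter j to receiver i in segment k.
import math

def link_rate(i, S, H, SN, delta_f, gamma):
    """Approximate rate R_i of link i per (D.4).

    S[j][k]  : power spectral density of transmitter j in segment k (W/Hz)
    SN[i][k] : receiver noise spectral density S_Ni(f_k) (W/Hz)
    gamma    : SNR gap Gamma (linear, not dB)
    """
    n = len(S)
    M = len(S[i])
    rate = 0.0
    for k in range(M):
        h_ii2 = abs(H[i][i][k]) ** 2
        # (D.6): noise referred to the transmitter side
        N_i = gamma * SN[i][k] / h_ii2
        # (D.5): interference via the coupling coefficients alpha_j(f_k)
        interf = sum(gamma * abs(H[j][i][k]) ** 2 / h_ii2 * S[j][k]
                     for j in range(n) if j != i)
        rate += math.log2(1.0 + S[i][k] / (N_i + interf))
    return delta_f * rate
```

As a sanity check, a single link (no interference terms) with flat noise reduces to the familiar Shannon-style sum over segments.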
Given B, SNi and Pi for all i, we want to maximize the performance of the system subject to predefined constraints on the link rates. Such criteria may for example be fairness between links or a prioritization of rates between the links. For simplicity we will here only deal with the simpler case of maximizing the sum of rates without further individual link constraints, i.e. finding the Si(f) such that
\[
R_{sum,max} = \max R_{sum} = \max \sum_{i=1}^{n} R_i \qquad \text{(D.9)}
\]
Unfortunately, in the general case, Rsum is a non-convex function of the power allocations [10]. The computational complexity of finding Rsummax, referred to as the Global Optimum (GO), is 'prohibitively high' [3]. For small n and M we may nevertheless find the GO, for example through exhaustive search over all power allocations, to serve as a reference solution. Since the GO solution requires knowledge of all power allocations and h-parameters, it is best computed in the centralized architectural model and cannot be computed in the autonomous model.
It is argued widely [3, 10] that the Competitive Optimum (CO), in which (D.7) or (D.4)
is iteratively maximized given (D.8) or (D.3), finding Si while regarding all other power
density allocations as constant, is a useful suboptimum solution. Since (D.7) or (D.4) are
convex functions, the optimum in each iteration can be found by Lagrange methods and is
the well known waterfilling (WF) solution. For the case when the spectrum band is divided
into M segments, for each segment k we get
\[
S_i(f_k) =
\begin{cases}
L_i - NI_i(f_k) & \text{if } NI_i(f_k) \le L_i \\
0 & \text{otherwise}
\end{cases}
\qquad \text{(D.10)}
\]
where
\[
NI_i(f_k) = N_i(f_k) + \sum_{j \ne i} \alpha_j S_j(f_k) \qquad \text{(D.11)}
\]
Under conditions described in [10], it has been shown that the CO solution calculated by such iterative WF converges.
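The iterative waterfilling computation of the CO per (D.10), (D.11) and the power budget (D.3) can be sketched as follows; the bisection on the water level, the fixed iteration counts and all names are our own illustrative assumptions:

```python
# Minimal sketch of iterative waterfilling (WF) toward the Competitive
# Optimum: each link repeatedly waterfills its power over the M segments,
# treating the other links' current power densities as fixed.
# Parameter names and numerical choices are illustrative assumptions.

def waterfill(NI, P_total, two_delta_f):
    """Single-link WF per (D.10): bisect on the water level L_i so that
    the power budget (D.3), two_delta_f * sum_k S_i(f_k) = P_total, is met."""
    lo, hi = 0.0, max(NI) + P_total / two_delta_f
    for _ in range(100):
        L = (lo + hi) / 2.0
        used = sum(two_delta_f * max(L - ni, 0.0) for ni in NI)
        lo, hi = (L, hi) if used < P_total else (lo, L)
    return [max(L - ni, 0.0) for ni in NI]

def iterative_waterfilling(N, alpha, P, two_delta_f, iterations=50):
    """Iterate WF over all n links.

    N[i][k]       : transmitter-referenced noise N_i(f_k), per (D.6)
    alpha[j][i][k]: coupling of transmitter j into receiver i, per (D.5)
    """
    n, M = len(N), len(N[0])
    S = [[0.0] * M for _ in range(n)]
    for _ in range(iterations):
        for i in range(n):
            # (D.11): noise plus the interference currently seen by link i
            NI = [N[i][k] + sum(alpha[j][i][k] * S[j][k]
                                for j in range(n) if j != i)
                  for k in range(M)]
            S[i] = waterfill(NI, P[i], two_delta_f)
    return S
```

For two decoupled links with flat noise, each link simply splits its power budget evenly over the segments, as expected from classical waterfilling.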
An advantage of the CO solution is that, assuming the noise and interference is measured by each receiver node, it may be calculated autonomously. However, the target rates Rtargeti, which each link regulates its power (within its limits) to meet, must be within permissible regions. In order to know that the target rates are within permissible regions [3], or to find a priori optimum Rtargeti [10], such target rates must be computed by a centralized agent [3, 10], which is then not an autonomous action. In the case study in the next subsection we will therefore run the algorithm both with optimum Rtargeti and with aggressive Rtargeti, to see the difference.
3.2 Case Study: Comparison of GO and CO Bit Rates
In this section we study a simple example where there are only two interfering links (n = 2)
and where the spectrum band B is considered as consisting of only two frequency segments
(M = 2).
We make three specific and illustrative deployments of the two links. In the first deployment (Fig. D.3), the two links are separated by a hill and thus have low interference. In the second deployment (Fig. D.7), the links are deployed in open terrain; here the interference between the two links is high in both directions. In the last deployment (Fig. D.11), there is high interference from one link to the other, but low interference in the opposite direction.
The deployments are made and h-parameters calculated in the Norwegian FEFAS communication planning tool. We have assumed a background noise level of −153 dBm/Hz, a
width of the spectrum band B = 50 kHz, a centre frequency of the band of f0 = 150 MHz,
a maximum power of each transmitter of 1 W and a Γ of 10 dB.
For each scenario we calculate the GO solution as a reference, using brute-force numerical search. Against this we calculate and compare the CO solution for two different cases. The first case is the idealized one, where the optimal target bit rates Rtargeti are known beforehand; here we use the GO solution as the input to the iterative WF algorithm. The second case is one where we assume that we do not know the optimal Rtargeti and use an aggressive Rtargeti instead. The aggressive target is set to a value more than 10 times that of any of the GO values.
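The brute-force reference search for n = M = 2 can be sketched as below; the grid resolution, the full use of each power budget and all names are our own illustrative assumptions, not the paper's actual implementation:

```python
# Sketch of the brute-force Global Optimum reference for n = M = 2:
# grid over each link's power split between the two segments and keep
# the split maximizing Rsum per (D.4) and (D.9). Illustrative only.
import math

def rsum(s1, s2, N, a_21, a_12, delta_f):
    """Sum rate of two links; a_21 couples transmitter 2 into receiver 1,
    a_12 couples transmitter 1 into receiver 2 (the alphas of (D.5))."""
    r = 0.0
    for k in range(2):
        r += delta_f * math.log2(1 + s1[k] / (N[k] + a_21 * s2[k]))
        r += delta_f * math.log2(1 + s2[k] / (N[k] + a_12 * s1[k]))
    return r

def global_optimum(P, two_delta_f, N, a_21, a_12, steps=51):
    """Exhaustive search over power splits x*P and (1-x)*P per link."""
    delta_f = two_delta_f / 2.0
    smax = P / two_delta_f               # PSD if all power goes in one segment
    best, best_split = -1.0, None
    for a in range(steps):
        for b in range(steps):
            x, y = a / (steps - 1), b / (steps - 1)
            s1 = (x * smax, (1 - x) * smax)   # each link uses its full budget
            s2 = (y * smax, (1 - y) * smax)
            r = rsum(s1, s2, N, a_21, a_12, delta_f)
            if r > best:
                best, best_split = r, (s1, s2)
    return best, best_split
```

For strong mutual coupling the search lands on disjoint segments for the two links, reproducing qualitatively the GO behaviour of the high-interference scenario below.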
3.2.1 A Low Interference Scenario
In this first deployment, the two links, though not far apart geographically, are separated by
a hill and thus have low interference.
Fig. D.4 shows the achievable sum of the bit rates of link 1 and 2, in spectrum segment
1. We see that the maximum sum rate is achieved when both links simultaneously run power
in segment 1.
For each of the GO and CO solutions, we plot the accumulated signal + noise + interference for each segment in each link, where the noise and interference is referenced to the transmitter side in accordance with equations (D.5) and (D.6). The distance between the top of the red area (signal) and the green area (interference) of each column is the signal-to-noise-and-interference ratio in dB when including the Γ factor, i.e. the actual signal-to-noise-and-interference ratio is Γ dB higher.
FIGURE D.3: A case with two links. Interference is low as the two links are geographically separated by a hill
Fig. D.5 shows the GO solution, and Fig. D.6 the CO solution with aggressive Rtargeti .
Optimal Rtargeti provided the same result. In each of the figures, we have included the link
rates and sum rates achieved, as well as the specific h-parameters.
It is observed that the CO solution is identical to the GO solution in this case. Also, it
does not matter whether the WF algorithm works against optimal or aggressive Rtargeti .
This also follows naturally from (D.4), since the capacity expressions for each of the links are effectively decoupled when the interference is negligible compared to the noise. Thus, in low interference scenarios, the CO solution will work well, even in an autonomous architecture with no a priori Rtargeti.
3.2.2 A High Interference Scenario
In the second deployment (Fig. D.7), the links are deployed in open terrain. Here the interference between the two links is higher than the useful signal. Simultaneous transmission
in the same spectrum segments in this case results in very low Rsum, see Fig. D.8. This is
reflected also in the GO solution, see Fig. D.9, in which link 1 uses segment 2 and puts no
power in segment 1, and link 2 uses segment 1. Decent bit rates result. The CO solution,
however, Fig. D.10, is seen to produce very low bit rates (communication breakdown), as
both links attempt to selfishly grab both segments. The same result is produced both with
optimal and aggressive a priori target rates.
3.2.3 A Scenario with Unsymmetrical Interference
In the unsymmetrical interference scenario (Fig. D.11), link 2 is highly interfered with by link 1. The receiver of link 1, on the other hand, is on the other side of a mountain from the transmitter of link 2, giving minimal interference from link 2 to link 1. The GO solution, Fig. D.12,
FIGURE D.4: Sum bit rate in segment 1 in the low interference scenario
FIGURE D.5: Global Optimum in the low interference case
since we maximize Rsum, makes link 2 use both segments, as this is the shortest link. The CO solution with GO a priori rates arrives at the same result, Fig. D.13. When aggressive target rates are used, Fig. D.14, both links operate in both segments and a significantly worse Rsum results.
3.3 Practical Implications on each of the Architectural Models
3.3.1 The autonomous model
In the autonomous model, each link may attempt to achieve its optimum Ri , given its maximum available power, by running the WF algorithm. In this way, the total system of links
FIGURE D.6: Competitive Optimum using aggressive target rates, in the low interference case. Optimal target rates provide the same result.
FIGURE D.7: A case with two links with high interference between each other.
will reach a CO optimum. As illustrated in the examples, this CO optimum is, however, not always equal to the maximum Rsum for the total system of links, the GO.
As also illustrated, using a priori optimum Rtargeti in the WF algorithm can in some constellations give a CO that is closer to the GO than running with aggressive Rtargeti. Since, however, such optimum Rtargeti requires centralized pre-computation, repeated with any change of the link constellations, this concept is not compatible with autonomous operation.
Non-optimum Rtargeti in the form of predefined policies, for example geographically
dependent rates, are an alternative. Conservative, low target rates will reduce the risk of
strangulation of other links or communication breakdown, but such risk will still remain, as
illustrated in Fig. D.10. Conservatively defined Rtargeti will also reduce the achieved total
FIGURE D.8: Sum bit rate in segment 1, high interference scenario
FIGURE D.9: Global Optimum in the high interference case
Rsum in conditions where the links have low interference.
In high interference conditions, where Rsum is non-convex, we observe that the GO solution will have the power in some link segments turned off, see Fig. D.9. As a suggestion for a practical autonomous system, we define the maximum number of segments that a link is permitted to use, 0 ≤ mi ≤ M, such that
\[
m_i = \beta \cdot M / (\Phi + 1) \qquad \text{(D.12)}
\]
where Φ is an estimated average number of links conflicting with link i. It is assumed that Φ is a location- (and time-) dependent parameter that the link can autonomously determine from an
FIGURE D.10: Competitive Optimum using optimal target rates, in the high interference case. Aggressive target rates provide the same result.
FIGURE D.11: A case where the interference is very unsymmetrical, such that the interference from link 1 to 2 is high, but from 2 to 1 it is low
internal database, using its position as an index. β is a factor that defines the aggressiveness of the node in acquiring spectrum segments.
As a very simple example to illustrate this policy, the case in Fig. D.7 with M = 2, Φ = 1 and β = 1 is used. The WF algorithm, modified with this restriction (mi = 1) on the number of segments that can be used, is rerun. The resulting solution is seen to be identical to the GO solution for this case, see Fig. D.15.
FIGURE D.12: Global Optimum in the unsymmetrical interference case
FIGURE D.13: Competitive Optimum using optimal target rates, in the unsymmetrical interference case.
The downside of the mi approach is that, for β = 1, Rsum will be worse than the GO solution when the actual number of conflicting links deviates from Φ. However, we may prioritize avoiding link strangulation and system breakdown by setting β < 1, or prioritize taking advantage of higher rates in less conflicted environments by setting β > 1.
3.3.2 The distributed model
In the distributed model, each link, in addition to knowing its own rate, power and the accumulated noise and interference it is subject to, may get partial knowledge about the well-being of other links in its geographical vicinity. This may be achieved through administrative traffic between the nodes.
While theoretically any level of detail of information may be exchanged between links, a simple case is the exchange of link bit rates Ri only. By exchanging Ri values with links in a geographical vicinity, each link may monitor both its own rate and the sum rate and average link rate in the region. With this knowledge, each link may take action to achieve the optimum
FIGURE D.14: Competitive Optimum using aggressive target rates, in the unsymmetrical interference case.
FIGURE D.15: Competitive Optimum using aggressive target rates and limiting allowed number of segments to mi = 1, in the high interference case.
average rate in its region, rather than only optimizing its own rate. Hence, with the distributed model, it is expected that it is possible to come closer to the GO solution than in the autonomous case.
One possible practical implementation of a scheme for regional rate optimization again exploits the observation that, in the case of conflicting links, a solution closer to the GO one may be found by turning off one or more link segments. This may be implemented by adding, in addition to the power waterfilling to meet the node's target rate, a loop that regulates mi in each link to achieve the maximum average rate in the region.
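One minimal way to sketch such an outer loop is the following; the accept/revert update rule and all names are our own illustrative assumptions, not a scheme specified in the paper:

```python
# Illustrative sketch of the distributed outer loop: a link tentatively
# reduces its segment limit m_i, the links in the region exchange their
# rates R_i, and the reduction is kept only if the regional average rate
# improved. The update rule and names are assumptions for illustration.

def regional_average(rates):
    """Average link rate in the region, from exchanged R_i values."""
    return sum(rates) / len(rates)

def try_reduce_segments(m_i, rates_before, rates_after):
    """Keep a tentative reduction of m_i only if the region benefited."""
    if regional_average(rates_after) > regional_average(rates_before):
        return max(1, m_i - 1)   # the reduction helped: adopt it
    return m_i                   # otherwise revert to the previous limit
```

This keeps each decision local to the link while still steering the region's average rate, which is the essential difference from the purely autonomous model.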
3.3.3 The centralized model
In the centralized model, the decision maker has full knowledge of all transmission links,
and hence theoretically the total information capacity achieved can approach the GO solution. As illustrated in the case study, such GO spectrum sharing solutions may provide better
total information rates than the selfish CO rates. Hence theoretically, and neglecting computational complexity and administrative communication overhead issues, the centralized
model will outperform the distributed and autonomous one.
As pointed out in [3], however, finding the GO solution in the general case quickly becomes intractable as N and M increase. Still, practical compromises that perform better than the CO solution, and with acceptable complexity, may be found. As an example to illustrate this point, allow all links to iterate to the CO solution, as they would do autonomously. From the case study it was seen that in the case of link conflicts giving a non-convex Rsum, a better solution can potentially be found by turning off one or more link segments. The central, having all information, may identify such conflicts and instruct the turning off of segments of individual links. Further, the central may also verify that the resulting Rsum has actually increased.
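The centralized refinement just described amounts to a greedy trial loop. The following is a hypothetical sketch; the callback interface (`eval_rsum`, `set_segment`) and the candidate list are our illustration, not a construct from the text.

```python
def central_refine(candidates, eval_rsum, set_segment):
    """Greedy central refinement: starting from the CO allocation, tentatively
    turn off each candidate (link, segment) involved in a conflict, and keep
    the change only when the recomputed Rsum has actually increased."""
    best = eval_rsum()
    for link, seg in candidates:
        set_segment(link, seg, on=False)     # instruct the link to drop the segment
        new = eval_rsum()
        if new > best:
            best = new                       # improvement verified: keep it
        else:
            set_segment(link, seg, on=True)  # no improvement: revert
    return best
```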
4 Conclusions
We have investigated Global Optimum and Competitive Optimum for an n-link system sharing a spectrum band B with M segments. For n = M = 2 we have done a case study of three different link deployments, with low, high and unsymmetric interference, and investigated how well the CO solution approximates the GO one. We have found that for the low-interference cases, the CO solution is identical to the GO one. The high-interference case resulted in very low rates (communication breakdown) compared to the GO solution. In the unsymmetric case, correct optimal a priori target rates led to the CO solution being identical to the GO one, while aggressive target rates gave a less optimal solution.
Autonomous links may run the waterfilling algorithm autonomously and as such provide the CO solution for the system of links. For conflicting links with high interference, this CO solution may, however, be very suboptimal, and in the worst case communication breakdown results. A suggested policy for reducing the risk of very suboptimal rates is to limit the maximum number of segments that a particular link is allowed to use.
With the distributed model, information rate conflicts between links can be detected, for example through the exchange of Ri. Through responsive actions it is expected that a significantly improved total Rsum over the autonomous model can be achieved; however, this remains to be demonstrated.
References
[1] FCC Spectrum Policy Task Force, "Report of the Spectrum Efficiency Working Group," Tech. Rep., Nov. 2002.
[2] J. Mitola III, "Cognitive Radio: An Integrated Agent Architecture for Software Defined Radio," Doctoral dissertation, Royal Institute of Technology (KTH), SE-164 40 Kista, Sweden, May 2000.
[3] S. Haykin, "Cognitive Radio: Brain-Empowered Wireless Communications," IEEE Journal on Selected Areas in Communications, vol. 23, no. 2, pp. 201–220, Feb. 2005.
[4] M. Buddhikot, P. Kolodzy, S. Miller, K. Ryan, and J. Evans, "DIMSUMnet: New directions in wireless networking using coordinated dynamic spectrum," in World of Wireless, Mobile and Multimedia Networks (WoWMoM 2005), pp. 78–85, June 2005.
[5] V. Brik, E. Rozner, S. Banerjee, and P. Bahl, "DSAP: A protocol for coordinated spectrum access," in First IEEE International Symposium on New Frontiers in Dynamic Spectrum Access Networks (DySPAN 2005), pp. 611–614, Nov. 2005.
[6] T. Maseng and T. Ulversøy, "Dynamic Frequency Broker and Cognitive Radio," in The IET Seminar on Cognitive Radio and Software Defined Radios: Technologies and Techniques, Sep. 2008.
[7] J. Zhao, H. Zheng, and G.-H. Yang, "Distributed coordination in dynamic spectrum allocation networks," in First IEEE International Symposium on New Frontiers in Dynamic Spectrum Access Networks (DySPAN 2005), pp. 259–268, Nov. 2005.
[8] L. Ma, X. Han, and C. C. Shen, "Dynamic open spectrum sharing MAC protocol for wireless ad hoc networks," in First IEEE International Symposium on New Frontiers in Dynamic Spectrum Access Networks (DySPAN 2005), pp. 203–213, Nov. 2005.
[9] Q. Zhao and B. Sadler, "A survey of dynamic spectrum access," IEEE Signal Processing Magazine, vol. 24, no. 3, pp. 79–89, May 2007.
[10] W. Yu, "Competition and Cooperation in Multi-User Communication Environments," Ph.D. dissertation, Stanford University, June 2002.
[11] J. R. Barry, D. G. Messerschmitt, and E. A. Lee, Digital Communication, Third Edition. USA: Springer, 2003.
Paper E
A Comparison of Centralized, Peer-to-Peer and Autonomous Dynamic Spectrum
Access in a Tactical Scenario
Tore Ulversøy, Torleiv Maseng, Toan Hoang and Jørn Kårstad
Presented at
MILCOM 2009
Boston, USA
October 18-21, 2009
Published online at http://ieeexplore.ieee.org
Summary of this candidate’s contribution to this paper:
This candidate has contributed with the following parts of the article text:
Section 1 Introduction
Section 2 Background and Calculation Models
Section 3 Spectrum Decisions
Section 4 Computational Complexity
Section 7 Conclusions
and also parts of
Section 5 Spectrum Coordination Traffic
Section 6 DSA in a Hostile Environment
The candidate has also contributed the conceptual ideas for the autonomous and distributed algorithms that are described in the article, has programmed the MATLAB model for these, with the exception of the generation of random links, and has made the graphs and figures.
Abstract
Several concepts for more dynamic spectrum access have been suggested:
1. Terminals making autonomous decisions without coordination
2. Networks cooperating in Peer-to-Peer (P2P) constellations without an infrastructure
3. Centralized spectrum server concepts which make local decisions and coordinate through a broker hierarchy.
In this paper we compare these three types of concepts in terms of spectrum decisions, computational complexity in the radio nodes and the need for spectrum coordination traffic between nodes. We present practical algorithms and assess them in terms of complexity and performance. We conclude by discussing the usefulness of these systems for military deployable communications systems in a hostile environment.
© 2009 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.
DOI: 10.1109/MILCOM.2009.5380105
1 Introduction
Experience from military operations has shown that pre-operation frequency plans may sometimes cause communication problems in the actual theatre of operation, for example due to unanticipated interference in the theatre. As an example, it was reported [1] from operations in Iraq that many military systems experienced interference from civilian systems. Examples of such interference sources were unregulated long-range cordless telephones, pirate television and radio broadcast stations, and communications equipment used by relief organizations and media representatives. Key consequences of such interference were found to be unreliable Command and Control (C2) systems and potential loss of communication with Unmanned Aerial Vehicles (UAVs) [1].
The Network Centric Operations (NCO) paradigm, with its focus on information sharing and speed of command, further increases the demand for wireless communication resources, and thus spectrum, in the theatre of operation. Moreover, the mobility and flexibility of operations cause variable spectrum demand over locations and time, which again is difficult to take fully into account in pre-operation static frequency planning.
Dynamic Spectrum Access (DSA) has been suggested as a way to achieve a more scenario- and situation-adapted use of spectrum. DSA is a capability that contrasts with static frequency planning and that allows flexible and dynamic spectrum allocation. A number of DSA architectural approaches have been suggested, including autonomous Cognitive Radio (CR) ones, centralized coordination ones and approaches having Peer-to-Peer (P2P) distributed coordination.
In this paper we compare these three types of architectural concepts in terms of spectrum
decisions, computational complexity in the radio nodes and the need for spectrum coordination traffic between nodes. We further discuss their usefulness in complex and hostile
electromagnetic environments.
2 Background and Calculation Models
In this section, as background information for later discussions, we review the different DSA architectures. We then review a receiver-centric link interference model for spectrum sharing, which we will refer to when comparing the DSA architectures.
DSA Architectures
An overview of DSA architectural concepts is provided in Fig. E.1.
Figure E.1: DSA architectural concepts: Dynamic Spectrum Access is divided into Autonomous and Coordinated, with Coordinated further divided into Centralized and Distributed.
In the autonomous approach, frequently referred to as Opportunistic Spectrum Access (OSA) [2], the availability of a spectrum region is detected through sensing. The sensing may be conducted by a single radio node, or by a group of nodes collectively. The spectrum availability detection may additionally be assisted by internal spectrum database information. The nodes or node groups make their spectrum decisions autonomously, but their behaviour is governed by policies [2].
An OSA example is DARPA's XG program [2, 3]. A commercial domain example is 802.22 [4], which is a CR standard for opportunistic WRAN reuse of spectrum allocated primarily to TV transmitters. 802.22, however, also has an element of distributed coordination, in having beacon self-coexistence measures [4] drafted in the standard, to ease coexistence with neighbouring 802.22 WRANs.
Centralized coordination systems, Fig. E.1, rely on infrastructure, for example spectrum broker servers. The infrastructure may receive information from, and have a complete view of, all the nodes, and may manage their use of spectrum. The management may be direct, in the form of time-limited spectrum leases, or in the form of providing or restricting spectrum opportunities for the managed entities. Although a distributed hierarchy of spectrum servers is usually assumed, we still refer to the architecture as centralized, since to the entity that uses the spectrum it appears as if it is coordinating with a central entity.
Examples of published centralized architectures are DSAP [5], DIMSUMNET [6], the
Ofcom DSA Candidate Architecture [7] and the Dynamic Frequency Broker [8].
In distributed coordination architectures, Fig. E.1, the coordination occurs between the
spectrum use entities (radio nodes, sets of nodes or base stations) directly, rather than depending on an infrastructure. The coordination allows a radio node to also take into account
the spectrum use and Quality of Service (QoS) of other nodes in the geographical vicinity,
when determining its own use of spectrum.
The simplest examples are coordination using beacons [4], or coordination through the
mapping of spectrum occupancy to a busy tone occupancy band [9]. In other examples the
coordination is through a spectrum etiquette protocol [10] on a common control channel, or
distributed self-organizing coordination not depending on a common channel [11]. Assuming the availability of a network connection for the spectrum coordination, radio nodes may
also interact as peers in a Peer-to-Peer (P2P) network.
n-link Interference Model
Figure E.2: An n-link Interference Model

We will compare the DSA architectures using a receiver-centric interference model [12], where we have n simplex TX-RX links, see Fig. E.2. We may think of this as n video transfer links in the theatre of operation. Alternatively, we may view the model as a time snapshot of a wireless network, where we argue that any wireless network may be viewed as a superposition of time-dependent simplex links.
We further assume that the spectrum band that the links share on a dynamic access basis has a width B and is centered on f0. The band is divided into M segments, each with centre frequency

    fk = f0 + (k − (M − 1)/2) · Δf        (E.1)

where

    Δf = B/M        (E.2)
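Equations (E.1)-(E.2) can be checked with a few lines of Python; the function name is ours.

```python
def segment_centres(f0, B, M):
    """Centre frequencies of the M segments of a band of width B centred
    on f0, per (E.1)-(E.2): fk = f0 + (k - (M - 1)/2) * df, with df = B/M."""
    df = B / M
    return [f0 + (k - (M - 1) / 2) * df for k in range(M)]
```

For even M the centres sit symmetrically about f0, spaced Δf apart, with no segment centred exactly on f0.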
The transfer function from transmitter i to receiver j, in each segment k, is denoted as Hij(fk), see Fig. E.2. The power spectrum of transmitter i, Si(fk), is assumed constant over each segment, and is subject to the restriction

    2 · Δf · Σ_{k=0}^{M−1} Si(fk) ≤ Pi        (E.3)
where Pi is the maximum available power for link i. We also make the assumption that over each segment the noise spectrum at each receiver SNi(fk) can be regarded as constant, and we assume no interference cancellation in the receivers. The information rate Ri may then, using results from [12–14], be written as

    Ri ≈ Δf · Σ_{k=0}^{M−1} log2( 1 + Si(fk) / ( Ni(fk) + Σ_{j≠i} αji · Sj(fk) ) )        (E.4)

where

    αji(fk) = Γ · |Hji(fk)|² / |Hii(fk)|²        (E.5)

and

    Ni(fk) = Γ · SNi(fk) / |Hii(fk)|²        (E.6)

where Γ is the SNR gap between a practical coding and modulation scheme and the theoretical channel capacity.

The maximization of Rsum, where

    Rsum = Σ_{i=1}^{n} Ri        (E.7)

is in the general case complex, as Rsum is a non-convex function of the power allocations [12]. It is argued [12, 14] that the Competitive Optimum (CO), in which (E.4) is iteratively maximized given (E.3), finding Si while regarding all other power density allocations as constant, is a useful suboptimum solution. The solution of each iteration is the well-known Waterfilling (WF) solution [14], which for each segment is

    Si(fk) = Li − NIi(fk)    if NIi(fk) ≤ Li
           = 0               otherwise        (E.8)
where Li is a constant 'filling level', see the illustration in Fig. E.3, defined by this equation and by the restriction on Si(fk) in (E.3), and where

    NIi(fk) = Ni(fk) + Σ_{j≠i} αji · Sj(fk)        (E.9)

Figure E.3: Illustration of the filling level, Li
Depending on the actual deployment of the links, the CO solution may be significantly
suboptimal and may cause strangulation of individual links [15]. The discussion in the next
section will include policies and measures to avoid such strangulation.
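The per-link WF step (E.8) and the iteration toward the CO can be sketched compactly with NumPy. This is a sketch under our own array conventions: the function names are ours, and the normalized quantities Ni and αji are assumed precomputed per (E.5)-(E.6).

```python
import numpy as np

def waterfill(NI, P, df):
    """Single-link waterfilling (E.8): S[k] = L - NI[k] where NI[k] <= L,
    else 0, with the level L set so that the power restriction (E.3),
    2*df*sum(S) <= P, is met with equality."""
    order = np.argsort(NI)                  # fill the quietest segments first
    NI_sorted = NI[order]
    S = np.zeros_like(NI)
    M = len(NI)
    for m in range(M, 0, -1):               # try using the m quietest segments
        L = (P / (2 * df) + NI_sorted[:m].sum()) / m
        if L >= NI_sorted[m - 1]:           # valid: level covers all m segments
            S[order[:m]] = L - NI_sorted[:m]
            break
    return S

def rates(S, N, alpha, df):
    """Per-link information rates from (E.4); alpha[j, i] is the normalized
    cross-gain of (E.5) and N[i, k] the normalized noise of (E.6)."""
    n = S.shape[0]
    R = np.zeros(n)
    for i in range(n):
        NI = N[i] + sum(alpha[j, i] * S[j] for j in range(n) if j != i)
        R[i] = df * np.sum(np.log2(1.0 + S[i] / NI))
    return R

def iwf(N, alpha, P, df, iters=50):
    """Iterative Waterfilling toward the Competitive Optimum: each link in
    turn waterfills against the noise-plus-interference NI_i of (E.9),
    regarding all other power density allocations as constant."""
    n, M = N.shape
    S = np.zeros((n, M))
    for _ in range(iters):
        for i in range(n):
            NI = N[i] + sum(alpha[j, i] * S[j] for j in range(n) if j != i)
            S[i] = waterfill(NI, P[i], df)
    return S
```

Note that convergence of IWF to the CO is only guaranteed under conditions on the cross-gains [12]; for strongly coupled links the iteration may settle at the very suboptimal points discussed in the following sections.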
3 Spectrum Decisions
In this section, based on the described n-link interference model, spectrum decisions and resulting bit rates in the autonomous, distributed and centralized models are compared. We apply a criterion of optimality which is a tradeoff between fairness and optimum use of the spectrum band: as a primary priority each link requires a minimum rate of Rmin, and secondly a maximum sum rate Rsum accumulated over all n links is desired. We simulate all the proposed spectrum decision algorithms in MATLAB based on 5 randomly generated 20-link (n=20) scenarios, of which one is shown in Fig. E.4. The size of the scenario is 20 by 20 km, and the link distances are uniformly distributed between 0 and 3.5 km. The power limitation is set to be equivalent to 1 W in 25 kHz, and the average noise is set to −153 dBm/Hz, with a small random variation (less than 0.1 dB) between the individual link segments. The propagation model used is Egli [16] with antenna heights of 3 m. A Γ value of 10 is used. Hji is assumed constant over B. M is set to 40.
Autonomous: With autonomous links, it is assumed that each link senses its accumulated noise and interference and makes spectrum decisions independently of the interference it causes to other users. The optimum selfish power density allocation that each link can make in this case is the WF allocation, which is repeated iteratively (IWF).
It is further assumed that each link i regulates its actual power, within the limitation Pi, to meet its a priori determined target information rate, Rtargeti. If Ri < Rtargeti, the power to be used in the next iteration is multiplied by a factor of 1.05. If Ri > Rtargeti · 1.1, the power is divided by 1.05.
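The target-rate regulation just described is a simple multiplicative controller; a sketch follows, where the function name and the explicit Pi cap argument are ours.

```python
def regulate_power(P, R, R_target, P_max):
    """Per-iteration power regulation toward an a priori target rate:
    scale up by 1.05 when below target, down by 1.05 when more than 10 %
    above target; the dead band in between avoids oscillation. The result
    is always kept within the link's power limitation P_max."""
    if R < R_target:
        return min(P * 1.05, P_max)
    if R > R_target * 1.1:
        return P / 1.05
    return P
```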
In [12, 14] it is pointed out that these a priori target rates should be in an achievable region, and that they need to be calculated by a centralized agent. In this subsection it is assumed that such a centralized agent is not available, and that the a priori rates are instead determined by the links autonomously, based on location-dependent policies.

Figure E.4: One of the 20-link scenarios
The simplest case, and our autonomous reference, assumes that each link is maximally selfish within its power limitation. The IWF is run with equally aggressive and unrealistically high target rates, here 20 · B, for all links. It is seen that in this case a decent average Rsum (see the bottom part of Fig. E.7 for mi = M, where mi is the maximum number of segments that each link may use) is achieved, but the link rates are very unfairly distributed, as is seen from Fig. E.5. When increasing Rmin, the percentage of operational links drops rapidly.
Figure E.5: In autonomous mode with aggressive target rates, the percentage of operational links drops rapidly as Rmin increases
Figure E.6: A1: % operational links (top) and average bit rate, versus Rtargeti.

While the simulations show that setting aggressive target rates provides attractive average bit rates per link, the fact that a significant percentage of the links get very low bit rates is unattractive, particularly in a military scenario. To allow better coexistence of links up to a minimum rate Rmin, at the expense of allowing a lower average rate over all the links, we define and simulate three different policies:
• A1: Each link, instead of using an aggressive target rate, uses a target rate of Rtargeti
• A2: Each link may use a maximum of mi segments, where mi ∈ 1..M
• A3: As A2, but the maximum power of each link is reduced proportionally to mi /M
It is envisioned that in a practical system the policies are location-dependent and determined from an internal database with the link's position as search index. Locations where it is known that there will be a high number of nodes/links may then have low mi or Rtargeti, whereas locations with a lower density of links may have higher mi or Rtargeti.
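The three policies translate into per-link constraint sets; the following is a hypothetical sketch, where the parameter names are ours, and we assume A2/A3 keep the aggressive 20 · B target while only A1 caps Rtarget, consistent with the definitions above.

```python
def policy_constraints(policy, M, B, P_max, mi=6, R_target=None):
    """Per-link constraints for the autonomous coexistence policies:
    A1 caps the target rate, A2 limits the link to mi segments, and
    A3 additionally scales the power limit by mi/M."""
    if policy == "A1":
        return {"m_max": M, "P_max": P_max, "R_target": R_target}
    if policy == "A2":
        return {"m_max": mi, "P_max": P_max, "R_target": 20 * B}
    if policy == "A3":
        return {"m_max": mi, "P_max": P_max * mi / M, "R_target": 20 * B}
    raise ValueError("unknown policy: " + policy)
```

In a deployed system the `mi` and `R_target` arguments would come from the location-indexed policy database described above.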
Both the A1 and A2 policies are seen to be effective in allowing coexistence at a minimum rate (Figs. E.6-E.7). Comparing at Rmin = B/4, the A2 strategy provides a higher average rate at near 100 % operational links (the number of segments restricted to between 5 and 7) than A1. A3 also provided a higher number of operating links as mi was reduced (Fig. E.8), but was less favourable compared to A2 and A1.
A drawback of these coexistence measures is that pre-estimation of link densities is
assumed. If the link density is higher than estimated, non-operational links may result. If
the link density is lower than estimated, the resulting additional spectrum availability is not
fully exploited in increased average rates.
Distributed: When allowing the links to have administrative communication between themselves, each link may take into account not only its own QoS and its sensed interference, but also the QoS of other links in the vicinity.
Here, the following two strategies have been simulated:
Figure E.7: A2: % operational links (top) and average bit rate, versus mi
• D1: Each link runs IWF. In case a link i does not reach Rmin, it increases its mi by the nearest integer value of mi · (Rmin · γ)/Ri, where γ = 1.1. The increase is a minimum of 1, and the resulting value is truncated to M. The link then sends out a request message, broadcast within a coordination radius rc. The message signifies its Pi and Hii as well as its receiver position and antenna gain. All links j within rc that, through the calculation of Pj · αji /Pi, determine that they are a significant contributor to the interference at i, reduce their mi by 1, if they can do so without violating their own Rmin. 'Significant contributor' is in these simulations defined as Pj · αji /Pi > 0.5. αji is assumed to be determined by (E.5) with Hji calculated using a propagation model. Realistic transmission of request messages is not included in the simulations; ideal lossless and zero-delay transfer of requests between the links is assumed.
• D2: Same as D1, but in case link i does not meet Rmin, it alternates between Rtarget reduction requests and mi reduction requests. Prior to a target reduction request, link i increases its own Rtarget such that the new Rtarget becomes

    Rtargeti = Rmin · γ + (Rtargeti − Rmin · γ) · β        (E.10)

where γ = 1.1 and β = 1.15. A link k that in the same way as for D1 determines that it is a significant contributor to the interference at i adjusts its Rtarget to

    Rtargetk = Rmin · γ + (Rtargetk − Rmin · γ)/β        (E.11)
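The D1/D2 interaction steps above can be sketched as pure functions. This is our reading of the text: in particular we interpret the mi-increase rule as an increment of round(mi · Rmin · γ / Ri), at least 1, with the result capped at M.

```python
def d1_widen(mi, Ri, Rmin, M, gamma=1.1):
    """D1, requesting side: a link below Rmin widens its allowed segment
    count. Increment is the nearest integer to mi * (Rmin * gamma) / Ri,
    minimum 1; the result is truncated to M."""
    inc = max(1, round(mi * Rmin * gamma / Ri))
    return min(M, mi + inc)

def significant_interferer(Pj, alpha_ji, Pi):
    """A neighbour j within the coordination radius backs off (mi -> mi - 1)
    only if it is a significant contributor: Pj * alpha_ji / Pi > 0.5."""
    return Pj * alpha_ji / Pi > 0.5

def d2_targets(R_target_i, R_target_k, Rmin, gamma=1.1, beta=1.15):
    """D2 target-rate exchange, per (E.10)-(E.11): the requesting link i
    raises its target, a significant interferer k lowers its target, both
    relative to the floor Rmin * gamma."""
    floor = Rmin * gamma
    new_i = floor + (R_target_i - floor) * beta      # (E.10)
    new_k = floor + (R_target_k - floor) / beta      # (E.11)
    return new_i, new_k
```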
Figure E.8: A3: % operational links (top) and average bit rate, versus mi, with P reduced proportionally to mi/M

The results from the simulation of D1, see Fig. E.9, using the same 5 sets of 20 links as in the A1..A3 cases, show that 100 % operational links is achieved for Rmin of B/4 and B/8 when the coordination distances are above approximately 6 km. D2 provided very similar results, with a slight improvement in operational links for B/2. The corresponding average bit rates for D1..D2 are significantly higher than when using the A1..A3 policies, see Fig. E.10.
Centralized: As will be revisited under computational complexity, optimum power density allocation for all the links has very high computational complexity, and hence practical suboptimal algorithms are needed.

One such practical algorithm may be running D1 or D2 in the centralized configuration. The algorithm may either be run completely in the central, or utilize the links themselves for the WF operations, with only the back-off requests processed in the central. Results in this case (for D1) will be as in Fig. E.9, for the maximum coordination distance.
A less optimal, but attractive, practical compromise in a centralized architecture is to use a transmitter-centric model. Here, a group of transmitters that share a spectrum license are authorized to use the spectrum in a geographic region. Outside of this region, a protection region is defined, outside of which the signal from any of the transmitters is below a threshold level and deemed insignificant. A spectrum assignment operation will then merely consist of finding the first available segment(s) in the desired geographic region. If none are available, the request will need to be rejected.
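Under the transmitter-centric model the assignment reduces to a first-fit search over the segments of the region; a minimal sketch (names are ours):

```python
def assign_segments(occupied, M, need):
    """First-fit assignment in a protection-region model: the spectrum
    server returns the first `need` free segments in the requested region,
    or None if the request must be rejected."""
    free = [k for k in range(M) if k not in occupied]
    if len(free) < need:
        return None
    return free[:need]
```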
4 Computational Complexity
Autonomous: With autonomous links, we have made the assumption here that each link
measures its receiver noise and interference, and computes its power density versus spectrum
by running the WF algorithm. This implies full scalability with the number of links, n.
Figure E.9: D1: % operational links (top) and average bit rate. Mitigating interference by lowering the maximum number of segments of interferers
Figure E.10: Comparison of average R/B between A1..D2, at Rmin = B/8. The highest Rtargeti and mi that gave 100 % operational links have been chosen for A1..A3. Coordination radius 8.7 km.
The complexity is, however, dependent on the number of segments. For each iteration, the segments need to be sorted according to their noise-plus-interference level. The sort operation is between O(M · log(M)) and O(M) depending on the algorithm. Further, filling the power densities in each segment in our implementation implies an additional m(m + 1)/2 operations, where m is the actual number of segments that have a power density larger than 0.
The calculation needs to be repeated with a period t, where t depends on the tolerance for suboptimal link rates.
Distributed: Assuming here the D1 or D2 algorithm, the basic computational complexity is the same as for the autonomous case, but with additional computational steps needed due to the interaction between the links. Each link will have k additional operations per time period t, where k is the number of links sending back-off requests within a distance rc.
Centralized: In the general case, the maximum Rsum of a group of n links is a non-convex function [14] of the power density allocations in the various links, and the exact solution is NP-hard, with the computing time being exponential in n.
Global search heuristics, such as genetic algorithms, are a way of finding practical approximate solutions.
As mentioned in the previous section, other practical approximate compromises can be found, an example being that each link runs the WF as in the D1 or D2 algorithm, with the central handling only the back-off requests.

If using the transmitter-centric model with defined protection regions instead, as explained in the previous section, the spectrum users are effectively decoupled, and the computational complexity is reduced accordingly.
5 Spectrum Coordination Traffic
Described solutions for the coordination traffic include the combination of an IP core network and specific Spectrum Information Channels [6], as well as dedicated or self-configuring [11] control channels. For a centralized architecture, the IP core network approach, possibly with control channels connecting into the IP network from the spectrum users, yields a flexible solution and allows standardized middleware to be used. For distributed P2P interaction, direct control channel communication is also attractive.
The more detailed and frequent the interchanged information is, the more optimal the spectrum decisions that can be made, but the more communication capacity is used for the coordination traffic. In Table E.1, the first alternative listed is a simple short message signalling 'I am not achieving Rmin'. We found the spectrum decisions with only this coordination parameter to be not as good as with the D1 or D2 alternatives, the second row in the table. Table E.2 outlines one traffic alternative for centralized, where link parameters are sent to the spectrum server, and the spectrum server responds with a power density assignment over all the relevant spectrum segments (as, for example, if the spectrum server is running the D1 or D2 algorithm internally).
Table E.1: Two coordination traffic alternatives for distributed.

  Parameters sent from link         | Comment
  ----------------------------------|-------------------
  Message signalling that Ri < Rmin | Simple alternative
  Pi, Hii, position, antenna gain   | D1 and D2
For the centralized approach, a high coordination traffic load will result if, for each change in the network, new settings need to be transferred to all the links/nodes. A way of reducing the traffic is to make new spectrum allocations suboptimally, changing interference conditions for as few other links/nodes as possible. Then reoptimization of a whole region of links may occur at intervals or only when needed.
Table E.2: A coordination traffic alternative for centralized.

  From link                  | From server | Comment
  ---------------------------|-------------|---------------------------------------------------------
  Pi, position, antenna gain | Si(fk)      | Spectrum decision in server (all Hij computed centrally)
Making the coordination traffic on-demand only decreases the amount of information that needs to be transferred. As an example, for the D2 algorithm and for Rmin = B/4, the coordination traffic drops close to 0 when the system of links converges, see Fig. E.11.
Figure E.11: D2: The number of messages sent per iteration, averaged over the 5 sets, and with the same conditions as described previously. The algorithm is started from iteration 21.
With reference to Fig. E.11, it should be noted that no attempt has been made to optimize
the speed of convergence of the algorithm, and that it is expected that faster convergence is
possible.
6 DSA in a Hostile Environment
The discussion in this section is not restricted to the link interference model.
Given the reconfigurability and multi-use abilities of future Software Defined Radio platforms, it is likely that the future tactical environment will see an increasing trend of integration of communication and Electronic Warfare (EW) functionality. An example is the use of communication equipment as sensors. Such integration will assist in providing a higher degree of situational awareness, allowing situation-aware defensive as well as offensive measures. In this context, coordinated DSA concepts will have an advantage in their ability to perform such surveillance, defensive and offensive actions collaboratively. In particular, a centralized system will provide an opportunity to present precise electromagnetic situation awareness overviews, as the centralized system may compare its knowledge
about the reported transmitters in a certain area with spectrum scans conducted by receiving
nodes in the same area.
All three concepts will handle well any static interference in the theatre of operation, such as that encountered due to unregulated or a priori unforeseen transmitters. An autonomous system may have a faster response to dynamic interference than a centralized one, where communication delays will have a prolonging effect on decisions.
A drawback of centralized and distributed DSA systems is the reliance on, and vulnerability of, the coordination channels. These will need to have proper TRANSEC and COMSEC measures. For tactical use it is advantageous if the system falls back to an autonomous one if the administrative communication channels should be blocked. As an example, the D1 and D2 systems can be made to fall back into A1 or A2 systems when there is a lack of administrative communication.
7 Conclusions
We have compared centralized, distributed and autonomous DSA in terms of spectrum decisions, computational complexity, coordination traffic and concerns in a hostile environment. In particular, we have examined and simulated, based on an n-link interference model, three different autonomous spectrum decision policies and two distributed interaction algorithms. We found that when requiring all links to meet the same minimum rate, the distributed interaction provided a higher average rate over all the links than the autonomous policy-governed cases. The distributed interaction algorithms appear to be promising compromises between optimality of spectrum decisions, computational complexity and interaction traffic.
References
[1] D. P. Johnson, "Dismounted Urban Tactical Communications Assessment / Urban Spectrum Management," in RTO-MP-IST-083 Military Communications with a Special Focus on Tactical Communications for Network Centric Operations, Apr. 2008. [Online]. Available: ftp://ftp.rta.nato.int/PubFullText/RTO/MP/RTO-MP-IST-083/Supporting%20Documents/MP-IST-083-10.pps
[2] C. Tran, R. Lu, A. Ramirez, C. Phillips, and S. Thai, "Dynamic Spectrum Access: Architectures and implications," in Military Communications Conference (MILCOM 2008), pp. 1–7, IEEE, Nov. 2008.
[3] M. McHenry, K. Steadman, A. Leu, and E. Melick, "XG DSA Radio System," in 3rd IEEE Symposium on New Frontiers in Dynamic Spectrum Access Networks (DySPAN 2008), pp. 1–11, Oct. 2008.
[4] C. Cordeiro, K. Challapali, D. Birru, and S. Shankar, "IEEE 802.22: An Introduction to the First Wireless Standard based on Cognitive Radios," Journal of Communications, vol. 1, no. 1, pp. 38–47, Apr. 2006.
[5] V. Brik, E. Rozner, S. Banerjee, and P. Bahl, "DSAP: A protocol for coordinated spectrum access," in First IEEE International Symposium on New Frontiers in Dynamic Spectrum Access Networks (DySPAN 2005), pp. 611–614, Nov. 2005.
[6] M. Buddhikot, P. Kolodzy, S. Miller, K. Ryan, and J. Evans, "DIMSUMnet: New directions in wireless networking using coordinated dynamic spectrum," in World of Wireless, Mobile and Multimedia Networks (WoWMoM 2005), pp. 78–85, June 2005.
[7] Roke Manor Research Ltd et al., for Ofcom, "Study into Dynamic Spectrum Access: Summary Report," Ofcom, Tech. Rep., Mar. 2007.
[8] T. Maseng and T. Ulversøy, "Dynamic Frequency Broker and Cognitive Radio," in The IET Seminar on Cognitive Radio and Software Defined Radios: Technologies and Techniques, Sep. 2008.
[9] L. Ma, X. Han, and C. C. Shen, "Dynamic open spectrum sharing MAC protocol for wireless ad hoc networks," in First IEEE International Symposium on New Frontiers in Dynamic Spectrum Access Networks (DySPAN 2005), pp. 203–213, Nov. 2005.
[10] D. Raychaudhuri and X. Jing, "A spectrum etiquette protocol for efficient coordination of radio devices in unlicensed bands," in 14th IEEE Proceedings on Personal, Indoor and Mobile Radio Communications (PIMRC 2003), vol. 1, pp. 172–176, Sep. 2003.
[11] J. Zhao, H. Zheng, and G.-H. Yang, "Distributed coordination in dynamic spectrum allocation networks," in First IEEE International Symposium on New Frontiers in Dynamic Spectrum Access Networks (DySPAN 2005), pp. 259–268, Nov. 2005.
[12] W. Yu, "Competition and Cooperation in Multi-User Communication Environments," Ph.D. dissertation, Stanford University, June 2002.
[13] J. R. Barry, D. G. Messerschmitt, and E. A. Lee, Digital Communication, Third Edition. USA: Springer, 2003.
[14] S. Haykin, "Cognitive Radio: Brain-Empowered Wireless Communications," IEEE Journal on Selected Areas in Communications, vol. 23, no. 2, pp. 201–220, Feb. 2005.
[15] T. Ulversøy, T. Maseng, and J. Kårstad, "On Spectrum Sharing in Autonomous and Coordinated Dynamic Spectrum Access Systems: A Case Study," Wireless VITAE 2009, May 2009.
[16] Radio Link Calculation. [Online]. Available: http://www.radius.net/egli--free-space-calculator.html (accessed Apr. 19, 2009).