An Overview of Software Defined Radio Technologies

Petri Isomäki
University of Turku, Department of Information Technology
Lemminkäisenkatu 14 A, 20520 Turku, Finland
[email protected]

Nastooh Avessta
University of Turku, Department of Information Technology
Lemminkäisenkatu 14 A, 20520 Turku, Finland
[email protected]

TUCS Technical Report
No 652, December 2004
Abstract

Software Defined Radio is an emerging technology that has been an active research topic for over a decade. The terms software defined radio and software radio are used to describe radios whose implementation is largely software-based. These radios are reconfigurable through software updates. There are also wider definitions of the concept.

Various military software defined radio programs were the pathfinders that proved the viability of the concept. The latest of these projects have produced radios that are already replacing legacy systems.

Software radio technology is advancing rapidly, at least on most fronts. There is an ongoing standardisation process for framework architectures that enable portability of e.g. waveform processing software across radios for various domains.

Software defined radios are also beginning to show commercial potential. When the software defined radio becomes mainstream, the full potential of adaptability may create possibilities for new kinds of services. From the users' point of view, seamless operation across networks, without caring about the underlying technology, would be a very desirable feature.

Keywords: Software Defined Radio (SDR), Radio Frequency (RF) front end, Analog-to-Digital Converter (ADC), Digital Signal Processor (DSP), Software Communications Architecture (SCA), SWRadio, programmable, reconfigurable
TUCS Laboratory
Communication Systems Laboratory
Contents

1 Introduction
2 Implementation Aspects
  2.1 Radio Frequency Front End
    2.1.1 Superheterodyne Architecture
    2.1.2 Direct Conversion Architecture
    2.1.3 Tuned RF Receiver
    2.1.4 Other Architectures
  2.2 A/D and D/A Conversion
    2.2.1 Noise and Distortions in Converters
    2.2.2 Sampling Methods
    2.2.3 Converter Structures
  2.3 Digital Processing
    2.3.1 Selection of the Processing Hardware
    2.3.2 Multirate Processing
    2.3.3 Digital Generation of Signals
    2.3.4 Bandpass Waveform Processing
    2.3.5 Baseband Waveform Processing
    2.3.6 Bit-stream Processing
  2.4 Reconfiguration and Resource Management
  2.5 Summary
3 Standards
  3.1 Air Interfaces
  3.2 Hardware
  3.3 Middleware
    3.3.1 Model Driven Architecture (MDA)
    3.3.2 Common Object Request Broker Architecture (CORBA)
    3.3.3 Interface Definition Language (IDL)
    3.3.4 Unified Modeling Language (UML)
    3.3.5 Extensible Markup Language (XML)
  3.4 Software Communications Architecture (SCA)
    3.4.1 Application Layer
    3.4.2 Waveform Development
    3.4.3 SCA Reference Implementation (SCARI)
  3.5 SWRadio
    3.5.1 SWRadio Platform
    3.5.2 SWRadio Architecture
  3.6 Summary
4 Software Defined Radio Projects
  4.1 SPEAKeasy
    4.1.1 SPEAKeasy Phase I
    4.1.2 SPEAKeasy Phase II
  4.2 Joint Tactical Radio System (JTRS)
    4.2.1 Background
    4.2.2 Architecture
    4.2.3 Wireless Information Transfer System (WITS)
    4.2.4 SDR-3000
  4.3 Other SDR Projects
    4.3.1 Joint Combat Information Terminal (JCIT)
    4.3.2 CHARIOT
    4.3.3 SpectrumWare
    4.3.4 European Perspective: ACTS and IST Projects
    4.3.5 GNU Radio
  4.4 Summary
5 Conclusions
Abbreviations
1 Introduction
Software defined radio is an emerging technology that is profoundly changing radio system engineering. A software defined radio consists of functional blocks similar to those of other digital communication systems. However, the software defined radio concept places new demands on the architecture in order to provide the multi-band, multi-mode operation and reconfigurability needed for supporting a configurable set of air interface standards.
This report is organised as follows: Chapter 2 discusses the implementation
aspects of software defined radios. The multi-band, multi-mode operation introduces stringent requirements on the system architecture. To achieve the required
flexibility, the boundary of digital processing should be moved as close as possible to the antenna and application specific integrated circuits should be replaced
with programmable processing elements. The exact point where the conversion
between digital and analog waveforms is done depends on the architecture.
The requirement of supporting multiple frequency bands complicates the design of the RF front end and the A/D and D/A converters. The RF front end should
be adjustable or directly suitable for different center frequencies, bandwidths and
other waveform requirements set by the different standards. The choice of the RF
front end architecture also depends on the availability of the A/D and D/A converters. In software defined radios, one of the typical places for the conversions
is between the stages of channel modulation at an intermediate frequency. The
need for reconfigurability restricts the choice of the digital processing platform,
which may be a combination of FPGAs, DSPs and general purpose processors or
a completely new type of computing environment.
Chapter 3 discusses standards related to software defined radios. Standards are of enormous importance for quality, efficiency, compatibility, etc. Currently, the wireless communication industry and end users have to deal with the problems arising from the constant evolution of air interface standards and their variations across the world. Software defined radios can be seen as a solution to many of these problems.
There are several standards bodies relevant to software radios: ANSI, ARIB, ETSI, IEEE, ISO, ITU, OMG, PCI, TIA, VSO, etc. There are standards, for example, for interconnects, analog hardware, buses and backplanes, internetworking and object oriented architectures. The expanding number of air interface standards has resulted in a need to develop multi-mode radios, both for military and for commercial applications.
Two framework architectures, the SCA and the SWRadio, have been developed. The SCA is the de facto standard for military and commercial software
defined radio development and the SWRadio is the result of an ongoing project
for building an open international industry standard using the SCA as a basis.
Chapter 4 reviews the historical perspective of software defined radio architectures and the current state of the art by presenting a few of the most influential projects. In addition, a few other projects related either to research on software defined radio technology or to the development of deployable radio sets are presented. An interesting question is what the architectures used in these projects have in common, and whether there is an architecture that has proven to be optimal.
Chapter 5 concludes this overview of software defined radios.
2 Implementation Aspects
A software defined radio (SDR) consists of, for the most part, the same basic
functional blocks as any digital communication system [19]. Software defined
radio places new demands on many of these blocks in order to provide multiple band,
multiple service operation and reconfigurability needed for supporting various air
interface standards. To achieve the required flexibility, the boundary of digital
processing should be moved as close as possible to the antenna, and application
specific integrated circuits, which are used for baseband signal processing, should
be replaced with programmable implementations [8].
The functions of a typical digital communication system can be divided into bit-stream processing, baseband waveform processing and bandpass processing. The
transmitter of a digital radio can be further divided into an information source, a
source encoder, an encryptor, a channel encoder, a modulator, a digital-to-analog
converter (DAC) and a radio frequency (RF) front end block. Correspondingly,
the receiver consists of an RF front end, an analog-to-digital converter (ADC), a
synchronisation block, a demodulator, a detector, a channel decoder, a decryptor,
a source decoder and an information sink [1, 2]. The exact point where the conversion between digital and analog waveforms is done depends on the architecture.
The converters have been deliberately left out of Figure 2.1. In conventional
radio architectures, the conversion is done at the baseband, whereas in software
defined radios, one of the typical places for the conversion is between the stages
of channel modulation, at an intermediate frequency.
The multi-band, multi-mode operation of an SDR introduces stringent requirements on the underlying system architecture. The requirement of supporting multiple frequency bands affects the design of the RF front end and the A/D and D/A converters [2]. The RF front end should be adjustable or directly suitable for the different center frequencies and bandwidths required by the different standards that the SDR supports. The front end architectures also differ in their suitability for the waveform requirements of different operating modes. The RF front end architectures are discussed in Section 2.1. The choice of the RF front end architecture also depends on the availability of suitable ADCs and DACs. These data converters are discussed in Section 2.2. The need for reconfigurability and reprogrammability restricts the choice of the digital processing platform. For instance, the platform may be a combination of field programmable gate arrays (FPGAs), digital signal processors (DSPs) and general purpose processors (GPPs). The aspects related to digital processing are discussed in Section 2.3. Reconfigurability and the need for resource management are discussed in Section 2.4. The chapter is summarised in Section 2.5.
2.1 Radio Frequency Front End
Although an ideal software radio would have a very minimal analog front end, consisting of an analog-to-digital converter at the antenna, any practical implementation still needs an RF front end, and the design of a reconfigurable RF part remains a very complicated issue [4, 2, 7]. The receiver section is more complex than the transmitter, and the ADC is the most critical part limiting the choice of the RF front end architecture [2]. The main functions of the radio frequency front end are down and up conversion, channel selection, interference rejection and amplification.

Figure 2.1: Block diagram of a digital radio system
The transmitter side of the RF front end takes the signal from the digital-to-analog converter, converts it to the transmission radio frequency, amplifies it to the desired level, limits its bandwidth by filtering in order to avoid interference and feeds it to the antenna [3].
The receiver side converts the signal from the antenna to a lower center frequency such that the new frequency range is compatible with the ADC, filters
out noise and undesired channels and amplifies the signal to the level suitable for
the ADC. Common to every receiver architecture, apart from fully digital ones, is that the antenna feeds the signal through an RF bandpass filter to a low noise
amplifier (LNA). Automatic gain control (AGC) keeps the signal level compatible with the ADC. Design objectives include achieving a suitable dynamic range
and minimising additive noise while minimising the power consumption. Usually
there has to be a trade-off between power consumption and the dynamic range.
The following subsections present different receiver architectures, i.e. superheterodyne, direct conversion, tuned RF and pure digital receivers. Actual transceivers also have a transmitter section, which is likewise based on a single or dual conversion architecture. Many of the design challenges concerning the transmitter are similar to those of the receiver, particularly the power consumption [2].
2.1.1 Superheterodyne Architecture
The heterodyne receiver has been the most common RF front end architecture [2, 3]. It was developed in order to overcome the inherent disadvantages of the direct conversion receiver, also known as the zero IF or homodyne receiver. In a superheterodyne receiver, the received signal is translated into a fixed intermediate frequency (IF) that is lower than the center frequency of the RF signal but higher than the bandwidth of the desired output signal. Often the conversion is done in two stages because of the many advantages of such an architecture: it has lower filtering and quality factor requirements and relaxes the need for isolation between the mixer inputs and the local oscillator. On the other hand, the additional down conversion stage increases power consumption. Each mixer stage also needs an image filter in order to mitigate interference caused by the mixer. The structure of a two stage converter is shown in Figure 2.2.

Figure 2.2: Superheterodyne receiver
A typical heterodyne architecture needs passive frequency dependent components like a dielectric RF filter and surface acoustic wave and ceramic filters in
the IF stages [7]. The bandwidth or center frequency of these filters cannot be
changed. Instead, they are designed according to specific standards. Multiple
front ends or adjustable components are possible solutions but impractical because of their size and weight. This makes the heterodyne architecture unsuitable for the
wideband RF front end of a software defined radio, at least in handsets. The double conversion receiver is most attractive when the channel spacing is small since
the architecture makes narrow filters possible.
The IF signal can also be processed digitally if the A/D conversion is done
before the last stage of down conversion. In that case, digital post processing
algorithms can be used to analyse and mitigate imperfections caused by the analog
front end [2, 4].
2.1.2 Direct Conversion Architecture
The direct conversion receiver (DCR) needs a significantly lower number of parts and is conceptually attractive because of its simplicity. Despite its problems, the DCR concept has again gained attention as a result of its suitability for use with multiple standards [7]. In the DCR, the received signal is directly down converted to baseband. The down converted signal is then prefiltered by a variable frequency anti-aliasing filter and, after analog-to-digital conversion, the desired channels are selected by software filters. Figure 2.3 illustrates the structure of the analog part of a direct conversion receiver that uses quadrature sampling.
Figure 2.3: Direct conversion receiver with quadrature sampling

Direct conversion receivers have so far been suitable only for modulation methods that do not have a significant part of the signal energy near DC. There are also problems associated with the fact that the local oscillator of a DCR is at the signal band, including possible unauthorised emissions and internal interference.
One of the problems is that phase noise falls within the baseband. Thus, the DCR
architecture needs an extremely stable local oscillator. Some of the problems can
be compensated with digital post processing.
Apart from the possibility to switch between some specific bands and modes,
direct conversion receivers do not offer prominent flexibility. There are some air
interface standards that are very difficult to support with a direct conversion receiver.
On the other hand, the concept has been proven to be commercially usable, for at
least some purposes, by an existing GSM receiver. Some sources suggest that the
DCR is the most promising RF front end architecture for software defined radio
[7].
2.1.3 Tuned RF Receiver
The analog part of the tuned radio frequency receiver is composed of only an
antenna connected to a tunable RF bandpass filter and a low noise amplifier with
automatic gain control [2], as shown in Figure 2.4. The main difficulty of this
architecture is the need for an ADC with a very high sampling rate due to the
wide bandwidth of the RF filter. Additionally, the roll-off factor of the filter has to
be taken into account in order to avoid aliasing. The high sampling rate together
with a high dynamic range results in a relatively high power consumption. The demands on the RF filter are also challenging: in practice, the filter can only select a larger band, which has to be filtered digitally afterwards to extract the band of the desired channel. Gain control is also more difficult than in multistage receivers.
In the tuned RF receiver, some of the biggest problems of direct conversion are absent [2]. It is suitable for a multi-mode receiver that supports different bands. This makes the architecture well suited for software defined radio.
Figure 2.4: Tuned RF receiver

2.1.4 Other Architectures

Pure digital RF front end architectures provide a potential advantage, i.e. the flexibility of software based solutions that is desired in all parts of software defined radios. Such a solution puts an ADC at the antenna and does everything else,
including down conversion and filtering, using digital signal processing [4, 2, 3].
The architecture needs A/D conversion and digital processing at very high bandwidths, which results in a high power consumption. Furthermore, the incoming
signal cannot be equalised and as a consequence, error rates are higher. Pure
digital RF processing has yet to see any commercially viable applications.
2.2 A/D and D/A Conversion
Considering the performance and cost of a software defined radio, the analog-to-digital converter and digital-to-analog converter are among the most important components [2]. In many cases, they define the bandwidth, the dynamic range and the power consumption of the radio. The wideband ADC is one of the most challenging tasks in software radio design. The bandwidth and the dynamic range of the analog signal have to be compatible with the ADC. An ideal software radio would use data converters at RF, which would result in conflicting needs: a very high sampling rate, a bandwidth up to several GHz and a high effective dynamic range, while avoiding intolerable power consumption. The physical upper bound for the capabilities of ADCs can be derived from Heisenberg's uncertainty principle. For instance, at 1 GHz the upper limit for the dynamic range is 20 bits or 120 dB [4]. However, there are other limiting factors, including aperture jitter and thermal effects. Unfortunately, advances in ADC performance are very slow, unlike in many other technology areas related to software defined radio.
The Nyquist rate fs/2 determines the maximum frequency for which the analog signal can be faithfully reconstructed from samples taken at the sampling rate fs. Higher frequencies cause aliasing, and therefore the ADC is preceded by an anti-aliasing filter. The number of bits in the ADC defines the upper limit for the achievable dynamic range. The higher the needed dynamic range, the higher the required stop-band attenuation of the filter. For instance, a 16-bit ADC needs over 100 dB of attenuation in order to reduce the power of the aliased signal below half of the energy of the least significant bit (LSB) [4]. State of the art ADCs for wireless devices operate at bandwidths of over 100 MHz with 14-bit resolution and 100 dB spurious free dynamic range, but there is already commercial demand for even better converters for base stations [3].
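The need for the anti-aliasing filter follows directly from the sampling theorem: any component above fs/2 folds back into the first Nyquist zone and becomes indistinguishable from an in-band tone. A minimal sketch (illustrative values only):

```python
import numpy as np

fs = 1000.0   # sampling rate (Hz)
f_in = 900.0  # tone above fs/2 = 500 Hz; it folds to fs - f_in = 100 Hz
n = np.arange(256)

sampled = np.sin(2 * np.pi * f_in * n / fs)
folded = np.sin(2 * np.pi * (fs - f_in) * n / fs)

# After sampling, the 900 Hz tone is indistinguishable from a 100 Hz tone
# (up to a sign flip), which is why it must be attenuated before the ADC.
assert np.allclose(sampled, -folded)
```

Any energy the filter leaves above fs/2 lands on top of in-band signals, which is why the required stop-band attenuation tracks the converter's dynamic range.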
The analog front end of an ADC has a direct influence on the dynamic range. The non-linearities of the front end cause intermodulation. The spurious free dynamic range (SFDR) denotes the difference between the minimum detectable input (the noise floor) and the input level at which the third order distortion rises above that floor [2]. Different air interface types and standards place different demands on the dynamic range. A large SFDR is needed to allow recovery of small scale signals when strong interferers are present.
Different types of receiver architectures need different sampling methods [3].
A superheterodyne receiver or a direct conversion receiver may have an I/Q baseband signal as the analog output, for which quadrature baseband sampling is
needed. Another possibility is an intermediate frequency analog output, for which
a suitable sampling strategy is for example IF band-pass sampling by using a
sigma-delta ADC. Direct sampling is a suitable method for low IF analog signals.
Although ADC performance is often the limiting factor in the SDR concept, and is usually more widely discussed in this context, the transmit path is also
a design problem of comparable complexity [3]. The requirements for DACs include high linearity, sufficient filtering and isolation of the clock from the output,
in order to avoid distortion and out-of-band emissions.
The following subsection discusses distortions in converters and related considerations from the point of view of different air interfaces. The subsequent subsections present different sampling methods and converter structures, taking into
account issues related to the suitability for SDRs.
2.2.1 Noise and Distortions in Converters
Distortions in data converters include quantisation noise, overload distortion, linear transfer errors, non-linear errors, aperture jitter and thermal noise [2].
Quantisation noise denotes the unavoidable error caused by the approximation of a continuous valued signal by discrete levels; it is modelled as a noise source. Its effects can be reduced by oversampling and noise shaping. Oversampling increases the SNR of the system because a part of the noise power can be removed by filtering. The latter method is discussed at the end of this subsection.
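Assuming the quantisation noise is white, spreading it over a wider sampled bandwidth and then filtering back to the signal band improves the SNR by 10·log10 of the oversampling ratio, i.e. roughly 3 dB (half a bit) per doubling of the rate. A small sketch of this relation:

```python
import math

def oversampling_gain_db(osr: float) -> float:
    """SNR gain from oversampling by `osr` and filtering to the signal
    band, under the white-quantisation-noise assumption."""
    return 10 * math.log10(osr)

# Each doubling of the sampling rate buys about 3 dB, i.e. half a bit
print(oversampling_gain_db(2))   # ~3.01 dB
print(oversampling_gain_db(16))  # ~12.04 dB
```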
Overload distortion is caused by input signals exceeding the allowed range
that the ADC can represent. It is difficult to fully avoid overload and, although
overload distortion may significantly reduce SNR, it is sometimes useful to allow some distortion. Lowering the gain reduces the number of bits actually used.
Also, spread spectrum signals have different requirements than non-spread signals
because of the different impact of individual symbol errors on the overall performance. The response time of the automatic gain control is a critical parameter.
Both too slow and too fast response may degrade the performance. The response
should be fast enough to allow full-scale range utilisation while avoiding overloading, but excessively fast response introduces undesired amplitude modulation. In
general, the optimal time constant depends on channel effects and waveform specific aspects [2]. Thus, the optimal automatic gain control strategy depends on
8
the air interface standard used, which has to be taken into account in SDR design.
Avoiding clipping due to overload may effectively use the entire most significant
bit. Therefore, the usable dynamic range may be one or two bits lower than the
resolution of the ADC [4].
Offset and gain errors are linear transfer characteristic errors. Non-linearities
can be described with two measures: integral non-linearity denotes the maximum
deviation of the transfer characteristic from the ideal straight line and differential
non-linearity denotes the variation of the distances of quantisation levels from the
desired step size.
Thermal noise is an unavoidable property of resistive components. Software defined radios work with multiple bandwidths, which results in a varying noise floor. For wideband signals, thermal noise may considerably reduce the dynamic range.
The uncertainty in the spacing between sampling instants is called aperture jitter.
This causes uncertainty in the phase, deteriorates the noise floor performance and
increases intersymbol interference. Glitches are transient differences from the
correct output voltage in DACs. Aperture jitter and glitches are the most important
timing problems of ADCs and DACs.
The effective number of bits (ENOB) of a converter can be calculated from the signal-to-noise-and-distortion (SINAD) ratio [2]:

    ENOB = (SINAD − 1.76 dB) / 6.02

and the SNR of a system in which other effects are negligible compared to aperture jitter is given by [2]

    SNR = −10 log10((2π f Δa)²)

where f is the frequency of the input signal and Δa² is the variance of the aperture jitter. By using these equations, an upper limit on the effective number of bits at different frequencies can be calculated. The limit is shown for one value of the aperture jitter in Figure 2.5. Thermal noise may also be the dominant limiting factor, whereas conversion ambiguity may become dominant at high frequencies. The performance of currently available ADCs usually lies below the line shown in the figure. More detailed graphs can be found in [4] and [2].
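The two relations above can be combined to compute the jitter-limited ENOB ceiling plotted in Figure 2.5; a small sketch (using the common 1.76 dB rounding):

```python
import math

def jitter_limited_snr_db(f_hz: float, jitter_rms_s: float) -> float:
    """SNR when aperture jitter dominates: -10*log10((2*pi*f*jitter)^2)."""
    return -10 * math.log10((2 * math.pi * f_hz * jitter_rms_s) ** 2)

def enob_from_sinad(sinad_db: float) -> float:
    """Effective number of bits from SINAD (in dB)."""
    return (sinad_db - 1.76) / 6.02

# ENOB ceiling for 0.5 ps RMS aperture jitter, as in Figure 2.5
for f in (1e7, 1e8, 1e9):
    snr = jitter_limited_snr_db(f, 0.5e-12)
    print(f"{f:.0e} Hz: SNR {snr:.1f} dB -> ENOB limit {enob_from_sinad(snr):.1f}")
```

At 1 GHz this gives an SNR of about 50 dB, i.e. roughly 8 effective bits, consistent with the curve in the figure.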
Dithering is a method that is used to increase the spurious free dynamic range [3]. SFDR is essential from the point of view of narrow band air interfaces. Dither is pseudo random noise that is added to the input of the ADC. The main goal is to maximise the SFDR while minimising the adverse effect on SNR. Small-scale dithering is used to decorrelate quantisation noise, which reduces the harmonics caused by correlated noise. Dithering may also reduce errors caused by differential non-linearities. There are two large-scale dithering techniques that are used to mitigate the effects of non-linearities: out-of-band dithering and subtractive
dithering [2]. In out-of-band dithering, noise is added outside the frequency band
of the desired signal. This can be accomplished by using a band-reject filter.

Figure 2.5: The effect of 0.5 ps aperture jitter on the performance of an ADC (effective number of bits versus sampling rate)
The added noise can be easily filtered out in the digital domain. In subtractive dithering, digital pseudo noise is converted to the analog domain and added to the input of the ADC. Then, the noise is subtracted after the analog-to-digital conversion.
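As an illustration of how dither decorrelates quantisation error, consider a DC input smaller than one LSB: without dither the quantiser output is stuck at a fixed code, while with ±0.5 LSB uniform dither the error becomes random and averages out (a simplified sketch of the small-scale idea, not of the subtractive scheme itself):

```python
import numpy as np

rng = np.random.default_rng(0)
lsb = 1.0 / 2**8           # step size of an idealised 8-bit quantiser
x = 0.3 * lsb              # DC input below one LSB

def quantise(v):
    return np.round(v / lsb) * lsb

n = 100_000
plain = quantise(np.full(n, x))                       # no dither
dither = rng.uniform(-0.5 * lsb, 0.5 * lsb, size=n)   # +-0.5 LSB noise
dithered = quantise(x + dither)

# Without dither the error is deterministic and never averages away;
# with dither the mean output recovers the sub-LSB input value.
assert abs(plain.mean() - x) > 0.2 * lsb
assert abs(dithered.mean() - x) < 0.05 * lsb
```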
2.2.2 Sampling Methods
Direct sampling, or Nyquist sampling, is based on the sampling theorem, which
requires the sampling rate to be at least twice the highest frequency component
of the analog low-pass signal. In practical implementations, anti-alias filtering is
among the central issues of the converter design. By oversampling, the filtering
requirements may be relaxed. On the other hand, oversampling requires higher
speed ADCs and increases the data rate in digital processing [2].
In the case of quadrature sampling, the input signal is split into in-phase and
quadrature components. Their bandwidth is half of the bandwidth of the original
signal. Thus, quadrature sampling reduces the required sampling rate. The downside is the need for two phase-synchronised converters. The demodulation of phase or frequency modulated signals needs both in-phase and quadrature samples because these components carry different information. By using digital processing, e.g. a Hilbert transform filter, this splitting can also be
performed in the digital domain.
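The digital alternative mentioned above can be sketched with the analytic-signal construction: discarding the negative-frequency half of the spectrum yields a complex signal whose real and imaginary parts are the in-phase and quadrature components (a minimal FFT-based sketch, assuming the tone sits exactly on an FFT bin):

```python
import numpy as np

def analytic_signal(x):
    """Analytic signal via the FFT: keep DC and positive frequencies,
    zero the negative ones (equivalent to a Hilbert transform filter)."""
    n = len(x)
    spectrum = np.fft.fft(x)
    weights = np.zeros(n)
    weights[0] = 1.0
    weights[1:n // 2] = 2.0
    weights[n // 2] = 1.0          # Nyquist bin (n assumed even)
    return np.fft.ifft(spectrum * weights)

fs, n = 1000.0, 1000
t = np.arange(n) / fs
x = np.cos(2 * np.pi * 100 * t)    # real input tone at 100 Hz

z = analytic_signal(x)
i_comp, q_comp = z.real, z.imag
# For a cosine input, the quadrature component is the matching sine
assert np.allclose(q_comp, np.sin(2 * np.pi * 100 * t), atol=1e-9)
```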
RF bands of radio systems have band-pass rather than low-pass characteristics. Bandpass sampling, or sub-sampling, also utilises the sampling theorem, i.e. the sampling rate has to be at least twice the bandwidth of the input signal. In this method, images are viewed as frequency translated versions of the desired spectrum instead of merely harmful by-products. It is necessary that information in any Nyquist zone does not interfere with information in other Nyquist zones. With this approach, down conversion is also provided by the sampling process.
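The Nyquist-zone condition translates into discrete ranges of valid sampling rates. A small sketch computing them for a band [f_low, f_high] (the standard formula, with m indexing the Nyquist zone the band falls into):

```python
def valid_bandpass_rates(f_low: float, f_high: float):
    """Sampling-rate ranges (lo, hi) for which the band [f_low, f_high]
    aliases to baseband without overlapping other Nyquist zones:
    2*f_high/m <= fs <= 2*f_low/(m-1), for integer m up to f_high/B."""
    bandwidth = f_high - f_low
    ranges = []
    for m in range(1, int(f_high // bandwidth) + 1):
        lo = 2 * f_high / m
        hi = 2 * f_low / (m - 1) if m > 1 else float("inf")
        if lo <= hi:
            ranges.append((lo, hi))
    return ranges

# A 10 MHz-wide band at 70-80 MHz can be sampled far below the 160 MHz
# that low-pass Nyquist sampling would require, e.g. at 20 MHz (m = 8),
# performing down conversion in the sampling process itself.
print(valid_bandpass_rates(70e6, 80e6))
```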
2.2.3 Converter Structures
Although there are a lot of different ADCs available, only a few widely used core
architectures exist [3]. The suitability of a particular architecture depends on the
requirements of a given system. Common converter architectures include parallel,
segmented, iterative and sigma-delta structures [2].
A flash converter consists of parallel comparators and a resistor ladder. The
benefits of this architecture are simple design and very low conversion times.
This makes flash converters an attractive choice when only a minimal dynamic range is needed. The complexity of the architecture increases exponentially as the number of bits increases, and 10 bits is the practical upper limit [3]. Another drawback is that there are difficulties with linearity. Additional bits make this problem even worse, and the main benefit, the high speed, is lost as the effective bandwidth decreases when more comparators are connected together.
In contrast, multistage converters are scalable, i.e. high speed, high resolution converters can be constructed. The converted digital signal is converted back to an analog signal by a DAC between the stages. After a subtraction, only the residual signal is fed to the next stage. Multistage ADCs have many advantages: high precision without exponential growth of complexity or long delays. However, the architecture has some challenges. The resolution of the DAC at the first stage has to be greater than the resolution of the entire ADC.
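The subtract-and-amplify operation can be sketched for an idealised two-stage pipeline: each stage resolves a few bits and passes the amplified residue on, so the stage outputs concatenate into the full code (idealised components only, ignoring the inter-stage gain and DAC accuracy issues just mentioned):

```python
def pipeline_adc(v: float, bits_per_stage=(4, 4)) -> int:
    """Idealised multistage ADC for an input v in [0, 1): each stage
    quantises coarsely, subtracts its own DAC value and amplifies the
    residue for the next stage."""
    code = 0
    residue = v
    for bits in bits_per_stage:
        levels = 1 << bits
        digit = min(int(residue * levels), levels - 1)  # coarse ADC
        code = (code << bits) | digit                   # append stage bits
        residue = residue * levels - digit              # amplified residue
    return code

# Two ideal 4-bit stages behave exactly like one 8-bit converter
assert pipeline_adc(0.5) == 128
assert all(pipeline_adc(k / 256) == k for k in range(256))
```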
The sigma-delta ADC consists of an analog filter, a comparator, a DAC and a decimator with a digital filter, as shown in Figure 2.6. The comparator indicates whether the output signal should increase or decrease. Sigma-delta ADCs work by using oversampling. An advantage is that they remove quantisation noise from the band of narrowband signals. In many cases, this architecture is suitable for software defined radios. Sigma-delta modulators can be used for both direct and bandpass sampling.
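The feedback loop can be sketched as a first-order modulator: an integrator accumulates the difference between the input and the fed-back 1-bit output, and a decimating average recovers a multi-bit value (a simplified model; a real design would use a proper decimation filter rather than a plain average):

```python
import numpy as np

def sigma_delta_1st_order(x):
    """First-order sigma-delta modulator: integrator + 1-bit quantiser,
    with the quantised output fed back through a 1-bit DAC (+-1)."""
    integrator = 0.0
    feedback = 0.0
    out = np.empty(len(x))
    for i, sample in enumerate(x):
        integrator += sample - feedback
        feedback = 1.0 if integrator >= 0.0 else -1.0
        out[i] = feedback
    return out

osr = 64                                        # oversampling ratio
t = np.arange(osr * 256)
x = 0.5 * np.sin(2 * np.pi * t / (osr * 64))    # slow input tone

bits = sigma_delta_1st_order(x)
decimated = bits.reshape(-1, osr).mean(axis=1)  # crude decimation filter

# The averaged bit-stream tracks the (block-averaged) input closely
ref = x.reshape(-1, osr).mean(axis=1)
assert np.max(np.abs(decimated - ref)) < 0.1
```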
2.3
Digital Processing
Digital processing is the key part of any software defined radio: the programmable digital processing environment makes reconfiguration to any
air interface possible. The digital processing segment of an SDR is functionally similar
to that of other digital communication systems. The differences are that the underlying
Figure 2.6: Sigma-delta Analog-to-Digital Converter
hardware has to be reprogrammable and there has to be some control software for
handling the reconfiguration.
The following subsections discuss the selection of the processing platform,
techniques related to digital waveform processing and finally the bit-stream section of the SDR.
2.3.1 Selection of the Processing Hardware
The need for reconfigurability necessitates the use of programmable digital processing hardware. The reconfiguration may be done at several levels. There may
be parameterised components that are fixed ASICs, while at the other end, the hardware itself may be totally reconfigurable, e.g. FPGAs. A compromise has to be
made between programmability, reconfiguration time, processing power,
power consumption, cost, etc.
The most optimised hardware implementation can be achieved using ASICs, but it
is very inconvenient to have a dedicated chip for every operating mode. Digital
signal processors excel in programmability, but they cannot handle everything, at
least not with tolerable power consumption [9]. FPGAs are often used for the
most intensive computations. Their reconfiguration time is significantly longer
than the time needed for reprogramming DSPs and general purpose processors.
It might be desirable for a software defined radio to switch between different operating modes based on channel conditions or other changes in the environment.
Customised FPGAs, called configurable computing machines (CCMs), provide
real-time paging of algorithms [9]. They use coarser granularity than traditional
FPGAs. Architectures of this kind can be used as stand-alone processors or as
co-processors. Low power consumption still remains a problem. There are also
other new approaches at the research stage, including application specific instruction
set processors and field programmable functional arrays, which consist of configurable sections specialised for certain tasks. These and a few other approaches are
presented in [9]. An FPGA-macro-based SDR architecture is proposed in [17].
The total amount of processing capacity sets another limit on implementable
systems. A few simple equations for calculating the needed processing capacity
are presented in [4]. An illustrative calculation shows that a GSM receiver
requires over 40 million operations per second (in standardised
MOPS) when IF processing is excluded. Vanu Inc. has also published a table
containing the number of operations needed by a few air interfaces implemented
in their line of SDR products. For instance, a GSM transceiver needs 96 Mcycles
per second on a Pentium III platform, whereas an 802.11b WLAN receiver requires
512 Mcycles per second [31].
2.3.2 Multirate Processing
Multirate signal processing is needed for many purposes in software defined radios [2]. First of all, the output of the ADC and the input to the DAC are most
conveniently operated at a fixed rate, whereas for digital processing, it is reasonable to use as low data rates as possible without loss of information in order to
avoid excessive computational burden. Also, each supported standard may have
a different symbol rate, and it is desirable to use sampling rates that are integer
multiples of the symbol rate.
Channelisation of data for parallel computing at a lower sampling rate may be
useful particularly at base stations, since it is more cost effective than
using high-speed DSPs. Multirate processing also allows a trade-off between resolution and speed in ADCs, which is useful for supporting different operating
modes of an SDR.
Digital synchronisation of the sampling instant is yet another application of sampling rate conversion. The idea is that the input signal is interpolated
to a higher rate and the best sampling point is chosen. Early-late gate synchronisation is a method for determining the location of the peak, i.e. the optimal sampling
point.
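The idea behind the early-late gate can be shown with a toy example: for a symmetric pulse, samples taken slightly before and slightly after the candidate instant are equal only at the peak, so their difference indicates which way to move. The pulse shape and offsets below are arbitrary illustrative choices.

```python
# Early-late gate timing error detector on a symmetric (triangular)
# test pulse. The error is zero at the peak, and its sign tells the
# direction of the required timing correction.
def pulse(t: float) -> float:
    return max(0.0, 1.0 - abs(t))          # peak at t = 0

def timing_error(t_sample: float, delta: float = 0.1) -> float:
    return pulse(t_sample + delta) - pulse(t_sample - delta)

print(timing_error(0.0))    # on the peak: error is zero
print(timing_error(-0.3))   # positive: sampling too early, move later
print(timing_error(0.3))    # negative: sampling too late, move earlier
```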
The principles of sampling rate conversion can be found, for example, in [2].
The cascaded integrator-comb (CIC) filter is a suitable structure for both interpolation
and decimation in SDRs. It has low computational demands and is also
particularly suitable for FPGA implementation owing to its simple basic operations.
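As a concrete illustration, a single-stage CIC decimator reduces to a running sum followed by decimation and a first difference; only additions and subtractions appear, which is what makes the structure FPGA-friendly. This is a sketch of the N = 1 case, not a general N-stage implementation.

```python
# Single-stage CIC decimator: integrator at the input rate, decimation
# by R, comb (first difference) at the output rate. For N = 1, each
# output equals the sum of R consecutive input samples.
def cic_decimate(x, R):
    acc, running = 0, []
    for v in x:                    # integrator: running sum
        acc += v
        running.append(acc)
    kept = running[R - 1::R]       # keep every R-th accumulator value
    out, prev = [], 0
    for v in kept:                 # comb: difference at the low rate
        out.append(v - prev)
        prev = v
    return out

print(cic_decimate([1] * 8, 4))    # [4, 4]: each output sums 4 inputs
```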
2.3.3 Digital Generation of Signals
In software defined radios, the parameters of the system are required to be adjustable, even dynamically at runtime. This places demands on the versatility of the
processing hardware at all stages. It is the availability of programmable, fast digital techniques that has made the software defined radio concept a feasible architecture for implementing radio systems. The synthesis of waveforms is an essential part of any radio communication system. The generation of sinusoidal
signals is of particular importance; they are used for many purposes: modulation,
filtering and pulse shaping [2].
Direct digital synthesis produces signals purely digitally; the signals are discrete in
time. In comparison to analog methods, the benefits include high accuracy, immunity to noise, the ability to generate arbitrary waveforms, short switching times and
the small physical size of the circuits.
There are a number of approaches to generating digital signals. One of the
basic methods is to store a sampled waveform in a look-up table (LUT). The sampled values are then sent to the output periodically. Sinusoidal signals can be
generated this way. There are three sources of error in direct digital synthesis:
amplitude error due to the limited number of bits used in quantisation, phase truncation due to the limited number of bits used to address the locations of the table,
and the resolution of the DAC. Phase truncation is a significant source of error,
and many methods have been developed to cure the problem.
There are many approaches to reducing the size of the LUT or avoiding the spurious
signals caused by phase truncation. Interpolation reduces the required size significantly. There are also sine wave generators that do not need a LUT for generating
a wave of a fixed frequency. These include the CORDIC algorithm and IIR oscillators.
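A phase-accumulator NCO with a sine LUT, including the phase truncation discussed above, can be sketched as follows. The word lengths are arbitrary illustrative choices.

```python
import math

# Numerically controlled oscillator: a phase accumulator of ACC_BITS
# bits is advanced by the tuning word each sample; only its top
# LUT_BITS bits address the sine table (phase truncation). The output
# frequency is f_out = tuning_word * f_clk / 2**ACC_BITS.
ACC_BITS, LUT_BITS = 16, 8
LUT = [math.sin(2 * math.pi * i / 2 ** LUT_BITS) for i in range(2 ** LUT_BITS)]

def nco(tuning_word: int, n_samples: int):
    phase, out = 0, []
    for _ in range(n_samples):
        out.append(LUT[phase >> (ACC_BITS - LUT_BITS)])
        phase = (phase + tuning_word) & (2 ** ACC_BITS - 1)
    return out

# tuning word 1024 -> period of 2**16 / 1024 = 64 samples
samples = nco(1024, 64)
```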
2.3.4 Bandpass Waveform Processing
In software defined radios, the wideband ADC and DAC are typically placed
before the final IF and channelisation filters. This allows digital processing before
demodulation, and it is a cost effective solution for supporting multiple channel
access standards, since IF processing is done using programmable hardware [5].
The bandpass processing segment provides the mapping between modulated
baseband signals and the intermediate frequency. In a receiver, a wideband digital
filter selects the band defined by the selected standard. IF processing also selects
the desired channel by filtering and down-converts the signal to baseband. Because the sample rate is very high, decimation is needed before passing the signal
to baseband processing. A typical application requires about 100 operations per
sample, which results in very high total computational requirements.
Spreading and despreading in CDMA systems are also bandpass processing
functions. Like any other digital processing at IF, they are computationally intensive.
2.3.5 Baseband Waveform Processing
The baseband waveform segment processes digital baseband waveforms. This is
the stage where the first part of channel modulation is done [5]. Pulse modulation
produces a digital waveform from a bit-stream. Pulse shaping is used to avoid
inter-symbol interference. Predistortion for non-linear channels may also be done
at the baseband processing stage. In a receiver, soft decision parameter estimation,
if used, is also done in this segment. Digital baseband modulations also require
synchronisation in the receiver.
Analog modulation methods may also be emulated using digital waveform
processing with very reasonable computational requirements.
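The pulse shaping mentioned above is classically done with the raised-cosine pulse, whose impulse response crosses zero at every neighbouring symbol instant. A direct evaluation of the textbook formula, with time measured in symbol periods and an arbitrarily chosen roll-off, can be sketched as:

```python
import math

# Raised-cosine pulse h(t) = sinc(t) * cos(pi*beta*t) / (1 - (2*beta*t)**2),
# with t in symbol periods. The zero crossings at every non-zero integer t
# are what suppress inter-symbol interference. beta is the roll-off factor.
def sinc(x: float) -> float:
    return 1.0 if x == 0.0 else math.sin(math.pi * x) / (math.pi * x)

def raised_cosine(t: float, beta: float = 0.35) -> float:
    denom = 1.0 - (2.0 * beta * t) ** 2
    if abs(denom) < 1e-12:             # removable singularity at t = 1/(2*beta)
        return (math.pi / 4.0) * sinc(1.0 / (2.0 * beta))
    return sinc(t) * math.cos(math.pi * beta * t) / denom

print(raised_cosine(0.0))              # 1.0 at the sampling instant
print(abs(raised_cosine(1.0)) < 1e-12) # zero at the neighbouring symbol
```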
2.3.6 Bit-stream Processing
The bit-stream processing segment of a transmitter handles encryption, multiplexing and coding of the bit-stream; conversely, the receiver handles the corresponding inverse functions [5].
Different source-coded bit-streams are encrypted, channel encoded and
multiplexed into one stream. Forward error control (FEC) consists of channel
coding (convolutional and/or block coding), interleaving and automatic repeat request (ARQ) functionality. Interleaving is needed for the efficient use of coding for
error detection and correction.
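The effect of interleaving on burst errors can be illustrated with a simple block interleaver (a generic textbook structure, not the interleaver of any particular standard): bits are written into a matrix row by row and read out column by column, so that a burst in the channel is dispersed across many codewords.

```python
# Block interleaver: write a rows x cols block row-wise, read it
# column-wise. De-interleaving is the same operation with the
# dimensions swapped.
def interleave(bits, rows, cols):
    assert len(bits) == rows * cols
    return [bits[r * cols + c] for c in range(cols) for r in range(rows)]

def deinterleave(bits, rows, cols):
    return interleave(bits, cols, rows)

data = list(range(6))
sent = interleave(data, 2, 3)
print(sent)                                  # [0, 3, 1, 4, 2, 5]
print(deinterleave(sent, 2, 3) == data)      # True: round trip restores order
```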
The bit-stream processing blocks of an SDR are very similar to those of fixed-standard
radios. As an additional requirement, they have to be parameterisable or reprogrammable in order to ensure the ability to adapt to the needs of different standards. In
military software radios, encryption and other information security functions are
more challenging design problems.
2.4 Reconfiguration and Resource Management
Reconfigurability, an essential part of the SDR concept, is a complicated and wide-ranging issue. Many aspects related to reconfiguration and resource
management are discussed in [6]. The object oriented representation of resources
is discussed, for instance, in [2]. Object broker technologies are presented in the
next chapter.
The reconfiguration can be done at multiple levels. At an early stage, software
radios were defined as radios whose air interface functionality is reconfigurable
by software. A newer definition would be a radio whose functionality is software
reconfigurable [32].
The reconfiguration can be performed by software download over the air or
by using e.g. wired connections. The reconfigurable functionality may be located
at any of the protocol layers. Low layer reconfigurability allows roaming or
bridging, whereas at the highest layers, reconfigurability opens possibilities for
new, e.g. context aware, services and applications.
In the simplest case, the requirements set by the reconfigurability of SDRs
concern only the user terminals. The previous sections discussed issues mostly
related to the physical layer. In a more complex case, the system negotiates an optimal configuration based on the environment, the queried services and the capabilities of the user terminal and the network. Thus, management of network reconfigurability and adaptive protocols are needed. The requirement of on-demand
service data flows with Quality of Service (QoS) parameters is a major reason for
the need for parameterisable protocols [6]. The flexible provision of services calls for
open APIs and reconfiguration management.
Communication profiles are needed for the management of reconfiguration.
Profiles have to be defined for users, terminals, services and networks [6]. There
also have to be means for monitoring and identification of available air interfaces
and for the management of radio resources. One of the challenges of future wireless
systems is finding means for more flexible, dynamic and efficient allocation of
spectrum. Technical and regulatory work is needed to set rules for more
optimal spectrum management.
2.5 Summary
The implementation of software defined radios is a wide and challenging issue.
The starting point in this chapter was the general structure of digital communication systems. The requirements set by the need for multi-mode operation and
reconfigurability affect the implementation of various parts of a radio node,
ranging from the selection of processing hardware to the RF front end. There
are a few critical points, such as the analog-to-digital conversion and the power
consumption of many of the components, that limit the choice of physical layer
architecture and, in the end, the achievable performance. The sections of this
chapter discussed these implementation aspects.
SDR has often been seen as a design problem mostly related to the low-level
implementation of a radio node capable of operating in multiple modes. However,
issues such as the management of resources and the handling of reconfiguration show that there are other significant aspects as well. The next chapter
introduces various standards related to SDRs and the current efforts towards
standardising frameworks for SDR development.
3 Standards
Standards are of enormous importance for quality, reliability, efficiency and compatibility. The information technology industry is no exception:
standards are essential from the point of view of e.g. compatibility, portability of
software components and the development process of products in general.
Currently, the wireless communication industry and end users have to deal with
the problems arising from the constant evolution of air interface standards,
different standards in different countries, incompatibilities between wireless networks, and the existence of legacy devices. SDRs can be seen as a solution to many
of these problems. On the other hand, SDRs have to conform to an exceptionally
large number of standards due to their multi-mode operation.
There are several standards bodies relevant to SDRs [4]: ANSI, TIA and IEEE
are responsible for interconnect standards, e.g. serial lines and LANs. These
organisations and ETSI define standards for analog hardware, e.g. antennas, RF
connectors and cables. Bus and backplane standards bodies include VSO and PCI.
Organisations responsible for internetworking standards, e.g. TCP/IP and ATM,
include ITU, ISO, ETSI, ARIB, IEEE, TIA and ANSI. OMG and the Open Group
define standards for object oriented software.
Section 3.1 discusses the role of air interface standards as a reason for the need
to develop multi-mode radios, from the point of view of military and commercial
applications. Section 3.2 presents various types of hardware standards related
to SDRs. Section 3.3 discusses middleware technologies, which are currently
the central focus of the most significant SDR projects. Sections 3.4 and 3.5
present two framework architectures for SDRs, i.e. the SCA and the SWRadio.
The SCA is the de facto standard for military and commercial SDR development,
and the SWRadio is the result of an ongoing project for building an international
commercial standard based on the SCA. The chapter is summarised in Section
3.6.
3.1 Air Interfaces
An overview of air interface modes and related applications in the frequency
bands from HF through EHF (30 MHz - 300 GHz) is given in [4].
The intensive role of military organisations in the development of SDR techniques results from the fact that there is a huge number of global, national and
regional standards [4]. In the US, the army, the navy and the air force have had a
great number of incompatible systems, which is a disadvantage for joint
operations and also results in excessive costs. For instance, implementations
of JTRS compliant military SDRs may support over 40 modes.
Especially in military jargon, air interface modes and standards are called
waveforms, although the support of a radio standard also involves bit-stream processing and the implementation of the higher protocol layers of the standard in question. The originally planned and the actually accomplished waveform support of
the SPEAKeasy are listed, for instance, in [21]. The JCIT and multiple JTRS implementations are examples of military SDRs currently in field use. For the JCIT,
the provided modulation formats can be found in [4], and the operating modes
and supported radio system standards are listed in [36]. Table 1 lists
the currently approved JTRS waveforms.
Table 1: JTRS Waveforms (by priority: KPP / Threshold / Objective) [13]
An example of a civilian application of the SDR concept would be a phone that
supports modes for different areas of the world and different generations of mobile
cellular standards. Actually, SDR techniques have already been deployed in base
stations. In Europe, there has not been an immediate demand for true multi-mode
mobile phones, since the widespread use of GSM has allowed roaming across Europe and many other areas [32], whereas in North America, there are multiple
competing digital cellular radio standards. The adoption of the third generation
mobile phone standards may somewhat change the situation.
Commercially used air interfaces supported by future reconfigurable radios
may include mobile cellular standards, audio and video broadcasts, satellite communications, local area networks, wireless local loops etc. Table 2 shows examples of these wireless systems [34]. The commercial demand for SDRs may arise
from the needs of users to roam seamlessly across networks and to access services anywhere without paying attention to the underlying technology [34, 35].
Table 2: Examples of wireless systems

Indoor      Personal          Wireless local loop    Cellular    Broadcast
W-LAN       PANs              WFA                    GSM         DVB-T
Bluetooth   Ad Hoc Networks   MWS                    EDGE        DVB-H
DECT        Body LANs         xMDS                   UMTS        DAB
                              Satellite broadband    S-UMTS      Satellite DVB-S

3.2 Hardware
Even though one of the main goals of the SDR concept is to perform as many
radio functions as possible in the programmable digital domain, i.e. in software,
hardware standards still play a considerable role from the point of view of modularity. Ideally, different vendors should be able to design hardware modules using
standard interfaces.
For physically connecting separate hardware elements of the radio system, a
number of standardised buses can be used, for example VME, PCI, cPCI, PC-104,
IEEE-1394 and Ethernet [4, 11]. For instance, SPEAKeasy Phase I used the
VME bus, while Phase II used the PCI bus. A radio peripheral designed for the
GNU Software Radio [37] uses USB2 for connecting to the PC that performs most
of the digital processing. Many of the buses have various physically different or
even non-standard connectors, for instance for different form factors. Thus, signalling within the buses and the possible external connectivity are separate issues.
The VME is also a chassis standard. The standardised mechanical specifications become important when commercial off-the-shelf (COTS) components are
used. The use of COTS components has become preferable also for military radios, in
order to reduce acquisition, operation and support costs and to gain upgradeability [11]. A radio node may have serial and network interfaces. Possible physical
interfaces include, for example, RS-232, RS-422, RS-423, RS-485, Ethernet and
802.x [11]. At least base stations and military radios may also need different antennas or other RF components for supporting a wide range of frequency bands
and operating modes. There are standards for the required connectors, waveguides, cables etc.
Of course, the handsets of wireless cellular systems have different demands
from base stations or physically large military radios. At least currently, there
are no practical possibilities to add daughterboards or any other functional units,
apart from memory cards, after manufacturing. Therefore, hardware standards
may seem less important in this context. Yet, handsets often have an external
connector for data transfer, and SDRs requiring reconfigurability may create new
needs for standardisation. Certainly, there are also many other standards related
to e.g. electronic circuit and board design, but they are usually not specific to SDRs.
For the processing needs of SDRs, there are not yet even de facto standards,
and the most significant SDR projects, such as [11], include a hardware abstraction layer for maximising independence from the underlying hardware. The
following section presents an object oriented method for the management and interconnection of hardware elements in heterogeneous processing environments. Actually, there is still a lack of high level tools for describing the systems and then
automatically generating code, especially concerning the partitioning of the processing tasks into parts suitable for the heterogeneous processing environments of
SDRs, which include software and reconfigurable hardware [8]. There may be a
need for standardised procedures for these kinds of tasks as well.
3.3 Middleware
In the context of computer networks, the term middleware denotes the
core set of functions that enable easy use of communication services and distributed application services [33]. In other words, it provides means for the management of applications or services, the mapping of names to the objects that provide
them, connection control, etc. In mobile communications, the middleware may
have functions for link monitoring and for notifying the user or components of
significant events. The middleware is also one of the parts essential for the
seamless use of services when multiple wireless standards are used.
Object oriented concepts can be used for partitioning both software and
hardware. This practice provides the broadest reusability and portability. It is especially advantageous for software defined radios, since reconfigurability makes
object oriented techniques and independence from the actual platform essentially
necessary.
The JTRS military radio development program chose OMG’s object management technologies for its framework for SDRs, called Software Communications
Architecture (SCA). The JTRS is discussed in the next chapter and the SCA is
treated in more detail at the end of this chapter.
The Object Management Group (OMG) is an open membership, non-profit
consortium that produces and maintains specifications for interoperable applications [10]. The OMG has hundreds of members, including most of the
large companies in the computer industry. The next subsections introduce several
Figure 3.1: A request from client to implementation using CORBA
OMG’s specifications, by using the definitions from OMG [10, 18] and the SCA
Developer’s Guide [12]. CORBA is the OMG’s middleware that is used in the
SCA, and the other specifications are needed for utilising the middleware for development of systems with this architecture. The OMG’s own specification for
SDR development, i.e. the SWRadio, uses OMG’s Model Driven Architecture.
3.3.1 Model Driven Architecture (MDA)
The OMG Model Driven Architecture defines a model-based approach to software development. The main objective of the MDA is to enable the
portability and reuse of models across different technology platforms. Software
development in the MDA starts with a Platform-Independent Model (PIM) of an
application's functionality and behaviour, typically built in the UML. This model
remains stable as technology evolves. MDA development tools, available
from many vendors, convert the PIM first to a Platform-Specific Model (PSM)
and then to a working implementation on virtually any middleware platform: Web
Services, XML/SOAP, EJB, C#/.Net, OMG's own CORBA, or others. Portability and interoperability are built into the architecture. OMG's industry-standard
modelling specifications support the MDA.
3.3.2 Common Object Request Broker Architecture (CORBA)
CORBA is an open, vendor-independent infrastructure that provides platform independent programming interfaces and models for portable distributed computing
applications. It is particularly suitable for the development of new applications
and their integration into existing systems, due to its independence from programming languages, computing platforms and networking protocols.

Figure 3.2: Interoperability of CORBA ORBs
A CORBA object is a virtual entity that is capable of being located by an object request broker (ORB) and having client requests invoked on it. It is virtual in
the sense that it does not really exist unless it is made concrete by an implementation written in a programming language. A target object, within the context of a
CORBA request invocation, is the CORBA object that is the target of that request.
A client is an entity that invokes a request on a CORBA object. A server is an
application in which one or more CORBA objects exist.
A request is an invocation of an operation on a CORBA object by a client, as
shown in Figure 3.1. An object reference, also known as an IOR (Interoperable
Object Reference) is a handle used to identify, locate, and address a CORBA
object. A servant is a programming language entity that realises (i.e., implements)
one or more CORBA objects. Servants are said to incarnate CORBA objects
because they provide bodies, or implementations, for those objects. Servants exist
within the context of a server application. In C++, servants are object instances of
a particular class.
In order to invoke the remote object instance, the client first obtains its object
reference. When the ORB examines the object reference and discovers that the
target object is remote, it routes the invocation over the network to the remote
object's ORB, as shown in Figure 3.2. OMG has standardised this process at two
key levels: first, the client knows the type of object it is invoking, and the client
stub and object skeleton are generated from the same IDL; second, the client's
ORB and the object's ORB must agree on a common protocol. OMG has defined
this as well: the standard protocol IIOP.
3.3.3 Interface Definition Language (IDL)
The OMG IDL is CORBA’s fundamental abstraction mechanism for separating
object interfaces from their implementations. OMG IDL establishes a contract
between client and server that describes the types and object interfaces used by
an application. This description is independent of the implementation language,
so it does not matter whether the client is written in the same language as the
server. IDL definitions are compiled for a particular implementation language by
an IDL compiler. The compiler translates the language-independent definitions
into language-specific type definitions and APIs (Application Program Interfaces).
These type definitions and APIs are used by the developer to provide application
functionality and to interact with the ORB.
The translation algorithms for various implementation languages are specified
by CORBA and are known as language mappings. CORBA defines a number
of language mappings including those for C++, Ada, and Java (along with many
others). An IDL compiler produces source files that must be combined with application code to produce client and server executables. Details, such as the names
and numbers of generated source files, vary from ORB to ORB. However, the
concepts are the same for all ORBs and implementation languages. The outcome
of the development process is a client executable and a server executable.
3.3.4 Unified Modeling Language (UML)
UML is a standard modelling language for writing software blueprints. By using
UML, system builders can create models that capture their visions in a standard,
easily understandable way and communicate them to others. It may be used to
visualise, specify, construct and document software systems. The UML is more
than just a graphical language. Rather, behind every part of its graphical notation
there is a specification that provides a textual statement of the syntax and semantics of that building block. For example, behind a class icon is a specification that
provides the full set of attributes, operations (including their full signatures), and
behaviours that the class embodies; visually, that class icon might only show a
small part of this specification.
UML diagrams are used in numerous ways within the SCA; however, the focus is on two of them: specifying models from which an executable system is
constructed (forward engineering) and reconstructing models from parts of an executable system (reverse engineering).
3.3.5 Extensible Markup Language (XML)
XML is a markup language designed specifically for delivering information over
the World Wide Web. XML's definition consists of only a bare-bones syntax. When
creating an XML document, rather than using a limited set of predefined elements,
the author can create new elements and assign any names to them, hence the
term extensible. Therefore, XML can be used to describe virtually any type of
document, from a musical score to a reconfigurable digital radio.
XML is used within the SCA to define a profile for the domain in which waveform applications can be managed. For the SCA, the extensibility of XML is limited
to the SCA-defined Document Type Definitions (DTDs). A DTD provides a list of
the elements, attributes, notations, and entities contained in a document, as well as
their relationship to one another. DTDs specify a set of rules for the structure of a
document. The DTD defines exactly what is allowed to appear inside a document.
Figure 3.3: Structure of the software architecture of the SCA [11]
3.4 Software Communications Architecture (SCA)
The Software Communications Architecture is the software architecture developed
by the US military Joint Tactical Radio System (JTRS) Joint Program Office
(JPO) for the next generation of military radio systems [18]. Currently, various
companies are developing radio systems based on this architecture. It is considered the de facto standard in the SDR industry.
The SCA is not a system specification, and it is intended to be implementation
independent [11]. Instead, it is a set of rules for the development of SCA compliant SDR systems. The SCA is an open framework that enables the management and
interconnection of software resources in an embedded distributed computing environment. It is targeted to support commercial components and interfaces. The key
element of the SCA is the Operating Environment (OE), which consists of the Core
Framework (CF) and commercial off-the-shelf infrastructure software (POSIX operating system, CORBA middleware services etc.). The CF is the core set of open
application layer interfaces and services that application developers need for the
abstraction of the underlying software and hardware components in the
systems. Waveforms are applications and therefore they are not specified by the
SCA. Likewise, external networking protocols are part of waveform applications
and are thereby also excluded from the SCA specification.
The structure of the software architecture of the SCA is shown in Figure 3.3.
It can be seen from the figure that the SCA follows the architecture described in
the previous section. Object oriented technology is also used for hardware.
The class structure of the SCA hardware is shown in Figure 3.4. The specialised
hardware supplement to the SCA specifies Hardware Abstraction Layer Connectivity (HAL-C) for non-CORBA compliant hardware [22]. Especially high bit-rate
waveforms need specialised hardware.

Figure 3.4: Hardware Class Structure of the SCA [11]
The SCA has also been designed to meet commercial requirements, in addition
to military needs, and it is expected to become a standard. Standardisation is the
key to the acceptance of a technology, and therefore the JTRS program is cooperating
with the SDR Forum [15] and the OMG [10]. The SDR Forum is a non-profit
organisation dedicated to promoting the development and deployment of
technologies related to SDRs. It has been involved in the development of the SCA,
in order to ensure conformance with commercial requirements, such as avoiding
the overhead caused by military requirements. The SDR Forum is not a standardisation organisation. Therefore, the SCA has been passed to a formal specification
body, i.e. the OMG. Standards organisations maintain liaison relationships with
the OMG.
On the commercial side, one drawback of the architecture is the lack of proper
CORBA support on some of the most common FPGAs and DSPs [25]. However,
there are projects addressing this issue as well [26].
3.4.1 Application Layer
User communication functions including digital signal processing in the modem,
link-level protocol processing, network-level protocol processing, routing, external I/O access, security, and embedded utilities are performed by Applications
[11]. Applications themselves are not defined by the SCA, except for how they interface to the OE.
Applications are required to use the CF interfaces and services. Direct access to the Operating System is allowed only to the services specified in the SCA
POSIX Profile. Networking functionality, e.g. the IP network layer, may also be implemented below the application layer. In that case, the functionality is not limited to the profile since it is located in kernel space.
Applications consist of Resources and use Devices. Devices are types of Resources that are used as software proxies for actual hardware devices. ModemDevice, LinkResource, SecurityDevice, I/ODevice and NetworkResource are interface extensions of the CF. They implement APIs for waveform and networking
applications. They conform to the functional entities of the SCA Software Reference Model that is based on the PMCS model.
3.4.2 Waveform Development
The API Supplement [22] contains requirements for the development of APIs.
Waveform APIs are located at such interfaces that provide the widest portability.
A common API for all waveforms would be too complicated and large for domains
with limited resources. Thus, building blocks have been defined for constructing the specific APIs [23].
Implementing an SCA-compliant waveform follows defined steps. The SCA
Developer’s Guide outlines the process as a checklist [12]:
1. Identify functionality to be provided by the waveform software
2. Determine which API Service Groups are needed
3. Determine what services are needed beyond the API Service Groups
4. Build UML model of interface
5. Generate IDL from UML model of interface
6. Translate IDL into language-appropriate implementation files
7. Compile code generated in step 6
8. Reverse engineer UML model from language-specific implementation files
(optional)
9. Build UML model of waveform software
10. Generate language-appropriate template files for servant and user software
11. Write servant and user software
12. Write XML for each component
13. Build User Interface (optional)
14. Integrate software and hardware
15. Test resultant application
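Steps 6 to 11 of the checklist can be loosely illustrated in isolation. Real SCA development generates language skeletons with an IDL compiler and implements CORBA servants; the minimal classes below only mimic the division between generated skeleton, hand-written servant and user code, and every name in them is invented for illustration.

```python
# --- what an IDL compiler might generate from the interface model (step 6) ---
class WaveformControlSkeleton:
    """Illustrative skeleton for an interface declared in IDL."""
    def configure(self, properties):
        raise NotImplementedError

# --- hand-written servant code implementing the skeleton (step 11) ---
class FMWaveformServant(WaveformControlSkeleton):
    def __init__(self):
        self.properties = {}
    def configure(self, properties):
        # In a real radio this would reconfigure the signal processing chain.
        self.properties.update(properties)

# --- hand-written user code calling the servant (step 11) ---
servant = FMWaveformServant()
servant.configure({"deviation_hz": 5000})
print(servant.properties["deviation_hz"])  # prints 5000
```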
3.4.3 SCA Reference Implementation (SCARI)
Interpretations of specifications can easily limit interoperability between implementations. Therefore, it was useful to develop a reference implementation of the
SCA specifications for clarifying the technical aspects. The reference implementation aims to reduce the level of ambiguity of the SCA specification, to increase
the potential for interoperability, to support understanding of the architecture and
to stimulate the emergence of SDRs by reducing the cost and development time
[14].
The Military Satellite Communications Research (RMSC) group [14] of the
Communications Research Centre (CRC) was contracted by the SDR Forum to
develop an open source reference implementation of the SCA. Thus, the RMSC
produced an open implementation of the SCA version 2.1. The available open
source implementation is written in Java.
The reference implementation provides the mandatory components of the SCA core framework, as well as the most commonly used other features, e.g. the Core Framework with the XML Domain Profile, the tools needed to operate the radio and simple demonstration waveform applications.
3.5 SWRadio
The SWRadio is a specification of radio infrastructure facilities. The SWRadio
promotes portability of waveforms across SDRs [18]. The SCA has been used as
a basis for OMG’s work on the SWRadio. The SWRadio specification uses the
OMG’s Model Driven Architecture.
The specification supports an approach where the SWRadio platform provides
a standardised, extensible set of software services that abstracts the hardware and supports applications, such as waveforms and management applications. The specification defines a set of platform-independent interfaces, so applications can be developed and ported onto various implementations. This approach opens the possibility of a market where waveforms are produced independently of platforms and their providers.
The SWRadio specification is partitioned into three main chapters: the UML profile for SWRadio, the PIM, and the PSM for the CORBA IDL.
A language for modelling SWRadio elements is defined in the UML profile for
SWRadio by extending the UML language with radio domain specific definitions.
The PIM provides a behavioural model of an SWRadio system, standardised APIs and example component definitions that realise the interfaces. The PIM specification is independent of the underlying middleware technology. For
modelling a software radio system defined in the PIM, UML and its extensions
provided by the UML profile for SWRadio are used. The SWRadio specification
also provides a mechanism for transforming the elements of the PIM model into
the platform specific model for the CORBA IDL.
Figure 3.5: SWRadio Layers [18]
3.5.1 SWRadio Platform
The SWRadio Platform consists of several layers, as shown in Figure 3.5. The
layers are [18]:
• Hardware layer that is a set of heterogeneous hardware resources including
both general purpose devices and specialised devices
• Operating Environment layer that provides operating system and middleware services
• Facilities layer that provides sets of services to the application developer
• Application layer that represents the stand-alone capabilities of the radio
set.
The SWRadio Platform supports three types of applications: waveform applications, which are the main focus; management applications; and other applications, such as network and end-user applications.
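The layering described above can be sketched as a chain of objects, each using only the services of the layer below it. Only the four layer names come from the specification; every class, method and the toy BPSK mapping below are invented for illustration.

```python
# Minimal sketch of the four SWRadio platform layers as a call chain.

class HardwareLayer:
    def transmit(self, samples):
        return f"RF out: {len(samples)} samples"

class OperatingEnvironment:
    """Stands in for operating system and middleware services."""
    def __init__(self, hw):
        self.hw = hw
    def send(self, samples):
        return self.hw.transmit(samples)

class Facilities:
    """Radio services offered to the application developer."""
    def __init__(self, oe):
        self.oe = oe
    def modulate_and_send(self, bits):
        samples = [1.0 if b else -1.0 for b in bits]  # toy BPSK mapping
        return self.oe.send(samples)

class WaveformApplication:
    def __init__(self, facilities):
        self.facilities = facilities
    def run(self, bits):
        return self.facilities.modulate_and_send(bits)

app = WaveformApplication(Facilities(OperatingEnvironment(HardwareLayer())))
print(app.run([1, 0, 1]))  # prints "RF out: 3 samples"
```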
3.5.2 SWRadio Architecture
In the SWRadio Architecture, there are two main concepts: services and waveform layering. Services depend on the provided interfaces. A component can offer
one or more services through realisation relationships in the platform-independent
model. SWRadio vendors may provide services that are required for their platform
or they can acquire services from third party vendors. For waveform functionality
grouping, the specification follows the Open System Interconnection (OSI) model
(ISO IS 7498) of the International Organization for Standardization, which specifies that the
communications functions should be structured into a stack of seven layers.
The use of reconfigurable components through standard interfaces and well
defined modules is encouraged by the approach. The specification uses an extended OSI model, which allows Management and QoS interfaces to communicate with any layer. The focus of the SWRadio architecture is only on the physical and link layers.
3.6 Summary
In general, multiple aspects, such as compatibility, reliability, portability and ease
of development, call for standardisation. In the context of radio systems, the multitude of air interface standards has resulted in a need for interoperable, reconfigurable systems. Different applications and systems need different air interface
modes and therefore reconfigurability is the only feasible solution to support a
great number of standards with a single radio set. There are also other emerging motives for reconfigurability, e.g. context aware services. For the hardware
required by these reconfigurable radio systems, there are standards, which were
discussed in this chapter.
A detailed architecture defined for the processing platform of SDRs would
lead to portability problems. Therefore, the focus has been on defining a common middleware that provides abstraction of the software and hardware platforms
and thus endorses portability and modularity. The SCA and the SWRadio are
open SDR framework architectures that make extensive use of object oriented
techniques, i.e. the middleware. They are the key elements leading to SDR standardisation.
The next chapter focuses on the research projects related to the SDRs. Early
projects have proven the viability of the SDR concept and there are projects in
progress, which aim to bring the SDR concept into mainstream radio architectures
by using the industry standard components discussed in this chapter.
4 Software Defined Radio Projects
This chapter reviews the historical evolution of SDR architectures and the current state-of-the-art SDRs by presenting a few of the most influential SDR projects. Section 4.1 presents the SPEAKeasy program that proved
the potential of the SDR concept for military radios. Section 4.2 discusses the
ongoing JTRS program, which will replace the hardware-intensive military radios
with the more flexible, interoperable SDRs [22]. The program is also developing
an open architecture framework for SDRs, i.e. the SCA. Section 4.3 presents a
few other projects that are either associated to research on SDR related topics or
to the development of SDR sets. Section 4.4 summarises the chapter.
4.1 SPEAKeasy
The SPEAKeasy was a US Department of Defence program whose aim was, in
cooperation with industry, to prove the concept of a multi-band, multi-mode software-programmable radio operating from 2 MHz to 2 GHz [20]. It was intended to be able to operate with multiple military radios by employing waveforms that can be selected from memory, or downloaded from external storage or over-the-air (OTA) [19]. The SPEAKeasy was designed as a totally open architecture that
can provide secure connections, interoperability and programmability. The benefits of the architecture include seamless connection of various radios and bridging
between different systems. Military applications include tactical radio systems
as well as voice and data communications to aircraft and on the battlefield. Civilian applications also exist: emergency communications, law enforcement radio
communications and public safety.
The SPEAKeasy program evolved from the earlier technologies of the Air
Force, i.e. the Tactical Anti-Jam Programmable Signal Processor (TAJPSP), initiated in 1989, and the Integrated Communications, Navigation, Identification and Avionics (ICNIA) system from the late 1970s, which was one of the first systems to use a digital programmable modem [20].
4.1.1 SPEAKeasy Phase I
The main goal of the SPEAKeasy Phase I (1992-1995) was to develop a reconfigurable modem with an open architecture and demonstrate its feasibility. The
objectives were to prove the potential of the SDR to solve interoperability issues and problems related to product lifecycle shortening, due to rapidly evolving
technologies [2, 20]. To achieve this, the addition of new technology had to be
simplified. Related to this, an objective was to form a software architecture that
would support the addition of new waveforms.
The wide bandwidth was divided into three sub-bands with independent RF
channel components feeding the same ADCs, which was an important concept in
the sense that it became a standard procedure used in many SDR projects. Only the midband, 30 MHz to 400 MHz, was implemented in the feasibility demonstration [20].
Figure 4.1: SPEAKeasy Phase I Architecture [21]
The Phase 1 design included RF up- and down-converters with wide bandwidth, high-speed, high dynamic range ADCs, four 40 MHz Texas Instruments C40 digital signal processors and a 40 MHz RISC information security
(INFOSEC) module called CYPRIS [20]. The INFOSEC module included both
communication security (COMSEC) and transmission security (TRANSEC). The
term COMSEC denotes the functionality for encryption of message data, whereas
the term TRANSEC denotes the support for modulation functionality designed to
protect transmissions from interception, e.g. by frequency hopping. CYPRIS was
programmable and the cryptographic algorithms were implemented in software
(however, in [2], it is mentioned that CYPRIS was not actually used until Phase
2).
The hardware was built into a VME chassis. The VME bus was used for control and there was a specialised high-speed bus for data. A Sun SPARC workstation was used as a part of the user interface. The SPEAKeasy Phase I architecture
is shown in Figure 4.1.
The wide frequency range was divided into three sub-bands with different
analog radio parts. Wideband waveforms would have needed more processing
power than the Phase 1 equipment had, i.e. FFT (e.g. on ASICs) would have
been needed [20]. The generic narrowband waveform support of the SPEAKeasy
Phase I included the following modulation methods: support of non-hopped and
hopped amplitude modulation from 50 Hz to 20 kHz (DSB, USB-SC, LSB-SC),
non-hopped amplitude modulation (ASK, CW), frequency modulation (FM, FSK
with 2-8 tones), phase modulation (MPSK, hopped and non-hopped DPSK and
QDPSK, OQPSK), and 4, 16, 64 and 256 QAM. Data rates up to 20 kbps are supported. For digital modulations, the supported error detection and correction
methods are (16, 7) and (31, 15) Reed-Solomon and convolutional codes of K=7,
R=1/2 and T=133 or 171 [19].
The Phase 1 system was first demonstrated in August 1994 to operate with
HAVE QUICK, HF modem, automatic link establishment and SINCGARS [19]. Simultaneous frequency hopping transmission on HAVE QUICK and SINCGARS
as well as bridging networks that use these waveforms were also demonstrated.
Programmability was also shown by modifying a waveform on two units. At
the JWID-95 interoperability demonstration the system was demonstrated on the air [20]. The Phase 1 modem and software performed well, but lack of ease of use
remained a disadvantage.
4.1.2 SPEAKeasy Phase II
The most important objective of the SPEAKeasy Phase II was to extend the operational scope from the modem to an open, modular and reconfigurable architecture
for the whole radio. To make the architecture cost-effective, commercial standards
and commercial off-the-shelf components were chosen. The capabilities were supposed to include reprogrammable security, a wideband modem and continuous RF
coverage up to 2 GHz [20].
Motorola, the main contractor of the Phase 2, designed a wideband RF transceiver, which reduced distortion caused by IF processing by using a homodyne design [20]. The signal processing hardware consisted of C40 DSPs supported by FPGAs. A commercial palmtop computer with Windows 95 was used
as the user interface. The SPEAKeasy Phase II architecture is shown in Figure
4.2.
One of the challenges of the Phase 2 was to increase the number of simultaneous conversations, which required quicker reconfiguration of the INFOSEC
module in order to allow fast context switching [20]. The initially used CYPRIS
INFOSEC module had to use context switching between data encryption and generation of hop sequences for transmission security. However, advanced waveforms cannot tolerate the long switching delays. The Advanced INFOSEC Module (AIM), which was designed in order to overcome the problem, consisted of
three 100 MHz, 32-bit RISC processors [20]. As shown in Figure 4.2, resources
are attached to either the red (unencrypted) or the black (encrypted) PCI bus,
which is a typical requirement for military radios. The buses are separate and the
crypto processors (CP) of the INFOSEC services provide the inter-bus communication. The CP is required for each active channel, while the key processor (KP)
of the INFOSEC module is needed only when the channel is set up [20].
The RF subsystem of the Phase 2 architecture could transmit and receive multiple channels simultaneously and in the modem subsystem, the parameters could
be changed and the channels could be reallocated without interrupting the operation of concurrently established channels [20].
Figure 4.2: SPEAKeasy Phase II Architecture [21]
The software architecture defined modules, which include RF Control, Modem Control, Waveform Processing etc. This was a significant distinction from the Phase 1 architecture, which was based on functional flows lacking true modularity [2]. The modules communicated over the bus by using a layered protocol
[20] asynchronously without a centralised operating system [2]. The implementation units used the PCI bus. The bus formed the lowest layer of the protocol stack,
i.e. the physical layer. There were three software layers: link layer, communications layer and application layer [20]. The communications layer used the lower
layers for message passing; the communications layer itself detected the installed resources, established the links and performed the queueing and buffering of the data. The application layer contained the waveform software that used
the APIs of the lower layer.
The Phase 2 was planned to be a four-year project with model-year development versions. Enhanced model-year-1 units were field demonstrated at the Army's
TX-XX-AWE experiment in 1997. They managed to accomplish bridging aircraft HAVE QUICK UHF to Army SINCGARS VHF radios and hand-held LMR
[20]. The waveform for LMR compatibility was developed in less than 14 days
and it was downloaded to the SPEAKeasy units during the demonstration from a
distant laboratory.
The model-year-1 proved to be so successful that it went into production and
the Phase 2 had no chance to continue with further research. Therefore, some of the goals remained unaccomplished. The model-year-1 units did not include the
support for the full RF range, wideband waveforms, data gateways and networking [20]. The production units were limited to 20 - 400 MHz and only a few
waveforms were implemented. The speed of the cryptographic processor limited simultaneous connections because there was no opportunity to implement
the AIM. The INFOSEC module should be able to support multiple simultaneous
COMSEC and TRANSEC functions and handle the context switching at different
rates. That remained a problem [20].
4.2 Joint Tactical Radio System (JTRS)
The Joint Tactical Radio System is a family of military software radios. They
are modular, multi-band and multi-mode networked radio systems. Examples
of implementations of the JTRS for different purposes include the Navy Digital
Modular Radio (DMR) [24], WITS [23] by Motorola and the SDR-3000 [27] by
Spectrum Signal Processing Inc. and the NRL Software Radio [28], which is an
outgrowth of the JCIT. There is a group of specified domains, e.g. the hand-held
domain and fixed maritime domain, that have different needs. However, the JTRS
architecture ensures interoperability across radios designed for different domains.
The JTRS program is a process consisting of three steps that aim to define,
standardise and implement an architecture for software defined radios. The result
of step 1 was the definition of the base architecture. Step 2 refined the baseline
architecture to the SCA, which will be the basis for future military radios [2]. The
SCA has also been used as a starting point for standardisation process of commercial SDRs, as described in the previous chapter. The next subsection describes the
first two phases, whereas the last two subsections discuss two already deployed
product families.
4.2.1 Background
The Programmable Modular Communications System (PMCS) team suggested
that the US Department of Defence should replace old discrete radio systems with
a single SDR family [25]. The PMCS research program was a successor of the
SPEAKeasy program. By using the knowledge and technology gained from the
SPEAKeasy as a basis, the PMCS program developed a reference model [2]. The
JTRS Joint Program Office (JTRS JPO) is a successor of the PMCS [25]. The
reference model of the PMCS was also adopted by the SDR Forum.
Three consortia led by Raytheon, Boeing and Motorola were contracted
to make initial proposals [2]. The Modular Software Defined Radio Consortium
(MRSC), composed of Raytheon, BAE Systems, ITT Industries and Rockwell-Collins, was contracted in 1999 to develop the JTRS SCA [23]. The MRSC contract integrated validation and development processes of the architecture. Seven
other contracts were awarded to other companies, in order to have third-party validation and thus reduce risk. Each of the MRSC members provided a prototype for validation. For instance, the Raytheon prototype was a 4-channel radio
containing a 2 MHz - 2 GHz RF front end and an implementation of the SCA CF.
A set of waveforms was provided, e.g. VHF-AM, VHF-FM, VHF-ATC, HQ I/II,
UHF DAMA/DASA and HF-ALE ported from the Rockwell prototype.
The contract of Assurance Technology Corporation consisted of developing a
SINCGARS/ESIP waveform and a test suite. The Boeing contract included developing an open-source implementation of OE and CF requirements, validation of an
SCA 2.0 compliant CF on a testbed, and integration studies. Harris was contracted
to build a manpack domain radio and to develop a compliant CF. Motorola evaluated the SCA on their existing WITS and DMR product lines. Rockwell-Collins
validated critical timing issues of the Link-16 waveform. Thales was contracted to
evaluate impact of the SCA on military hand-held radios and to build a compliant
prototype. Vanu Inc. was contracted to evaluate the technology and variants
of the SCA for handhelds.
4.2.2 Architecture
The JTRS program has focused on the common infrastructure software, i.e. the
middleware, instead of a detailed architecture. There were two reasons for this
decision: Firstly, in the SPEAKeasy radios, the infrastructure code comprised one
third of the whole software. Secondly, industry pointed out that portability of
components requires interfaces between radio entities and the platform [25]. The
architecture had to be clearly defined yet flexible, in order to provide extendibility
to new waveforms and hardware by rapid insertion of technology. Thus, the SCA
is the core of the JTRS architecture.
Modular design of both software and hardware allows easy upgrades and replacement of components. Legacy waveforms and new waveforms, like the Wideband Networking Waveform, are implemented in software [22]. The waveform
software is supposed to be common for all implementations in order to ensure interoperability. The latest document of operational requirements includes 33 waveforms that each JTRS implementation should support [23]. The capabilities of the
JTRS are evolutionary in the sense that they can be increased along with technological advancements or when funding allows it.
4.2.3 Wireless Information Transfer System (WITS)
The WITS is Motorola’s JTRS compliant radio based on SDR Forum’s architecture [2]. The architecture of WITS has been built on the Motorola’s long-term
experience on SDRs, from the SPEAKeasy, DMR and JTRS programs and the involvement in the SDR Forum. The WITS-based systems are used by the US Navy
and the product line will also expand to the commercial market. The products
available in 2002 were two- and four-channel radios that could be linked together
to form a system of 128 independent channels with -108 dBm of sensitivity [2].
The architecture is an instantiation of the JTRS. The software architecture is
based on the SCA, i.e. it is layered and modular. The lowest layer is related
to the abstraction of the hardware modules. The physical entities are mapped
into the hardware modules defined by the architecture, with the exception of the
antenna and amplifiers, which are specified as external devices. The implementation of hardware is mostly composed of Line Replaceable Units (LRUs) that are
connected through a set of Compact PCI (cPCI) buses. The current LRUs include
transceivers, pre-selectors, modems, networking units and INFOSEC modules [2].
Most of the processing units consist of a combination of DSPs and ASICs. The
ASICs are mainly used for wired communication and RF processing. The ORB
supports sharing of the processing capacity of the DSPs located around the system. The INFOSEC module is Motorola's AIM, which was described in the
SPEAKeasy section.
Each module, including LRUs and internal ones, has to implement POSIX
APIs, which are used for interfacing with higher layers of the architecture [2].
The existing waveform software does not need any modifications, when a new
piece of hardware is added since all elements have to be POSIX-compliant. The
available RF units, which use direct down-conversion, do not support high data
rates, but for the present military applications, the WITS is very suitable and the
capabilities can be expanded [2].
4.2.4 SDR-3000
The SDR-3000 software defined radio transceiver platform is a product family
designed for implementing dynamically reconfigurable, high-performance, cPCI-based SDRs [27]. Hundreds of simultaneous transmit and receive channels with
independent air interfaces are supported. An optional SCA CF is available, but the
platform is not exclusively a JTRS implementation. The US DoD chose the SDR-3000
development platform, including Version 2.2 of SCA CF developed by Harris
Corporation, as a commercially available JTRS representative hardware set for
waveform development [27].
In addition to the waveforms required for JTRS, the SDR-3000 platform is
able to support various mobile cellular air interface standards, including multiple
2nd generation systems and WCDMA [2]. The SDR-3000 consists of a selection
of modules, such as the PRO-3100 software defined I/O (SDIO) module, the PRO-3500 baseband engine and the TM1-3300 transition module. The SDIO processor
module contains four Xilinx Virtex II FPGAs. The baseband processing module
includes two PowerPC G4 processors for modem and coding as well as expansion
slots for additional processing capacity. Differential signalling in the I/O bus at
2.4 GB/s is used for achieving the peak rate of the transition module, which is the
A/D and D/A interface within a suitable range for most standard IF frequencies.
It is designed to be capable of achieving very high SNR performance [27].
4.3 Other SDR Projects
The JCIT is another military SDR, whereas the CHARIOT and SpectrumWare
were academic projects, although funded by DARPA. European projects as well
as the GNU Radio are also discussed in the following subsections.
4.3.1 Joint Combat Information Terminal (JCIT)
The JCIT is a multi-band multi-mode SDR developed by the US Navy Research
Laboratory (NRL) [4]. The JCIT was designed as a deployable product for Army avionics, operating in frequency bands from HF up to 2.5 GHz. The focus of the design was on the hardware capacity, i.e. the extensive use of FPGAs and DSPs.
The radio sets include a large number of processing elements. A wide variety
of modulation formats and standards are supported [4, 36]. The JCIT program
has also made a significant contribution to the SDR Forum’s architecture, i.e. the
domain manager, which loads waveform objects onto the resources of the system
[4].
4.3.2 CHARIOT
The CHARIOT (Changeable Advanced Radio for Inter-Operable Telecommunications) was designed at Virginia Tech during the DARPA GloMo programs [2].
The Virginia Tech’s contribution to the program involved several wireless technologies. The CHARIOT is most closely related to the mobile and hand-held
domain. This domain is especially challenging from the point of view of digital
signal processing capacity, when high data rates are needed. The CHARIOT’s approach to this issue consists of a formalised structure for implementing an SDR,
using reconfigurable hardware [2]. This was challenging since the reconfiguration
time of FPGAs is prohibitively long [2] and an implementation using only DSP processors would be large and expensive [29]. Additionally, the power consumption
is an important factor in this domain. There are three new ideas used in the CHARIOT in order to solve the problems: configurable computing machines, hardware
paging and stream-based processing. These techniques enabled small hand-held
devices to maintain the reconfigurability while providing enough processing capacity for high data rates.
The focus was on the formalised architecture that allowed the use of dynamically reconfigurable hardware in SDRs [2]. The architecture was designed to be
scalable and flexible by using a layered model. The Layered Radio Architecture
for interfacing comprises three layers: the Soft Radio Interface (SRI), the Configuration Layer and the Processing Layer. The SRI Layer handles the external
interfaces as well as control of the Configuration Layer. It also decides which
waveform should be used. The Configuration Layer handles the setup of data
and algorithm programming flows, and provides status information to the upper
layer. This layer also sends command messages, which together form the algorithm to be performed, to the Processing Layer. The Processing Layer performs
the actual computations based on the received commands.
The stream-based processing is used for communication between the layers
and in the Processing Layer. Data and programming information streams consist of packets delivered through the same paths and interfaces. This architecture
lends itself to pipelining [2]. Algorithms are divided into modules, which should
be designed in such a way that they can perform their operation independently.
By using the stream-based processing approach, new processing modules can be
easily added, i.e. the system is scalable. This approach also simplifies interface design.
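The stream-based processing described above can be sketched as a chain of independent modules through which both data and control packets travel. All details below are invented to illustrate the idea; they are not taken from the CHARIOT design.

```python
# Toy sketch of stream-based processing: packets flow through a chain of
# independent modules over the same paths and interfaces.

class Module:
    def __init__(self, name, fn):
        self.name, self.fn, self.next = name, fn, None
    def push(self, packet):
        if packet["kind"] == "data":
            packet = {"kind": "data", "payload": self.fn(packet["payload"])}
        # Control packets pass through untouched; in a fuller sketch they
        # could retune self.fn on the way.
        if self.next:
            return self.next.push(packet)
        return packet

# Build a two-stage pipeline: scale the samples, then add an offset.
scale = Module("scale", lambda x: [2 * v for v in x])
offset = Module("offset", lambda x: [v + 1 for v in x])
scale.next = offset

out = scale.push({"kind": "data", "payload": [1, 2, 3]})
print(out["payload"])  # prints [3, 5, 7]
```

Because each module only sees packets on its input, new stages can be spliced into the chain without changing the others, which is the scalability property the text describes.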
The stream concept can be used at multiple levels: a super stream handles the
layers as modules. Control packets change the operation of modules. In order
to improve the processing capacity per physical area, the CCMs were selected to
be used in the modules for the Processing Layer [2]. A CCM is a customised
FPGA that uses a higher level of abstraction, i.e. higher granularity. They use
conventional DSP blocks that may be linked and controlled in order to implement
algorithms. The CCM developed by Virginia Tech is called STALLION. The
STALLION consists of independent reconfigurable functional units. In the run-time reconfiguration concept, the leading packet of a stream is used to reconfigure
the unit at the head of the stream. This leads to fast distributed reconfiguration
since the streams control the flow independently [2].
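The idea of a leading packet that reconfigures the unit at the head of the stream can be shown with a small sketch. This is only an invented illustration of the concept, not the STALLION design: the "configuration packet" here is simply a function carried in-band ahead of the data.

```python
# Sketch of in-band run-time reconfiguration: the first element of a stream
# reconfigures the functional unit before the data elements flow through it.

class FunctionalUnit:
    def __init__(self):
        self.fn = lambda x: x          # pass-through until configured

    def process_stream(self, stream):
        config, *data = stream         # leading packet carries the new config
        self.fn = config               # reconfigure, then process the data
        return [self.fn(x) for x in data]

unit = FunctionalUnit()
# Stream 1 configures the unit as a doubler; stream 2 as a squarer.
print(unit.process_stream([lambda x: 2 * x, 1, 2, 3]))   # prints [2, 4, 6]
print(unit.process_stream([lambda x: x * x, 1, 2, 3]))   # prints [1, 4, 9]
```

Since each stream carries its own configuration, no central controller is needed; the reconfiguration is distributed along the data paths, mirroring the fast distributed reconfiguration described above.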
4.3.3 SpectrumWare
The SpectrumWare project at MIT utilised the constantly advancing performance
of general purpose processors. An advantage of this processing platform is that
the radio subsystem and the applications use the same hardware and operating
system, which simplifies programming [2]. The development environment, i.e.
a UNIX OS, is widely known and mature. The core of the system consists of
the radio algorithms implemented on a workstation. The I/O between an external
wideband tuner and the workstation was a problem that had to be solved.
In a typical non-real-time operating system, user-space applications cannot perform even near real-time processing using I/O devices. There are many factors
that make the data transfer delays unpredictable. The SpectrumWare system uses
a modified UNIX OS and DMA-transfers pass data to buffers in kernel-space [2].
The buffers are mapped to user-space by using a virtual memory approach. The
variable delays and low capacity of the standard PCI bus resulted in a need to design
a dedicated I/O solution, i.e. the General Purpose PCI I/O (GuPPI). The GuPPI
buffers data between a daughtercard and the workstation, thus relaxing the timing
issues. The transfers are performed using blocks of data.
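The memory-mapping idea behind this buffering scheme can be approximated entirely in user space. The sketch below is only an analogy for the concept, not the actual GuPPI driver: a producer (standing in for the DMA engine) fills a mapped region, and a consumer reads the same memory through a `memoryview` rather than copying it through a pipe.

```python
# Rough user-space analogy of mapping a kernel DMA buffer into an application.

import mmap

BLOCK = 4096
buf = mmap.mmap(-1, BLOCK)    # anonymous mapping, stand-in for a kernel buffer

# "DMA transfer": the producer writes a block of samples into the buffer.
samples = bytes(range(16))
buf.seek(0)
buf.write(samples)

# "User-space processing": the consumer sees the same memory through a view,
# without an intermediate copy step.
view = memoryview(buf)[:len(samples)]
print(bytes(view) == samples)  # prints True
```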
The Signal Processing Environment for Continuous Real-Time Applications
(SPECtRA) was implemented to allow rapid development of reusable real-time
radio software [2]. The SPECtRA consists of a library of signal processing modules, a set of interface modules and a scripting language for defining SDRs. It
supports several adaptation methods based on environment and user needs. One
of the innovations was to pull data in the processing flow instead of pushing. This
makes multi-rate processing easier and decreases redundant processing.
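The pull model can be sketched with Python generators: the sink requests samples, and each stage pulls from upstream only what it needs to satisfy the request. Everything here is invented for illustration and is not the SPECtRA API.

```python
# Pull-based processing chain: downstream stages drive the data flow.

def source():
    n = 0
    while True:           # unbounded sample source
        yield n
        n += 1

def decimate(upstream, factor):
    while True:
        # Pull `factor` samples from upstream for every one sample produced.
        block = [next(upstream) for _ in range(factor)]
        yield block[0]

src = source()
dec = decimate(src, 4)
out = [next(dec) for _ in range(3)]  # the sink pulls exactly 3 samples
print(out)                            # prints [0, 4, 8]
```

Because the sink drives the chain, a decimating stage never forces upstream stages to compute samples that would be thrown away, which is how pulling eases multi-rate processing and avoids redundant work.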
An experimental system, which implemented a GSM base-station, was built.
In 1998, the project team left to start Vanu Inc. [3]. Vanu has built various software implementations of waveforms, e.g. cellular standards. The signal processing software is mostly written in a high-level language.
SDRs were recently recognised by the FCC as a new category of radios. The
Vanu Software Radio GSM Base Station by Vanu Inc. was the first SDR device
to fulfil the FCC’s certification process [30]. This is a positive sign for the future
of the SDR concept since regulatory issues have been seen as one of the key
challenges [4].
4.3.4 European Perspective: ACTS and IST Projects
In the context of the ACTS (Advanced Communications Technologies and Services) programme, the European Union has funded several R&D projects related
to SDR [33, 32]. Figure 4.3 from [33] shows the coverage and focus of the
Figure 4.3: ACTS Projects [33]
projects. Coverage has been minimal in some areas within the ACTS programme, namely network and spectrum issues and business models.
Currently the work continues in the scope of the IST (Information Society Technologies) programme within the EU's Sixth Framework Programme (2002–2006). The TRUST project adopted a user-centric perspective by examining the
user requirements in order to identify what is needed to support reconfigurable
radios. The SCOUT project is continuing the research initiated in TRUST on
reconfigurable terminal and network architectures. The research areas of SCOUT
include a number of technical, regulatory and business issues [35].
4.3.5 GNU Radio
GNU Radio is a software project that, combined with minimal hardware, can be used for building radios whose waveform processing is defined in software [37]. The purpose of GNU Radio is to implement the signal processing needed by different radios in free software, in order to give software developers easy access to the electromagnetic spectrum so that they can understand it and develop new ways to utilise it. Like any other SDR project, GNU Radio offers reconfigurability that ordinary hardware radios lack. Currently only a couple of types of radio have been implemented, but given adequate understanding of a radio system, GNU Radio can be programmed to support it.
GNU Radio consists of Gnu Radio Software and Gnu Radio Hardware. The
hardware required for building a receiver is composed of an RF Front End and
an Analog to Digital Converter. There are no actual specifications for the hardware; any suitable components can be used. For low bandwidths, the ADC can be a PC sound card, and there is also a project for dedicated hardware, called the
Universal Software Radio Peripheral (USRP). The basic architecture of the USRP
is shown in Figure 4.4. The FPGA is used for performing the most intensive
processing, i.e. it contains up and down converters, decimators and interpolators,
in order to reduce the bit rate to a level suitable for the USB2 connection.
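A back-of-envelope calculation shows why the FPGA must decimate before the USB2 link. The figures below (a 64 MS/s ADC stream, 16-bit I and Q samples, and roughly 32 MB/s of usable USB 2.0 throughput) are assumptions for illustration, not the USRP's exact specification.

```python
# Why the FPGA decimates: the raw digitised stream far exceeds what the
# USB 2.0 connection can carry. All figures below are assumed for
# illustration, not taken from the USRP specification.

adc_rate = 64e6          # ADC output, samples per second (assumed)
bytes_per_sample = 4     # 16-bit I + 16-bit Q after the digital downconverter
usb2_usable = 32e6       # usable USB 2.0 throughput, bytes per second (assumed)

raw_rate = adc_rate * bytes_per_sample       # bytes/s before decimation
min_decimation = raw_rate / usb2_usable

print(f"raw rate: {raw_rate / 1e6:.0f} MB/s")              # 256 MB/s
print(f"minimum decimation factor: {min_decimation:.0f}")  # 8
```

Under these assumptions, the on-board decimators must reduce the rate by at least a factor of eight before the samples can cross the bus; narrower signals of interest allow much larger decimation factors.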
Figure 4.4: Universal Software Radio Peripheral [37]
Gnu Radio Software is organised in such a way that a graph which describes the data flow in the radio system is handed off to the runtime system for execution. The vertices are signal processing blocks and the edges are the connections between them. At the moment, the only fully supported operating system
is GNU/Linux, but there is ongoing work at least on Windows and Mac OS X. The currently available Gnu Radio Software supports only FM radio and HDTV reception. The FM radio requires a bandwidth of about 200 kHz, which is usually out of the range of PC sound cards, and the HDTV support necessitates a better ADC. With relatively little work, the support could be extended to include, for example, NTSC television, AM, SSB, telemetry and HAM packet radio.
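The idea of handing a data-flow graph to a runtime, with signal processing blocks as vertices and connections as edges, can be sketched in a few lines of plain Python. This is a toy scheduler written for illustration; the class and function names are hypothetical and this is not GNU Radio's actual API.

```python
# Toy "flow graph handed to a runtime" sketch: vertices are processing
# blocks, edges are directed connections, and a tiny scheduler forwards
# chunks of samples along the edges. Names are illustrative only.

class Block:
    def __init__(self, func):
        self.func = func          # per-chunk processing function
        self.downstream = []      # outgoing edges

    def connect(self, other):
        """Add a directed edge: this block's output feeds `other`."""
        self.downstream.append(other)
        return other

def run(source_block, samples):
    """Tiny runtime: evaluate each block and forward results along edges;
    blocks with no outgoing edge act as sinks."""
    frontier = [(source_block, samples)]
    outputs = []
    while frontier:
        block, data = frontier.pop()
        result = block.func(data)
        if block.downstream:
            for nxt in block.downstream:
                frontier.append((nxt, result))
        else:
            outputs.append(result)
    return outputs

# Graph: source -> DC offset remover -> amplifier (sink)
src = Block(lambda x: x)
dc = src.connect(Block(lambda x: [v - 1 for v in x]))
amp = dc.connect(Block(lambda x: [10 * v for v in x]))

print(run(src, [1, 2, 3]))  # [[0, 10, 20]]
```

The separation matters: the graph describes only what is connected to what, while the runtime decides scheduling, which is what makes the radio reconfigurable by swapping blocks or rewiring edges in software.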
4.4 Summary
The SPEAKeasy program was a successful feasibility demonstration of the software reconfigurable radio for military purposes. It also encompassed many important concepts for SDRs, such as the open and modular architecture as well as
the use of reconfigurable hardware. The program has had a number of successors.
The JTRS program focuses on portability of waveforms across radio platforms, interoperability among radios, reuse of common software, use of cost-effective commercial components and scalability. To achieve these goals, the JTRS
program has developed the SCA, which was discussed in the previous chapter.
There are several existing implementations of the SCA, e.g. the SCARI, and complete JTRS-compliant radio systems, such as the NRL Software Radio, the WITS
and the SDR-3000.
There are also several other SDR projects. A few of them were presented
in this chapter. The JCIT is another military software radio developed by the
NRL. Two significant academic projects were presented: the CHARIOT is Virginia Tech's SDR, and SpectrumWare was developed at MIT until the project evolved into the founding of Vanu Inc. and a commercial product line. In Europe,
many SDR related projects have been conducted within ACTS and IST technology programmes. The open source community has also launched an SDR project,
i.e. the GNU Radio.
Different radios for various application domains serve different purposes [2].
Trade-offs that depend on the domain have to be made. For example, handheld devices are limited by size and power consumption, whereas fixed station radios may e.g. relax RF front end requirements by employing multiple RF modules
or use high power DSPs. Therefore, none of the architectures proves to be better
than all the others. For instance, the WITS performs well in the main military
domains, while the CHARIOT’s approach is suitable for the low-power hand-held
radios, which need high bit-rates. Nevertheless, at a high level, the architectures
are usually very similar to reference models, such as the PMCS model.
5 Conclusions
The software defined radio is a far-reaching topic: it is an all-encompassing solution whose potential capabilities are limited only by imagination. Thus, in the scope of a short report, only a part of the related issues can be treated.
A relatively traditional view was chosen, omitting potential future trends like the
Cognitive Radio.
Chapter 2 discussed the implementation issues mainly focusing on the physical layer. The implementation of software defined radios is a wide and challenging
issue. The starting point of the chapter was the general structure of digital communication systems. The requirements set by the need for multi-band, multi-mode
operation and reconfigurability have implications on implementations of various
parts of a software defined radio set, ranging from the selection of processing
hardware to the RF front end.
There are a few critical points: considering the physical implementation, the
analog-to-digital conversion and the power consumption of many of the components are among the most important issues, which limit the choice of physical
layer architecture and eventually the achievable performance.
Software defined radio has often been seen as a design problem, mostly related
to the low-layer air interface implementation of a radio node, capable of operating in multiple frequency bands and multiple modes. There are many other significant tasks, such as resource management and the handling of reconfiguration, which suggest that the scope of the concept is wider. These topics were also briefly discussed in Chapter 2.
Chapter 3 introduced various standards related to software defined radios and
the current efforts to standardise frameworks that support an efficient development process.
In general, multiple aspects, including compatibility, portability and rapid development cycles, create demand for standardisation. Considering radio systems, the
great number of air interface standards has resulted in the need for reconfigurable
systems capable of operating together with a wide variety of legacy systems. Different services and communication environments need different modes, thus making reconfigurability the only feasible solution for integrating a wide range of
applications in a single radio set. There are also other emerging techniques that
need reconfigurability, such as context aware services that dynamically optimise
the air interface.
The framework architectures include the SCA and the SWRadio. They incorporate industry-standard object-oriented techniques into the processing environment of software defined radios. A detailed architecture defined for the processing platform would lead to weak portability of software modules. Therefore, the
focus has been on defining a common middleware that provides modular abstraction of the software and hardware platforms. The SCA and the SWRadio are open
architectures that extensively use the middleware. They are an essential path to
standardisation.
Chapter 4 focused on the research projects related to the software defined radios. Early projects have proven the viability of the concept and there are projects
in progress aiming to bring software defined radios into mainstream radio architectures by using the industry standard components that were discussed in the
chapter.
The SPEAKeasy program was a successful feasibility demonstration of the
software reconfigurable radio for military purposes. It has had a number of successors. The JTRS program focuses on the portability of waveforms across radio platforms, interoperability among radios, reuse of common software, use of
cost-effective commercial components and scalability. To achieve these goals, which are common to many projects, the JTRS program has developed the SCA, which
was discussed in Chapter 3. The SDR Forum, which is a non-profit organisation
promoting software defined radio technologies, has also contributed to the SCA.
There are several existing instantiations of the JTRS architecture, including the
WITS and the SDR-3000.
A number of other significant projects were also presented in Chapter 4. These
were the JCIT military radio, and two academic projects: Virginia Tech’s CHARIOT and MIT’s SpectrumWare. One of the main contributions of CHARIOT was
a new kind of reconfigurable processor, i.e. the configurable computing machine called STALLION. SpectrumWare has evolved into a company called Vanu
Inc. that has a line of software radio products, which use conventional general purpose computer processors instead of more specialised hardware. In Europe, many
projects related to software defined radio have been organised within EU’s technology programmes. The open source community has also had a project called
the GNU Radio.
A few critical issues were identified. Wideband A/D conversion and power consumption may be problematic in some application domains. The complexity of managing networks capable of supporting dynamically reconfigurable services may at least slow down the adoption of some of the ideas.
However, the adopted systems have proven the viability of the concept of software
defined radio. None of the architectures is optimal for all applications. Instead,
different approaches serve different purposes. The problem has been addressed:
the frameworks, i.e. SCA and SWRadio, attempt to promote the portability and
reuse of software across different architectures. The FCC approval of Vanu Software Radio was a positive sign considering the future possibilities of software
reconfigurable radios.
Abbreviations
ACTS      Advanced Communications Technologies and Services
ADC       Analog-to-Digital Converter
AGC       Automatic Gain Control
AIM       Advanced INFOSEC Module
ANSI      American National Standards Institute
API       Application Program Interface
ARIB      Association of Radio Industries and Businesses
ARQ       Automatic Repeat Request
ASIC      Application Specific Integrated Circuit
ASK       Amplitude Shift Keying
ATC       Air Traffic Control
BER       Bit-Error Rate
CCM       Configurable Computing Machine
CDMA      Code Division Multiple Access
CF        Core Framework
CHARIOT   Changeable Advanced Radio for Inter-Operable Telecommunications
CIC       Cascaded Integrator Comb
COMSEC    Communication Security
CORBA     Common Object Request Broker Architecture
COTS      Commercial Off-the-Shelf
CP        Crypto Processor
cPCI      Compact PCI
CW        Continuous Wave
DAB       Digital Audio Broadcasting
DAC       Digital-to-Analog Converter
DAMA      Demand Assigned Multiple Access
DCR       Direct Conversion Receiver
DECT      Digital Enhanced Cordless Telecommunications
DMR       Navy Digital Modular Radio
DPSK      Differential Phase Shift Keying
DSB       Double Sideband
DSP       Digital Signal Processor
DTD       Document Type Definition
DVB-H     Digital Video Broadcasting - Handheld
DVB-S     Digital Video Broadcasting - Satellite
DVB-T     Digital Video Broadcasting - Terrestrial
EHF       Extremely High Frequency
ENOB      Effective Number of Bits
EDGE      Enhanced Data rates for GSM Evolution
ETSI      European Telecommunications Standards Institute
FEC       Forward Error Correction
FFT       Fast Fourier Transform
FM        Frequency Modulation
FPGA      Field Programmable Gate Array
FSK       Frequency Shift Keying
GPP       General Purpose Processor
GSM       Global System for Mobile Communications
GuPPI     General Purpose PCI I/O
HAL-C     Hardware Abstraction Layer Connectivity
HF        High Frequency
ICNIA     Integrated Communications, Navigation, Identification and Avionics
IDL       Interface Definition Language
IEEE      Institute of Electrical and Electronics Engineers
IF        Intermediate Frequency
INFOSEC   Information Security
IOR       Interoperable Object Reference
ISO       International Organization for Standardization
IST       Information Society Technologies
ITU       International Telecommunication Union
JCIT      Joint Combat Information Terminal
JPO       Joint Program Office
JTRS      Joint Tactical Radio System
KP        Key Processor
KPP       Key Performance Parameter
LAN       Local Area Network
LMR       Land Mobile Radio
LNA       Low Noise Amplifier
LRU       Line Replaceable Unit
LSB       Least Significant Bit
LSB-SC    Lower Sideband - Suppressed Carrier
LUT       Look-Up Table
MAC       Medium Access Control
MDA       Model Driven Architecture
MPSK      M-ary Phase Shift Keying
MOPS      Millions of Operations Per Second
MRSC      Modular Software Defined Radio Consortium
MWS       Multimedia Wireless Systems
OE        Operating Environment
OMG       Object Management Group
OQPSK     Offset Quadrature Phase Shift Keying
ORB       Object Request Broker
OSI       Open System Interconnection
OTA       Over-the-Air
PAN       Personal Area Network
PCI       Peripheral Component Interconnect
PMCS      Programmable Modular Communications System
PIM       Platform-Independent Model
PSK       Phase Shift Keying
PSM       Platform-Specific Model
QAM       Quadrature Amplitude Modulation
QDPSK     Quadrature Differential Phase Shift Keying
QoS       Quality of Service
RISC      Reduced Instruction Set Computer
RF        Radio Frequency
RLC       Radio Link Control
SCA       Software Communications Architecture
SCARI     SCA Reference Implementation
SDR       Software Defined Radio
SFDR      Spurious Free Dynamic Range
SINAD     Signal-to-Noise-and-Distortion
SINCGARS  Single Channel Ground and Airborne Radio System
SNR       Signal-to-Noise Ratio
SPECtRA   Signal Processing Environment for Continuous Real-Time Applications
SRI       Soft Radio Interface
TAJPSP    Tactical Anti Jam Programmable Signal Processor
TIA       Telecommunications Industry Association
TRANSEC   Transmission Security
UHF       Ultra High Frequency
UML       Unified Modeling Language
UMTS      Universal Mobile Telecommunications System
USB       Universal Serial Bus
USB-SC    Upper Sideband - Suppressed Carrier
USRP      Universal Software Radio Peripheral
VHF       Very High Frequency
VME       Versa Module Europa
VSO       VITA Standards Organization
WFA       Work Force Administration
WITS      Wireless Information Transfer System
WNW       Wideband Networking Waveform
xMDS      Multichannel Multipoint Distribution System
XML       Extensible Markup Language
References
[1] R. E. Ziemer, R. L. Peterson, Introduction to Digital Communication, 2nd edition, Prentice Hall, 2000
[2] J. H. Reed, Software Radio: A Modern Approach to Radio Engineering, Prentice Hall, 2002
[3] Walter Tuttlebee, Software Defined Radio: Enabling Technologies, Wiley,
2002
[4] J. Mitola III, Software Radio Architecture, Wiley, 2000
[5] J. Mitola, “The Software Radio Architecture”, IEEE Communications Magazine, May 1995
[6] M. Dillinger, K. Madani, N. Alonistioti, Software Defined Radio: Architectures, Systems and Functions, Wiley, 2003
[7] H. Tsurumi, Y. Suzuki, “Broadband RF Stage Architecture for Software-Defined Radio in Handheld Applications”, IEEE Communications Magazine, February 1999
[8] Z. Salcic, C. F. Mecklenbrauker, “Software Radio - Architectural Requirements, Research and Development Challenges”, The 8th International Conference on Communication Systems, Volume 2, November 2002
[9] S. Srikanteswara, R. C. Palat, J. H. Reed, P. Athanas, “An Overview of Configurable Computing Machines for Software Radio Handsets”, IEEE Communications Magazine, July 2003
[10] Object Management Group, http://www.omg.org
[11] Software Communications Architecture Specification V3.0, JTRS JPO, August 2004
[12] SCA Developer’s Guide Rev 1.1, Raytheon Company, 2002
[13] JTRS ORD Waveform Extract, Version a, April 2003
[14] Communications Research Centre, “SCARI”, http://www.crc.ca/en/html/rmsc/home/sdr/projects/scari
[15] Software Defined Radio Forum, www.sdrforum.org
[16] SDR Forum, “Modular Multifunctional Information Transfer System
(MMITS) Task Group”, http://www.sdrforum.org/MTGS/formins.html
[17] P. Ting, B. H. Wang, C. S. Tao et al, An Adaptive Hardware Platform for
SDR, SDR Forum Contribution, 2001
[18] PIM and PSM for Software Radio Components, Final Adopted Specification,
OMG, May 2004
[19] R. L. Lackey, D. W. Upmal, “Speakeasy: The Military Software Radio”,
IEEE Communications Magazine, May 1995
[20] P. G. Cook, W. Bonser, “Architectural Overview of the SPEAKeasy System”,
IEEE Communications Magazine, April 1999
[21] W. Bonser, SPEAKeasy Military Software Defined Radio, International Symposium on Advanced Radio Technologies, 1998
[22] JTRS JPO, “Joint Tactical Radio System”, http://jtrs.army.mil/sections/technicalinformation/fset_technical_sca.html
[23] P. A. Eyermann, M. A. Powell, “Maturing the Software Communications
Architecture for JTRS”, Proceedings of the IEEE Military Communications
Conference, vol 1, 2001
[24] B. Tarver, E. Christensen, A. Miller et al, “Digital Modular Radio (DMR) as
a Maritime/Fixed Joint Tactical Radio System (JTRS)”, Proceedings of the
IEEE Military Communications Conference, vol 1, 2001
[25] J. Mitola III, “SDR Architecture Refinement for JTRS”, Proceedings of the
Military Communications Conference, vol 1, 2000
[26] L. Pucker, G. Holt, “Extending the SCA Core Framework Inside the Modem
Architecture of a Software Defined Radio”, IEEE Radio Communications,
March 2004
[27] Spectrum Signal Processing, “SDR-3000 cPCI Software Defined Radio Transceiver Platform”, http://www.spectrumsignal.com/products/sdr/sdr_3000.asp
[28] Telenetics, Inc., “Software Radio”, http://www.telenetics-inc.com/SW%20Radio.html
[29] Mobile and Portable Radio Research Group / Virginia Tech, “Virginia Tech’s
GloMo Effort”, http://www.mprg.org/research/glomo-archive/index.shtml
[30] Vanu, Inc., “The Software in Software Radio”, www.vanu.com
[31] J. Chapin, Overview of Vanu Software Radio, Vanu, Inc, 2002
[32] W. H. Tuttlebee, “Software Radio Technology: A European Perspective”,
IEEE Communications Magazine, February 1999
[33] D. Ikonomou, J. M. Pereira, J. da Silva,“EU funded R&D on Re-configurable
Radio Systems and Networks: The story so far”, Infowin Thematic Issue Mobile Communications, ACTS, 2000
[34] Information Society Technologies, “IST - Strategic Objectives - Mobile and
Wireless”, http://www.cordis.lu/ist/so/mobile-wireless/home.html
[35] SCOUT Project, “Smart user-centric communication environment”,
http://www.ist-scout.org/
[36] Assurance Technology Corporation, “Joint Combat Information Terminal
(JCIT)”, http://www.assurancetechnology.com/jcit.asp
[37] Free Software Foundation, “GNU Radio, The GNU Software Radio”,
http://www.gnu.org/software/gnuradio
Lemminkäisenkatu 14 A, 20520 Turku, Finland | www.tucs.fi
University of Turku
• Department of Information Technology
• Department of Mathematical Sciences
Åbo Akademi University
• Department of Computer Science
• Institute for Advanced Management Systems Research
Turku School of Economics and Business Administration
• Institute of Information Systems Sciences
ISBN 952-12-1486-4
ISSN 1239-1891