Chapter 3
Discrete-time Signals and Fourier series representation
In the previous two chapters, we discussed Fourier series analysis as applied to continuous-time (CT) signals. We saw that the Fourier series can be used to create an alternate representation of any periodic signal, built by summing sine and cosine functions, or equivalently complex exponentials, of different amplitudes and phases. Both forms are equivalent. In this chapter we extend Fourier series analysis to discrete-time signals.
Discrete signals are different from analog signals
Although some signals are naturally discrete, such as stock prices or the number of students in a class, many of the electronic signals we work with are sampled from continuous signals. Examples of sampled signals are voice, music, and medical/biological signals. These signals are sampled, which means their amplitude is measured at fixed time intervals, at an appropriate rate. Nowadays, nearly everything that goes through a small electronic device can be assumed to be a discrete signal. However, since not all signals are discrete by nature, discrete signals are generated from analog signals by a process called sampling, also referred to as A/D conversion or PCM. The discrete signal is generated from an analog signal by instantaneously sampling the analog signal and creating an array of sampled voltages.
Discrete vs. digital
There are two very similar sounding concepts related to discrete signals. In general terms, a discrete signal is continuous in amplitude but discrete in time. This means that it can have any value whatsoever for its amplitude, but it is defined or measured only at specific instants of time. Hence the term discrete applies
to time and not to the amplitude. For purposes of Fourier analysis, we assume
that the sampling is done at a constant interval of time between the samples.
Hence it is the time at which the signal is measured that is discrete, and not necessarily the amplitude.
A discrete signal is often confused with the term digital signal. Although in common language they are thought of as the same thing, a digital signal is a special type of discrete signal. Like a general discrete signal, it is defined only at specific time instants, but its amplitude is also constrained to certain values. In a binary digital signal the amplitude is limited to only two values, such as (-1, +1) or (0, 1). An M-level signal can take on only one of M pre-set amplitudes. These are examples of discrete signals with constrained amplitudes. Hence a digital signal is a specific type of discrete signal with constrained amplitudes. In this chapter we will be talking about general discrete signals, which include digital signals. Both of these types of signals are called discrete-time (DT) signals. For all discrete signals, and that includes all digital signals, it is time that is always discrete. We call a general sampling time the sampling instant.
Figure 3.1 – Here is an analog signal that is being sampled. Discrete sampling
collects exact amplitudes at the sampling instant, whereas digital sampling rounds
the values to the nearest allowed value. In (b), the sampling values are limited to
just 2 values, +1 or -1. Hence, each value from (a) has been rounded to either a
+1 or -1 to create a binary digital signal.
We note that creation of a digital signal throws away some information; hence it is an approximation of the discrete signal. This process can be thought of as resulting in signal-to-noise ratio (SNR) degradation. The SNR degradation decreases as the number of allowed levels is increased and/or the time between sampling instants is decreased. As the number of levels increases, the quantized levels are more likely to capture the true value.
Generating discrete signals
Often there is a need when doing math to distinguish between continuous-time (CT) and discrete-time (DT) signals. The convention is that a discrete-time signal is written with square brackets around the time index, n, whereas the continuous-time signal is written the usual way with round brackets around the time index t. Here is how we write the continuous vs. the discrete signals.
$x(t)$   Continuous
$x[n]$   Discrete
The sample index n is a way of identifying a sample. If we start counting at 1, then x[303] is the 303rd sample measured. The index n however does not imply any particular time. The real time is a quantity independent of the index and must be known in order to convert the index to real time. For example, if a sample is measured every tenth of a second, then 30.3 seconds have elapsed at sample 303. This real time depends on the sampling speed, which we will define as the sampling frequency.
In mathematics, we can create a discrete signal by multiplying a continuous
signal with a comb-like sampling signal, as shown in Figure 3.2(b). The
sampling signal shown in this figure is an impulse train but we will give it a
generic name of p(t). We write the sampled function x_s(t), still a function of continuous time, as the product of the continuous signal and the sampling signal.
x s (t )  x(t )p(t )
(3.1)
Figure 3.2 – A continuous-time signal sampled at uniform intervals Ts with an
ideal sampling function. The discrete signal in (c) x[n] consists only of the discrete
samples and nothing else. The continuous signal is shown in dashed line for
reference only. The receiver has no idea what it is. All it sees are the samples.
The time between the samples, or the sampling time, is referred to as T_s, and the sampling frequency or rate is defined as the inverse of this sample time.
Ideal sampling
Let’s assume that for p(t) we use an impulse train with period Ts . Multiplying
the impulse train with the signal, as in Eq. (3.1), we get a continuous signal with non-zero values only at the discrete-time sample points, referred to as nT_s. Hence for the continuous signal, the absolute time is the ordinal sample number times the time between samples. For the discrete signal, the sample interval is dropped and we talk only about n, the sample number; the sample time T_s becomes an independent parameter. Hence if we have two
discrete signals with exactly the same sample values, are the signals identical?
No, because the sampling time may be different.
For a signal sampled with sampling function, p(t), an impulse train, we write
the sampled signal per Eq. (3.1) as
$$p(t) = \sum_{n=-\infty}^{\infty} \delta(t - nT_s)$$

$$x_s(t) = x(t)\sum_{n} \delta(t - nT_s) \qquad (3.2)$$
The continuous signal x(t) can be brought into the summation by changing its argument to nT_s, as shown in Eq. (3.3). The result is then a discrete-time signal, defined only by its samples. Note that we write the discrete signal in square brackets. N is the total number of samples collected.
$$x[n] = \sum_{n=0}^{N} x(nT_s)\,\delta(t - nT_s) \qquad (3.3)$$
The term x  nTs  with round brackets is considered continuous since it is just
the value of the continuous-time signal at time nTs. The term x  n  however is
discrete, because the index n is integers by definition. The discrete signal x  n 
has values only at points t  nTs where n is the integer sample number. It is
undefined at all non-integers unlike the continuous-time signal. The sampling
time Ts relative to the signal frequency determines how coarse or fine the
sampling is.
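As a minimal numerical sketch of this idea (not from the text; the 5 Hz frequency, 0.01 s sampling period, and 100-sample length are assumed values), the following Python snippet builds a discrete signal by evaluating a continuous sinusoid at the instants t = nT_s:

import numpy as np

f0 = 5.0       # assumed signal frequency in Hz
Ts = 0.01      # assumed sampling period in seconds
N = 100        # number of samples to collect

n = np.arange(N)                        # sample index n = 0, 1, ..., N-1
x_n = np.sin(2 * np.pi * f0 * n * Ts)   # x[n] = x(n*Ts), Eq. (3.3) without the impulses

print(x_n[:5])                          # first few samples of the discrete signal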
Reconstruction of the original analog signal from discrete samples
Why do we sample? We sample a signal for one main reason, to reduce its
bandwidth. The other benefit we get from sampling is that often the signal
processing on digital signals is “easier”. However, once sampled, processed
and transmitted, this signal must then be converted back to its analog form.
The process of reconstructing a signal from its discrete samples is called
interpolation. This is the same thing we do when we plot a function. We
compute a few values at some selected points and then connect those points
to plot the representation of the function. The reconstruction by machines, however, is not as straightforward and requires giving them an algorithm consisting of math that they are able to do. This is where things get complicated.
First of all, we note that there are two conditions for ideal reconstruction. One
is that the signal must have been ideally sampled to start with, i.e. by an
impulse train such that the sampled values represent the true amplitudes of
the signal. Ideal sampling is hard to achieve but for our purposes, we will
assume it can be done.
The second is that the signal must not contain any frequencies above half of
the sampling frequency. This second condition can be met by first filtering the
signal by an anti-aliasing filter, a filter with a cutoff frequency that is half the
sampling frequency prior to sampling. Or we assume that the sampling
frequency chosen is large enough to encompass all the important frequencies
in the signal. Let’s assume this is done also.
Mathematically, the reconstruction process requires the use of the convolution
function. In an intuitive sense, think of convolution as a form of smearing of one function by another. In Fig. 3.3 we show the result of two convolutions. In (a) we convolve a narrow square pulse (made up of three impulses) with a square pulse; the result is a slightly distorted shape in column 3. In (b) we do the same thing but with a single impulse (which is a representation of ideal sampling); the result is no distortion, and we get the same signal, now relocated at the impulse time. Here we show only one impulse, but imagine a whole train of these, repeating this process every nT_s seconds, a process represented by the convolution of the shape function with an impulse train. In the frequency domain, this kind of convolution can be thought of as low-pass filtering. Hence the reconstruction process in the frequency domain involves filtering the signal; however, for the rest of the discussion we will stay in the time domain.
Figure 3.3 – Convolving a shape with an impulse function moves the shape
unchanged to a new location.
For the purposes of reconstruction, we choose an arbitrary pulse shape, h(t).
The idea is that we will replace each discrete sample with this pulse shape,
and we are going to do this by convolving the pulse shape with the sampled
signal. We write the sampled signal (the bracketed sum below) convolved with an arbitrary shape, h(t), as

$$x_r(t) = \left[\sum_{n} x(nT_s)\,\delta(t - nT_s)\right] * h(t) \qquad (3.4)$$
The index r indicates that this is a reconstructed signal. At each sample n, we
multiply the sample (a single value) by h(t) (a little wave of some sort lasting
some time). This convolution in Eq. (3.4) centers the “little wave” at the
sample location. All these arrayed waves are then added in time. (Note that
they are in continuous-time.) Now depending on the h(t) or the little wave
selected, we will get a reconstructed signal which may or may not be a good
representation of the original signal.
Simplifying this equation by completing the convolution of h(t ) with the
impulse train, we write this somewhat simpler equation for the reconstructed
signal.
$$x_r(t) = \sum_{n} x(nT_s)\,h(t - nT_s) \qquad (3.5)$$
To examine the possibilities for shapes h(t), we pick the following three: a rectangular pulse, a triangular pulse, and a sinc function. It turns out that these three cover most of what is used in practice. Each of these
“shapes” has a distinctive frequency response as shown in Fig. 3.4. We use the
frequency response to determine the effect these shapes will have on the
reconstructed signal.
Figure 3.4 - We will reconstruct the analog signal by replacing each sample by one
of these shapes on the left: rectangular, triangular or the complicated looking sinc
function. Each has a distinct frequency response as shown on the right.
Case 1: Zero-order hold
Fig. 3.4(a) shows a square pulse. The implication is that we will replace each
sample with a square pulse of amplitude equal to the sample value. This
basically means that the sample amplitude is held to the next sampling instant
in a flat line. The hold time period is Ts . This form of reconstruction is called
sample-and-hold or zero-order-hold (ZOH) method of signal reconstruction.
Zero in ZOH is the slope of the interpolation function, a straight line of zero
slope connecting one sample to the next. We may think of this as a simplistic method, but if done with small enough resolution, that is, a very narrow rectangle in time, it can do a decent job of reconstructing the signal. The
shape function h(t) in this case is a rectangle.
$$h(t) = \mathrm{rect}(t - nT_s) \qquad (3.6)$$
The reconstructed signal is now given using the general expression of Eq.
(3.4), where we substitute the [rect] shape into Eq. (3.5).
$$x_r(t) = \sum_{n} x(nT_s)\,\mathrm{rect}(t - nT_s) \qquad (3.7)$$
We show the index n as running over all samples in the general form. In
Fig. 3.5(a), we see a signal reconstructed using a ZOH circuit. From the
convolution property, the shape is now repeated at each sample.
Figure 3.5 – A zero-order-hold method of reconstructing the original analog
signal. This method results in a stair-step reconstruction. If sample time, Ts is
small compared to the period of the analog signal, the reconstruction can be
pretty good.
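The stair-step behaviour of the ZOH can be sketched numerically. The following Python fragment is illustrative only; the 2 Hz test signal and 20 Hz sampling rate are assumed values, and the hold is implemented by indexing the most recent sample for each point on a dense time axis:

import numpy as np

f0, Fs = 2.0, 20.0                      # assumed signal frequency and sampling rate
Ts = 1.0 / Fs
n = np.arange(20)                       # 20 samples = one second of signal
x_n = np.sin(2 * np.pi * f0 * n * Ts)   # the discrete samples

t = np.linspace(0, 1, 1000, endpoint=False)   # dense axis for the reconstruction

# Zero-order hold: at each instant t, output the most recent sample value
idx = np.floor(t / Ts).astype(int)
x_zoh = x_n[idx]                        # stair-step reconstruction of Fig. 3.5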
Case 2: First order hold (linear interpolation)
In this case, we replace each sample with a little pulse shape that looks like a
triangle of width 2Ts as given by the expression
$$h(t) = \begin{cases} t/T_s, & 0 \le t \le T_s \\ 2 - t/T_s, & T_s \le t \le 2T_s \\ 0, & \text{else} \end{cases} \qquad (3.8)$$
This function is shaped like a triangle and the reconstructed signal equation
from Eq. (3.5) now becomes
$$x_r(t) = \sum_{n} x(nT_s)\,\mathrm{tri}\!\left(\frac{t - nT_s}{T_s}\right) \qquad (3.9)$$
We see in Fig. 3.6(b) that now, instead of non-overlapping rectangles, we have overlapping triangles. That is because we set the width of the triangle to twice the sample time. This double width does two things: it keeps the power the same as in the case of the rectangle, and it fills the in-between points in a linear fashion. This method is called linear interpolation, as we are just connecting the points. It is called first-order hold (FOH) because effectively we are connecting adjacent samples with a straight line of finite slope. Why, you may ask, use triangles when we could just connect the samples? Machines cannot "see" the samples nor "connect" the samples; addition is about all they can do well. Hence this method replaces the linear interpolation that you and I might do visually with a simple addition of displaced triangles, as sketched in the short example after the figure below. It also gives us a hint that we can use any shape we want, and in fact of any length, not just two times the sample time. This does introduce the idea of non-causality, since the shape of the sample now extends back into the past, but we can deal with this.
Figure 3.6 – A First-order-hold (FOH) method of reconstruction by replacing each
sample with a triangle of the same amplitude but twice the sample time width.
The summation of overlapping triangles results in a signal that appears to linearly
connect the samples.
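Here is the short sketch referred to above: a hedged Python illustration (the 2 Hz signal and 20 Hz rate are assumed, and tri is a helper defined here, not a library function) showing that the sum of overlapping triangles of width 2T_s linearly interpolates between the samples:

import numpy as np

f0, Fs = 2.0, 20.0                      # assumed signal frequency and sampling rate
Ts = 1.0 / Fs
n = np.arange(20)
x_n = np.sin(2 * np.pi * f0 * n * Ts)

t = np.linspace(0, 1, 1000, endpoint=False)

def tri(u):
    # unit triangle: 1 - |u| for |u| <= 1, zero elsewhere
    return np.where(np.abs(u) <= 1, 1.0 - np.abs(u), 0.0)

# Eq. (3.9): sum of shifted, scaled triangles = first-order hold
x_foh = sum(x_n[k] * tri((t - k * Ts) / Ts) for k in n)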
Case 3: Sinc interpolation
We used triangles in FOH and that seems to produce a better-looking reconstructed signal than ZOH. We can in fact use just about any shape we want to represent a sample, from the rectangle to a more complex shape, such as a sinc function. A sinc function seems like an unlikely choice since it is non-causal (it extends into the past), but it is in fact an extension of the idea behind the first two methods. Both zero-order and first-order holds are forms of polynomial curve fit. The first-order hold is a linear polynomial, and we
continue in this fashion with second-order on up to infinite orders to represent
just about any type of wiggly shape we can think of. A sinc function is an
infinite-order polynomial and is the basis of perfect reconstruction. The reconstructed signal becomes a sum of scaled, shifted sinc functions, just as with the triangular shape. Even though the sinc function is infinitely long, it is zero-valued at regular intervals, and this interval is set equal to the sampling period. Since each sinc pulse crosses zero at all the sampling instants other than its own, the sinc centered at one sample adds no interference to the sinc pulses centered at the other sampling times. Hence this shape is considered to be free of inter-symbol interference (ISI).
Figure 3.7 - Reconstructing a signal using sinc functions. We show four sinc
functions added together to reconstruct this discrete signal from its four samples.
Note that they all cross zero at all sampling instants other than where they are
centered. So one sinc pulse does not add anything to the adjacent pulse. The
reconstructed signal is now smoothly varying with exactly correct amplitudes at
each sampling instant.
The equation we get for the reconstructed signal in this case is similar to the
first two cases, with the reconstructed signal summed with each sinc located
at nTs .
$$h(t) = \mathrm{sinc}(t - nT_s)$$

$$x_r(t) = \sum_{n} x(nT_s)\,\mathrm{sinc}(t - nT_s) \qquad (3.10)$$
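A hedged numerical sketch of Eq. (3.10) follows; the 2 Hz signal and 20 Hz sampling rate are assumed, and numpy's sinc (the normalized sin(pi*u)/(pi*u)) is used with its argument scaled by T_s so that each pulse crosses zero at the other sampling instants:

import numpy as np

f0, Fs = 2.0, 20.0                      # assumed signal frequency and sampling rate
Ts = 1.0 / Fs
n = np.arange(20)
x_n = np.sin(2 * np.pi * f0 * n * Ts)

t = np.linspace(0, 1, 1000, endpoint=False)

# Eq. (3.10): each sample is replaced by a scaled, shifted sinc and the
# continuous-time results are added up
x_rec = sum(x_n[k] * np.sinc((t - k * Ts) / Ts) for k in n)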
Dealing with non-causality
If we start at n = 0, and we need to represent a sample by a sinc shape that extends 8 samples into the past and 8 into the future, then the first 8 samples will be incorrectly represented. Hence in real signals, some non-informational starting bits are used to get the process started, and these are later discarded.
In Fig. 3.8 we see this process for a longer signal, with each sample being
replaced by a sinc function and the resulting reconstructed signal is compared
to the original signal.
Figure 3.8 – Reconstruction with Sinc pulses means replacing the sample with a
scaled sinc function. The original signal in (c) has been shifted slightly
(artificially) so you can see how good of a job the sinc function does. It has
reconstructed the original analog signal from its n samples in (a) perfectly.
Clearly the sinc construction does a very good job. How do we tell which method is better? ZOH has clearly done a rather hatchet job of it. But to properly assess these methods requires a full understanding of the Fourier transform, a topic covered in Chapter 4. So we will drop this subject for now, with the recognition that a signal can be reconstructed by linear superposition of many different shapes, with the sinc function being one example, albeit a really good one, the one we call "perfect reconstruction".
The sinc function detour
We will be coming across the sinc function a lot. It is absolutely the most useful and most used mathematical concept in signal processing. Hence we examine the sinc function in a bit more detail now. In Fig. 3.9, we see the function plotted in the time domain by Matlab. This form is called the normalized sinc function. It is a continuous function of time t and is not periodic. This form of the continuous-time sinc function is given by
$$h(t) = \begin{cases} 1, & t = 0 \\ \frac{\sin(\pi t)}{\pi t}, & t \ne 0 \end{cases} \qquad (3.11)$$
At the origin its value is set to 1.0. As we see from this equation, the function is zero for all non-zero integer values of t, because in that case the sine of an integer multiple of π is zero. In Matlab this function is given as sinc(t); no π is needed as it is already programmed in. The Matlab plot yields the first zero crossing at t = 1, and as such the width of the main lobe is 2 units. We can create any main lobe width by inserting a variable T_s into the equation, as in Eq. (3.12). The generic sinc function of lobe width T_s (main lobe width = 2T_s) is given by
$$h(t) = \frac{\sin(\pi t / T_s)}{\pi t / T_s} \qquad (3.12)$$
Figure 3.9 – The sinc function in time-domain. The signal is non-periodic.
Its peak value is 1 for the normalized form. Note that the main lobe of the
sinc function spans two times the parameter T_s. In (b) we plot the
absolute value of this function so the lobes on the negative side have
flipped to the top.
Fig. 3.9 shows this function for two different values of T_s: T_s = 2 and T_s = 1 (the normalized case). The sinc function is often plotted in the second style
as in Fig. 3.9(b) with amplitudes shown as absolute values. This style makes it
easy to see the lobes and the zero crossings. It is the preferred style in
frequency domain, but not in time domain. Note that the function has a main
lobe that is 2 times Ts seconds wide. All the other lobes are Ts seconds wide.
The sinc function has some interesting and very useful properties. The first is that its integral is equal to 1.0.

$$\int_{-\infty}^{\infty} \mathrm{sinc}(2t/T_s)\, dt = \mathrm{rect}(0) = 1$$
The second interesting and very useful property, seen from Eq. (3.12), is that as T_s decreases, the sinc function approaches an impulse. This is of course apparent from Fig. 3.9: a smaller value of T_s means a narrower main lobe. A narrow main lobe makes the central part impulse-like, and hence we note that as T_s goes to zero, the function approaches a delta (impulse) function. From that we get that a group of uniformly spaced sincs would approach an impulse train. In discrete time, we say that a sinc function, when sampled at integer arguments, is exactly the same as a delta function.
Another interesting property is that the sinc function is equivalent to the sum of the complex exponentials within a band of frequencies. This is a magical property in that it hints at how the Fourier transform works by scaling these exponentials.
$$\mathrm{sinc}(t) = \frac{1}{2\pi}\int_{-\pi}^{\pi} e^{j\omega t}\, d\omega \qquad (3.13)$$
This property is best seen in Fig. 3.10, which shows what we get when we add 90 harmonic complex exponentials together. The result looks very much like an impulse train.
Figure 3.10 – In this figure we add 90 consecutive harmonics (f = 1, 2, … 90), and
what we get looks very nearly like an impulse train.
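A small Python sketch of the same experiment (the 90 harmonics match the figure; the time axis is an assumed choice) shows how the sum of many complex exponentials peaks sharply at integer times, much like an impulse train:

import numpy as np

t = np.linspace(-2, 2, 4001)             # a dense time axis around t = 0
s = np.zeros_like(t, dtype=complex)

for f in range(1, 91):                   # add harmonics f = 1, 2, ..., 90 Hz
    s += np.exp(1j * 2 * np.pi * f * t)

# |s| is about 90 at t = 0, +/-1, +/-2 and close to zero in between,
# i.e. the sum approximates an impulse train
print(np.abs(s[2000]), np.abs(s[2100]))  # value at t = 0 vs. t = 0.1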
The sinc function, as we shall see later, is also the frequency response of a square pulse. We can say that it is a representation of a square pulse in the frequency domain. If we take a square pulse (also called a rectangle, probably a better name anyway) in the time domain, then its Fourier representation will be a sinc. Alternately, if we take a time-domain sinc function, as we are doing here, then its frequency representation is a rectangle, which says that it is tightly limited in bandwidth.
Sampling rate
How do we determine what is an appropriate sampling rate? In Fig. 3.11 we
show an analog signal sampled at two different rates, in (a) the signal is
sampled slowly and in (b) sampled rapidly. At this point, our idea of slow and
rapid is arbitrary.
Figure 3.11 – The sampling rate is an important parameter, (a) analog signal
sampled probably too slowly, (b) probably too fast.
It is obvious by looking at the samples in Fig. 3.11(a) that the rate is not quick enough to capture all the ups and downs of the signal; some high and low points have been missed. But the rate in Fig. 3.11(b) looks like it might be too fast, as it is capturing more samples than we may need. Can we get by with a smaller rate? Is there an optimum sampling rate that captures just enough information such that the analog signal can still be reconstructed faithfully from the discrete samples?
Shannon’s Theorem
There is an optimum sampling rate. This optimum sampling rate was
established by Harry Nyquist and Claude Shannon and others before them.
But the theorem has come to be attributed to Shannon and is called Shannon's Theorem. Although Shannon is often given credit for it, the theorem has a long history. Even before Shannon, Harry Nyquist (a Swede who immigrated to the USA in 1907 and did all his famous work there) had
already established the Nyquist Rate. Shannon took it further and applied the
idea to reconstruction of discrete signals. And even before Nyquist, the
sampling theorem was used and specified in its present form by a Russian
scientist by the name of V. A. Kotelnikov in 1933. And even he may not have been the first. So simple and yet so profound, the theorem is very important in
all types of signal processing.
The theorem says:
If an analog signal contains among its frequency content a maximum frequency of f_max, then the analog signal can be represented faithfully by N equally spaced samples, provided the sampling rate is at least two times f_max samples per second.
Although the theorem is named after Shannon, the sampling frequency is named after Nyquist. We define the sampling frequency F_s as the number of samples collected per second. For a faithful representation of an analog signal, the sampling rate F_s must be equal to or greater than two times the maximum frequency contained in the analog signal. We write this rule as

$$F_s \ge 2 f_{max} \qquad (3.12)$$
The Nyquist Rate is defined as the case of sampling frequency Fs exactly
equal to 2 times f max . This is also called the Nyquist threshold.
Ts is defined as the time period between the samples, and is the inverse of the
sampling frequency, Fs .
Figure 3.11 – There is no relationship between the sampling period and the
fundamental period of the signal. They are independent quantities.
A practical signal will have many frequencies. In setting up the Fourier series
representation, we define the lowest of all its frequencies, f 0 as its
fundamental frequency. The fundamental period of the signal, T0 is the
inverse of the fundamental frequency as we defined this in chapter 1.
The maximum frequency, f max contained within the signal is used to
determine an appropriate sampling frequency for the signal, Fs . An
important thing to note is that the fundamental frequency, f 0 is not related
to the maximum frequency of the signal. Hence there is also no relationship
whatsoever between the fundamental frequency, f 0 of the analog signal and
the sampling frequency, Fs picked to create a discrete signal from the analog
signal. The same is true for the fundamental period, T0 of the analog signal
and the sampling period Ts . They are not related either. This point can be
confusing. T0 is a property of the signal, whereas Ts is something chosen
externally for sampling purposes.
The Shannon theorem applies, strictly speaking, to baseband signals, or what we call low-pass signals. There is a complex-envelope version where even
though the center frequency of a signal is high due to having been modulated,
the signal can still be sampled at twice its bandwidth and be perfectly
reconstructed. This is called the band-pass sampling theorem. We won’t
discuss it here.
Frequency confusion of sampled discrete signal
In Fig. 3.13(a) we see some discrete samples, and in (b) we see that these points could have come from any of the waves shown that pass through those same points. In reconstruction, we ask ourselves: which of these waves did
these samples actually come from? This is a conundrum that is always present
with discrete samples unless we know something about the signal ahead of
time. Thankfully since the signals are not coming from outer space or from an
unknown source, we often do know something about them. We know the
approximate range of the frequency transmitted and hence can narrow down
the range of possibilities.
Figure 3.13 - Three signals of frequency 1, 3 and 5 Hz all pass through the same
discrete samples shown in (a). How can we tell which frequency was transmitted?
The samples in Fig. 3.13(a) could have come from any of these three signals.
In fact, they could have come from an infinite number of others which are not
shown. This is a basic and very important ambiguity of the discrete signals.
This effect that many different frequencies can be mapped to the same
samples is called aliasing. This effect underlies all the other confusions we
will see when computing the spectrum of discrete signals. Later we will discuss how the spectrum of a discrete signal repeats; it repeats precisely because we do not know the real frequency of the signal, so the result, the Fourier representation, gives us all the possibilities. In Fig. 3.14(b),
we see the spectrum of a signal in (a) and we see that the spectrum seems to
repeat with regularity. This repeating spectrum is a consequence of the
discrete signal frequency ambiguity.
Figure 3.14 – When a continuous signal is sampled with a given sampling
frequency, the spectrum repeats with that same sampling frequency. In (a), the
samples were obtained by sampling the signal with Fs = 20 samples per second,
and we see in (b) the spectrum repeating every 20 Hz. This is a consequence of frequency ambiguity.
Let's examine a signal of frequency 5 Hz, sampled at a rate of 8 samples per second as in Fig. 3.15(a). When the reconstruction is done using sinc functions, we get a signal of 3 Hz. Clearly this is wrong, since we already know the answer: it should have been 5 Hz. We used sinc functions to reconstruct the signal (which we said is an ideal reconstruction) and we still got the wrong answer. How can that be?
Figure 3.15 – A 5 Hz signal sampled at 8 samples per second is reconstructed using sinc reconstruction. The reconstructed signal passes through all the samples without error but is of the wrong frequency.
Before we look into the reason, let's give a name to these frequencies that can fit through the same points. These are called aliases. An alias is an alternate representation of something that is also valid. Like an equation with many roots, all these frequencies are theoretically correct representations, but in practice of course not what we want. Just as we can ignore all possible but out-of-range roots of an equation, we do the same here. The correct set of frequencies (which we call the spectrum) is called the principal alias: the desired one, the correct one.
This ambiguity is caused by an inadequate choice of sampling rate. In
Fig. 3.16 we look again at the same signal of frequency 5 Hz from Fig. 3.15,
but now sampled at 16 samples per second, instead of at 8 samples per
second.
Figure 3.16 – A signal sampled at 16 samples per second is reconstructed. Since there are so many aliases that fit the same samples, which frequency is the correct one? Is the one with the lowest frequency the correct one? In the first case we see 5/4 = 1.25 cycles of the signal, in the second case we see 11/4 = 2.75 cycles, and in the third we see 27/4 = 6.75 cycles, etc.
On receiving the samples, the D to A converter, like a good mathematician, does a thorough search and gives us several choices. The offered guesses are 5, 11, 21, 27, 37 Hz, etc. This time we see that the 5 Hz signal is indeed one of the choices; in fact it is the lowest one. We do not see a 3 Hz signal as a choice at all. Why 3 Hz appeared as a choice when sampled at 8 samples per second (samps) but not when sampled at 16 samps, we shall now see.
In Fig. 3.15 the sample rate is 8 samps and in Fig. 3.16 it is 16 samps. In the second case the sample rate is larger than 10 Hz, which is two times 5 Hz, and hence above the Nyquist threshold. At this sampling rate, the original frequency is revealed as the first solution; all the aliases are higher in frequency than the original.
However, in the case of sampling at 8 Hz, we see that the lowest alias is not the actual signal. When this happens we can no longer definitively identify the original frequency. The threshold for the 5 Hz sine wave is a sampling rate of 10 Hz. Below this rate, aliases will appear below the original 5 Hz and fool the reconstruction process. From this we recognize the usefulness of the famous theorem: if an analog signal is sampled at a rate of at least twice the signal frequency, the D to A converter will pick the actual frequency as one of the possible choices. In fact it will be the lowest one of all the choices.
The sampling frequency chosen must be higher than the Nyquist rate, else we
will get the wrong answer.
What actually happens when you under-sample
If a sinusoidal signal of frequency f_0 (since a sine wave has only one frequency, its highest and lowest frequencies are the same) is sampled at less than two times the maximum frequency, F_s < 2 f_0, then the signal that is reconstructed is not the same signal at all. The expression for all the aliases that can result for such a signal is given by

$$y(t) = \sin(2\pi (f_0 - mF_s)t) \qquad (3.13)$$
Here m is a positive integer satisfying this equation.
$$| f_0 - mF_s | \le \frac{F_s}{2}$$
This equation is very important but it is not intuitive. So let's take a look at an example.
Example 3.1
Take the signal with f0 = 5 Hz and Fs = 8 Hz (samps). We use Eq. (3.13) to find the possible aliases. Here are the first three aliases (m = 1, 2, 3).
$$m = 1: \quad y(t) = \sin(2\pi(5 - 1\cdot 8)t) = \sin(2\pi(-3)t)$$
$$m = 2: \quad y(t) = \sin(2\pi(5 - 2\cdot 8)t) = \sin(2\pi(-11)t)$$
$$m = 3: \quad y(t) = \sin(2\pi(5 - 3\cdot 8)t) = \sin(2\pi(-19)t)$$
The first three alias frequencies are 3, 11, and 19 Hz, each separated by 8 Hz, the sampling frequency. The significance of m, the order of the aliases, is as follows. When the signal is reconstructed, we again need to filter it with an anti-aliasing filter to remove all higher-order aliases. Setting m = 1 implies the filter is set at a frequency of mF_s/2, or in this case 4 Hz. For m = 2, the filter would be set at F_s = 8 Hz. Typically this filter is set to half of the sampling frequency, so we only see the frequencies that fit those samples below this number. Higher-order aliases, although present, are filtered out.
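A short Python sketch of this bookkeeping (the 5 Hz signal and 8 samples-per-second rate are the values of Example 3.1; nothing else is implied) lists the first few alias frequencies from Eq. (3.13):

f0, Fs = 5.0, 8.0                       # Example 3.1 values

aliases = [abs(f0 - m * Fs) for m in (1, 2, 3)]   # |f0 - m*Fs|
print(aliases)                          # [3.0, 11.0, 19.0] Hz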
Figure 3.17, which is a spectrum of the reconstructed signal, shows Eq. (3.13) in action. Each m in Eq. (3.13) represents a shift. For m = 1, the cutoff point is 4 Hz, which only lets us see the 3 Hz frequency but not 11 Hz or higher. Note that the first set of components is at 3 Hz from the center.
Figure 3.17 - The spectrum of the signal repeats with the sampling frequency of 8 Hz. Only the 3 Hz alias falls below the 4 Hz cutoff.
The fundamental pair of components is at +5 and -5 Hz. Now, per the repetition property of the sampled spectrum, this spectrum (the bold pair of impulses at ±5 Hz) repeats with the sampling frequency of 8 Hz. Hence the first copy of the pair centered at 0 Hz is now centered at 8 Hz (dashed lines). The lower component falls at 8 - 5 = 3 Hz and the upper one at 8 + 5 = 13 Hz. The second shift centers the components at 16 Hz, with the lower component at 16 - 5 = 11 Hz and the higher at 16 + 5 = 21 Hz. The same thing happens on the negative side. All of these are called alias pairs. They are all there unless the signal is filtered to remove them.
What happens when you do not under-sample
The sampling theorem states that you must sample at twice the maximum frequency in the signal in order to properly reconstruct the signal from its samples. We saw that the consequence of not doing so is that we get aliases, per Eq. (3.13), at the wrong frequencies. But what if we do sample at twice the rate or greater? Does that have an effect, and what is it?
Example 3.2
$$x(t) = 0.2\sin(2\pi t) + \sin(4\pi t) + 0.7\cos(6\pi t) + 0.4\cos(8\pi t)$$
Let’s take this signal as shown in Fig. 3.18(a). The highest frequency is 4 Hz.
But we are told that the amplifier has a non-linearity which spreads the signal
a bit. So just to be on the safe side we sample this signal at 10 Hz and then to
be really safe, we sample it again at 16 Hz. The spectrum, as computed from the Fourier series coefficients (FSC) of the 4 frequencies in this signal, is shown in Fig. 3.18(b). (We have not yet discussed how to compute this discrete spectrum, but the idea is exactly the same as for the continuous-time case.) For discrete signals, the Fourier series coefficients repeat at integer multiples of the sampling frequency F_s. The entire spectrum is copied and shifted to a new center frequency to create an alias spectrum. This continues forever on
both sides of the principal alias, shown in a dashed box in the center of Fig. 3.17.
Figure 3.18 – A composite signal of several sinusoids is sampled at more than twice its highest frequency component. In (b), we see the discrete coefficients (what we call the spectrum) repeating with the sampling frequency.
Figure 3.18 shows several alias spectrums for the signal when sampled at 10
Hz. The spectrum itself is only 8 Hz wide so there is no overlap but the
spectrums are very close together. In Fig. 3.19, we see the same signal
sampled at 16 Hz, and we see that there is plenty of distance between the
spectrums.
Lowering the sampling rate decreases the spacing between the alias spectrums. The replications would start to overlap if they were not spaced far enough apart. If this were to happen, we would not be able to separate one replication from another, making the original signal impossible to reconstruct. When non-linearities are present, the sampling rate must be higher than the Nyquist threshold to allow room for the spectrum to spread without overlap. The same is true for the effect of the roll-off of the anti-aliasing filter. Since practical signals do not have sharp cutoffs, some guard band has to be allowed, and this guard band needs to be taken into account when choosing a sampling frequency.
Figure 3.19 – The higher the sampling frequency, the further apart are the
replications of the spectrum.
If the spectrums do alias, or overlap, the effect cannot be removed by filtering. Since we do not have a priori knowledge of the signal spectrum, we are not likely to be aware of any aliasing if it happens. We always hope that we have correctly guessed the highest frequency in the signal and hence pick a reasonably large sampling frequency to avoid this problem.
However, usually we do have a pretty good idea about the target signal
frequencies. We allow for uncertainties by sampling at a rate that is higher
than twice the maximum frequency, and usually much higher than twice this
rate. For example, take audio signals which range in frequency from 20–
20,000 Hz. When recording these signals, they are typically sampled at
44.1 kHz (CD), or 48 kHz (professional audio), or 88.2 kHz, or 96 kHz rates
depending on quality desired. Signals subject to non-linear effects spread in
bandwidth after transmission and have sampling rates of 4 to 16 times the
highest frequency to cover the full spread signal.
Discrete signal parameters
There are important differences between discrete and analog signals. An
analog signal is defined by parameters of frequency and time. To retain this
analogy of time and frequency for discrete signals, we use n, the sample
number as the unit of discrete-time. The frequency however gives us a
problem. If in discrete-time, time has units of sample, then the frequency of a
discrete signal must have units of radians per sample.
The frequency of a discrete signal is indeed a different type of frequency than the traditional frequency of continuous signals. We call it the digital frequency and use the symbol Ω to designate it. We can show the equivalence of this frequency to the analog case by noting how we write the two forms of signals.
Analog:    $x(t) = \sin(2\pi f_0 t)$
Discrete:  $x[n] = \sin(2\pi f_0\, n T_s)$
The first expression is a continuous signal and the second a discrete signal.
For the discrete signal, we replace continuous time t with nT_s. Alternately, we can write the discrete signal as in Eq. (3.14) by noting that the sampling time is the inverse of the sampling frequency. (We always have the issue of sampling
frequency even if the signal is naturally discrete and was never sampled from
a continuous signal. In such a case, the sampling frequency is just the inverse
of the time between the samples.)
$$x[n] = \sin\!\left(2\pi \frac{f_0}{F_s}\, n\right) \qquad (3.14)$$
Digital frequency of discrete signals
Now we define the digital frequency Ω by this expression.

$$\Omega = \frac{2\pi f_0}{F_s} \qquad (3.15)$$
Substituting this definition of digital frequency into Eq. (3.14), we get

$$x[n] = \sin(\Omega n)$$
Now we have two similar looking expressions for a sinusoid, a discrete and a
continuous form.
x(t )  sin  t 
x[n ]  sin  n 
(3.15)
The digital frequency Ω is equivalent in concept to the analog frequency ω, but these two "frequencies" have different units. The analog frequency has units of radians per second, whereas the digital frequency has units of radians per sample.
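A hedged numerical check of this equivalence (the 100 Hz tone and 800 Hz sampling rate are assumed values) confirms that the sampled analog form of Eq. (3.14) and the digital-frequency form sin(Ωn) produce exactly the same samples:

import numpy as np

f0, Fs = 100.0, 800.0                   # assumed analog frequency and sampling rate
Omega = 2 * np.pi * f0 / Fs             # digital frequency, radians per sample (Eq. 3.15)

n = np.arange(8)
x1 = np.sin(2 * np.pi * f0 * n / Fs)    # sampled analog form, Eq. (3.14)
x2 = np.sin(Omega * n)                  # digital-frequency form
print(np.allclose(x1, x2))              # True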
We define the fundamental period of a discrete signal as a certain number of samples, which we will write as N_0. This is equivalent in concept to the fundamental period of an analog signal, T_0. A discrete periodic signal is
defined as repeating after some number N_0 of samples. In the continuous domain, one period represents 2π radians. To retain the equivalence in both domains, the N_0 samples must also cover 2π radians, from which we have this relationship.

$$\Omega_0 N_0 = 2\pi \qquad (3.16)$$

The units of Ω_0 are radians per sample and the units of N_0 are just samples. To be periodic, we want the product to be 2π or an integer multiple of 2π. The digital frequency is a measure of the number of radians the signal advances per sample, and when we multiply it by the fundamental period N_0 we get an integer multiple of 2π.
Figure 3.20 – A discrete signal in the time domain can be referred to by its sample numbers, n (1 to N), or by the digital-frequency phase advance. Each sample advances the phase by 2π/N radians. In this example, N is 8. In (a), the x-axis is in terms of real time. In (b) it is in terms of the sample identification number n. In (c) we note the radians that pass between each sample, such that the total excursion over one period is 2π.
There are three ways to specify a sampled signal. In Figure 3.20(a), we show two periods of a signal. This is a continuous signal, hence the x-axis is continuous time, t. Now we sample this signal. Each cycle is sampled with eight samples, so we show a total of 17 samples, numbered from -8 to +8, in Fig. 3.20(b). This is the discrete representation of the signal x(t) in terms of samples, which are identified by the sample number n. This is one way of showing a discrete signal: each sample has a number to identify it.
A periodic signal is said to cover 2π radians in one cycle. We can replace the sample number with a phase value for an alternate way of showing the discrete signal. In Fig. 3.20, there are 8 samples over each 2π radians, or
equivalently a discrete angular frequency of 2 8 radians per sample. This is
the digital frequency,
0
that is pushing the signal forward by this many
radians. Each sample moves the signal further in phase by  / 4 radians from
the previous sample, with two cycles or 16 samples covering 4 radians.
Hence we can label the samples in radians. Both forms, using n or the phase
are equivalent but the second form is more common for discrete signals in text
books, but tends to be more confusing and non-intuitive.
Are discrete signals periodic?
Fourier series representation requires that the signal we are subjecting to
Fourier representation be periodic. So can we assume that a discrete signal, since it is sampled from a periodic signal, is also periodic? The answer is
strangely enough, No. Here we look at the conditions of periodicity for a
continuous and a discrete signal.
Continuous:  $x(t) = x(t + T)$
Discrete:    $x[n] = x[n + N]$          (3.17)
This expression says that if the values of a signal repeat after a certain number of samples, N, for the discrete case, or after a certain period of time, T, for the continuous case, then the signal is periodic. The smallest value of N that
satisfies this condition is called the fundamental period of the discrete
signal. Since we use sinusoids as basis functions for Fourier analysis, let’s
apply this general condition to a sinusoid. To be periodic, a discrete sinusoid
which is defined in terms of the digital frequency and time sample n, must
repeat after N samples, hence it must meet this condition.
$$\cos(\Omega_0 n) = \cos(\Omega_0 (n + N)) \qquad (3.18)$$
We expand Eq. (3.18) to examine under which condition the expression is
true.
$$\cos(\Omega_0 (n + N)) = \cos(\Omega_0 n)\cos(\Omega_0 N) - \sin(\Omega_0 n)\sin(\Omega_0 N) \qquad (3.19)$$
For this expression to be true, we first need the terms cos(Ω_0 N) and sin(Ω_0 N) on the RHS to be equal to 1 and 0, respectively.
$$\cos(\Omega_0 n) = \cos(\Omega_0 n)\underbrace{\cos(\Omega_0 N)}_{=\,1} - \sin(\Omega_0 n)\underbrace{\sin(\Omega_0 N)}_{=\,0} \qquad (3.20)$$
For these two conditions to be true, we must have
$$\Omega_0 N = 2\pi k \qquad \text{or} \qquad \Omega_0 = \frac{2\pi k}{N} \qquad (3.21)$$
We conclude that a discrete sinusoid is periodic if its digital frequency is a rational-number multiple of 2π. This implies that discrete signals are not periodic for all values of Ω_0, nor for all values of N. For example, if Ω_0 = 1, then no integer values of N and k can be found to make the signal periodic per Eq. (3.21).
We can write the expression for the fundamental period now as
$$N = \frac{2\pi k}{\Omega_0} \qquad (3.22)$$

The smallest integer k that results in an integer N gives the fundamental period, if it exists. Hence for k = 1, we get N = N_0.
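Since the condition involves the ratio Ω_0/(2π) being rational, the fundamental period can be found mechanically. The sketch below (illustrative only; fundamental_period is a hypothetical helper, not a standard routine) uses Python's Fraction to reduce that ratio and read off N_0 as its denominator:

from fractions import Fraction

def fundamental_period(ratio):
    # ratio = Omega_0 / (2*pi) expressed as a Fraction p/q.  The smallest N
    # with Omega_0 * N an integer multiple of 2*pi is the reduced denominator.
    return Fraction(ratio).denominator

print(fundamental_period(Fraction(1, 5)))   # Omega_0 = 2*pi/5  ->  N0 = 5
print(fundamental_period(Fraction(3, 8)))   # Omega_0 = 3*pi/4  ->  N0 = 8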
Example 3.3
What is the digital frequency of this signal? What is its fundamental period?
$$x[n] = \cos\!\left(\frac{2\pi}{5} n + \frac{\pi}{3}\right)$$
Figure 3.21 – Signal of Example 3.3
The digital frequency of this signal is 2 5 because that is the coefficient of
time index n. The fundamental period N 0 is equal to 5 samples which we find
using Eq. (3.22) with k = 1.
N0
2
0
2
2 5
5
Example 3.4
What is the period of this discrete signal? Is it periodic?
$$x[n] = \sin\!\left(\frac{3\pi}{4} n + \frac{\pi}{4}\right)$$
Figure 3.22 – Signal of Example 3.4. The period of this signal is 8 samples.
The digital frequency is $\Omega_0 = \frac{3\pi}{4}$. The fundamental period is equal to

$$N_0 = \frac{2\pi k}{\Omega_0} = \frac{2\pi (k=3)}{3\pi/4} = 8 \text{ samples}$$

The period is 8 samples, but it takes 6π radians to get the same sample values again. As we see, the signal covers 3 cycles in 8 samples. As long as we get an integer number of samples in some integer multiple of 2π, the signal is considered periodic.
Example 3.5
Is this signal periodic?
x[n]  sin(.5n   )
The digital frequency of this signal is .5. Its period from Eq. (3.22) is equal to
N
2 k
4 k
0
Since k must be an integer, this number will always be irrational; hence it will never result in repeating samples. The continuous signal is of course periodic, but as we can see in Fig. 3.23, there is no periodicity in the discrete
samples. They are all over the place, with no regularity.
Figure 3.23 – Signal of Example 3.5, which never achieves an integer number of samples in any integer multiple of 2π.
Do discrete signals have harmonics?
What is the relationship of the discrete signal sin[Ωn] to a continuous signal sin(kωt), with k an arbitrary integer?
In Fig. 3.24, we plot the discrete and continuous harmonics of a sinusoid for k
= 1, 2, 3. (Chosen arbitrarily.) The sinusoid signal is sampled with Fs = 12.
We set $\omega_0 = 2\pi/6$ for the continuous case and $\Omega_0 = 2\pi/12 = \pi/6$ for the discrete case.
The six continuous and discrete sinusoids are given as:

Harmonic 1:   $x(t) = \sin(2\pi t/6)$      $x[n] = \sin\!\left(\frac{\pi}{6}\, n\right)$
Harmonic 2:   $x(t) = \sin(4\pi t/6)$      $x[n] = \sin\!\left(\left(2\pi + \frac{\pi}{6}\right) n\right)$
Harmonic 3:   $x(t) = \sin(6\pi t/6)$      $x[n] = \sin\!\left(\left(2\cdot 2\pi + \frac{\pi}{6}\right) n\right)$
The indices are t and n. The index t runs continuously from 0 to infinity, while
n takes on only integer values. We plot both the continuous and the discrete
harmonics in Fig. 3.24 using these six equations. On the left are the
continuous harmonics and on the right are the discrete harmonics.
Figure 3.24 - Three analog harmonics sin(kωt) (left) and three discrete harmonics sin[kΩn] (right). We can distinguish the analog harmonics but not the discrete harmonics. All discrete harmonics look the same.
As k increases, the continuous harmonics increase in oscillations. In the right column we see the harmonics of the discrete signal with digital frequency Ω_0, and as k is increased from 1 to 2 and 3, the discrete harmonics are all exactly the same. The harmonics don't change at all, sort of like a standing wave. It is as if you cannot change the looks of a discrete signal by changing the harmonic index, something we can clearly do with an analog signal. This is an important point and we will come back to it again when we talk about the basis functions for Fourier analysis of a discrete signal.
Discrete harmonics as basis of discrete-time Fourier series
The continuous-time Fourier series (CTFS) is written in terms of
trigonometric functions or complex exponentials. Because these functions are
harmonic and hence orthogonal to each other, both trigonometric and
complex exponentials form a basis set for complex Fourier analysis. The series
coefficients can be thought of as scaling of the basis functions.
We are now going to look at the Fourier series representation for discrete-time
signals using discrete-time complex exponentials as the basis functions.
In the previous section we showed that discrete sinusoidal harmonics of the form sin(Ω_0 n) and sin((2πk + Ω_0)n) are identical to each other for any integer k. The same applies to the complex exponentials. A discrete complex exponential is written by replacing t in the continuous-time domain with n, and ω with the digital frequency Ω. Now we have these two forms of the complex exponential (CE), just as we wrote the two forms of the sinusoid in Eq. (3.15).
Continuous CE:  $e^{jk\omega_0 t}$
Discrete CE:    $e^{jk\Omega_0 n}$

Note that the units of the exponents of both $e^{j\omega t}$ and $e^{j\Omega n}$ are radians. We expand the discrete form as follows.

$$e^{jk\Omega_0 n} = e^{jk\frac{2\pi}{N} n}, \qquad k = 0, 1, 2, \ldots \qquad (3.23)$$
This is confusing, with both the k and n indices present in the exponent. In the continuous form we also have both of these variables: k (an integer multiplier) is used to increase the frequency by a factor of k, whereas t is just time. Here the same k is used to increase the frequency, and n is time, playing the same role as t in the continuous case. So the presence of both n and k should be understood. We give the harmonics a general expression like this:
$$\phi_k[n] = e^{jk(2\pi/N)n}, \qquad k = 0, 1, 2, \ldots, N-1 \qquad (3.24)$$
All harmonics are a function of time (n in this case) and their frequency is k times the fundamental frequency, Ω_0 = 2π/N. All of these harmonics repeat with period N. The index k goes from 0 to N-1. For k = N, we get the same harmonic as for k = 0.
$$\phi_{k+N}[n] = e^{j(k+N)(2\pi/N)n} = e^{jk(2\pi/N)n}\, e^{jN(2\pi/N)n} = e^{jk(2\pi/N)n}\, e^{j2\pi n} = \phi_k[n] \qquad (3.25)$$
Example 3.6
Show the harmonics of a discrete exponential of frequency 2π/12 if it is being sampled with a sampling period of 0.25 seconds.
The frequency of this signal is 2π/12 = π/6. For an exponential given by $e^{-j\omega_0 t}$, we replace ω_0 with π/6 and t with n/4 (T_s = 0.25). We write the discrete signal as

$$x[n] = e^{-j\frac{\pi}{24} n}$$
Let's plot this signal along with its next two harmonics, which are

$$e^{-j\frac{\pi}{24} n}, \qquad e^{-j\left(\frac{\pi}{24} + 2\pi\right) n} \;(k=1), \qquad \text{and} \qquad e^{-j\left(\frac{\pi}{24} + 4\pi\right) n} \;(k=2)$$
Why is there only one plot? Simply because the three signals are identical and
indistinguishable just as we see in Fig. 3.24.
Figure 3.25 – Signal of Example 3.6 (a) the imaginary part is a sine wave, (b) the
real part, which is of course a cosine. The picture is the same for all integer values of k.
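The claim can be checked numerically. In the hedged Python sketch below (the π/24 frequency comes from Example 3.6; everything else is assumed), adding 2π or 4π to the digital frequency leaves the samples of the complex exponential unchanged:

import numpy as np

n = np.arange(48)
Omega = np.pi / 24

h0 = np.exp(-1j * Omega * n)
h1 = np.exp(-1j * (Omega + 2 * np.pi) * n)   # k = 1
h2 = np.exp(-1j * (Omega + 4 * np.pi) * n)   # k = 2

print(np.allclose(h0, h1), np.allclose(h0, h2))   # True True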
This example shows that for a discrete signal the concept of harmonic frequencies, as in the CT domain, does not lead to meaningful harmonics, for either sinusoids or discrete-time exponentials: all the harmonics are the same. But then how can we do Fourier series analysis of a discrete signal if all the exponential orthogonal basis signals are identical? The whole approach is based on the uniqueness of the harmonics, and hence on their coefficients being meaningful.
So far we have only looked at discrete signals whose frequencies differ by a multiple of 2π. Although the harmonics obtained this way are harmonic in a mathematical sense, they are pretty much useless in the practical sense. We call them non-distinct. So where are the distinct harmonics that we can use for Fourier analysis? We cannot use Ω_0 and 2πk + Ω_0 since they are not distinct, but
now we look instead inside one 2π range, from any 2πk to 2π(k + 1). We find hiding in this range N unique harmonics perfectly suitable for Fourier analysis.
We look at this condition of periodicity of a discrete signal again.
$$e^{j\Omega_0 (n+N)} = e^{j\Omega_0 n}\, e^{j\Omega_0 N} = e^{j\Omega_0 n} \qquad \Longrightarrow \qquad e^{j\Omega_0 N} = 1$$
For the second statement to be true, the conditions in Eq. (3.21) must hold here as well. This says two important things about discrete signals.

1. If the digital frequency of a signal is a rational number (a ratio of integers) times 2π, then the discrete signal is periodic; else it is not periodic and we cannot do Fourier analysis on it.

2. Given a discrete signal of period N, the signal will have only N unique harmonics. Each such harmonic is given by
$$\phi_0[n] = e^{j\frac{2\pi}{N} n}, \qquad \phi_k[n] = e^{jk\frac{2\pi}{N} n} \quad \text{for } k = 0, 1, \ldots, N-1$$
Increasing k beyond N-1 will give the same harmonic as for k = 0 again.
The second condition shows us how this is different from continuous signals.
For continuous signals, k can go on to infinity. For discrete signals, this is not
so. We have just N unique harmonics and the Fourier analysis must be done
using only these N harmonics and no more.
Example 3.7
Let's see what happens as the digital frequency of a signal is varied within the 0 to 2π range, instead of in integer multiples of 2π. Take this signal
x[n]  e
 2
j
 6

n

Its digital frequency is 2 6 and its period N 0 is equal to 6 samples. We
now know that the signals of digital frequencies 2 6 and 14 6 are exactly the
same. So we will increase the digital frequency by 2 but instead in 6 steps,
each time increasing it by 2 6 so that after 6 steps, the total increase will be
2 as we go from 2 6 to
We do not jump from 2 6 to 14 6 but
instead are moving in between. We can start with zero frequency or from
14 6
2 6 or 2 , as it makes no difference where you start. Starting with 0th
harmonic, if we move in six steps, we get these 6 unique signals.
The variable k, the index of the harmonics, steps from 0 to K - 1. The index n remains the index of the sample, or time. Note that since the signal is periodic with N_0 samples, K is equal to N_0.
We can visualize this process as shown in Fig. 3.26 for N = 6. Harmonic 6 and
11 are identical. This is a set of N harmonics (within any 2 range) that are
unique and are the basis set for discrete Fourier analysis.
Figure 3.26 – Discrete harmonic frequencies in the range of 2πm to 2π(m + 1).
In Figure 3.27, we plot the discrete complex exponentials from Fig. 3.26 so we can examine them visually. There are two columns in this figure, with the left containing the real part and the right the imaginary part, together representing the complex-exponential harmonic. The analog harmonics are shown in dashed lines for elucidation. The discrete frequency appears to increase (more oscillations of the samples) at first, but then after 3 steps (half of the period N) starts to back down again. Reaching the next harmonic at 2π, the discrete signal is back to where it started. Further increases repeat the same cycle.
Figure 3.27 – The real and the imaginary component of the discrete signal
harmonics. They are all different.
Let's take a closer look. The first panel in Fig. 3.27 shows a zero-frequency harmonic; all real samples are 1.0, since this is a cosine. In (b), the continuous signal is of frequency 1 Hz and the discrete samples come from cos(2πn/6). In (c), we see a continuous signal of 2 Hz and discrete samples from cos(2πn/3). We see that by changing the phase from (a) to (g) we have gone through a complete 2π cycle. In (g) the samples are identical to case (a), yet the continuous frequency is much higher. The samples for case (g) look exactly the same as for case (a), case (h) looks exactly the same as case (b), and so on.
Our goal here is to understand the nature of the harmonics of a discrete signal. We need the harmonics as a basis set so we can represent the signal being analyzed. The problem is that the discrete harmonics are different from the easy-to-comprehend harmonics of continuous signals.
For a discrete signal, we find that the unique harmonics come not from adding multiples of 2π radians to the digital frequency but instead lie inside the 0 to 2π range.
These signals are an orthogonal basis set and can be used to create a Fourier series representation of a discrete signal. The weighted sum of these N special signals forms the discrete Fourier series representation of the signal. Unlike the continuous-time case, here the meaningful set of harmonics is limited to a finite number, equal to the period of the discrete signal in samples. Hence the number of unique coefficients is finite and equal to the period N.
The book by Stein Ref [] gives a great visual example of this phenomenon. Imagine we are seeing the spokes of a wagon wheel in a movie. A movie is of course analog reality sampled at 30 frames per second. When the rotation rate of the wheel matches the frame rate, the spokes appear to be stationary, but when the coach slows down or speeds up, we see the spokes slightly going backwards or forwards. This occurs due to the mismatch of the sampling speed and the actual frequency of the wheel.
Discrete-time Fourier Series (DTFS) representation
The Discrete-time Fourier series (DTFS) is the discrete representation of a discrete-time periodic signal by a linear weighted combination of N_0 complex exponentials.
These orthogonal exponentials exist within just one cycle of the signal, with a cycle defined as a 2π phase shift. Since the number of harmonics available is finite, the spectrum is discrete, just as it is for a continuous periodic signal. If a discrete signal is periodic with period N_0, then there are only K_0 = N_0 harmonics, and not an infinite number as in the continuous-time Fourier series representation. For example, if we have a signal that is being sampled at 5 samples per cycle, with Ω_0 = 2π/5, then it has just N_0 = K_0 = 5 usable harmonics. These are
$$e^{j\frac{2\pi}{5}n},\quad e^{j\frac{4\pi}{5}n},\quad e^{j\frac{6\pi}{5}n},\quad e^{j\frac{8\pi}{5}n},\quad e^{j\frac{10\pi}{5}n}$$
Here n is the time index and k ranges from 1 to 5 (shown multiplied by 2π).
The discrete-time representation of a signal is written as the weighted sum of these K_0 harmonics. Note that when we talk about the period of the signal, we refer to it as N_0, and when we talk about the harmonics we refer to their number as K_0, although the two are equal:

$$K_0 = N_0$$

We can now write the Fourier representation of the discrete signal x[n] as the weighted sum of these K_0 harmonics.

$$x[n] = \sum_{k=0}^{K_0-1} C_k\, e^{jk\frac{2\pi}{N_0}n} \qquad (3.27)$$
Note that the kth harmonic is given by $e^{jk\frac{2\pi}{N_0}n}$ and only K_0 of these are unique, which defines the range of the sum.
In Eq. (3.27), $C[k],\ k = 0, 1, \ldots$ are the complex Fourier series coefficients, which we discussed in Chapter 2. There are K_0 harmonics, hence you see the coefficients with index k. The DTFS coefficient of the kth harmonic is exactly the same as the coefficient of the harmonic whose index differs from k by an integer multiple mN_0, so that:
$$C_k = C_{k+mN_0}$$
The kth coefficient is equal to

$$C_k = \sum_{n=0}^{N_0-1} x[n]\, e^{-jk\Omega_0 n} \qquad (3.28)$$
The $(k+mK_0)$th coefficient is given by

$$C_{k+mK_0} = \sum_{n=0}^{N_0-1} x[n]\, e^{-j(k+mN_0)\Omega_0 n} = \sum_{n=0}^{N_0-1} x[n]\, e^{-jk\Omega_0 n}\, e^{-jmN_0\Omega_0 n}$$
The second exponential in the product is

$$e^{-jm\Omega_0 N_0 n} = e^{-jm2\pi n}$$

It is equal to 1, because the value of the complex exponential at integer multiples of 2π is equal to 1.0. So we have:
$$C_{k+mN_0} = \sum_{n=0}^{N_0-1} x[n]\, e^{-jk\Omega_0 n} = C_k \qquad (3.29)$$
So from the fact that the harmonics repeat, the coefficients also repeat. This is shown in Fig. 3.13. The spectrum of the discrete signal (comprised of the coefficients) keeps repeating after every integer multiple of K_0 samples, or by the sampling frequency. In a practical sense, this means we can limit the computation to just the first K_0 harmonics. Again, K_0 is equal to N_0.
This is the reason the discrete Fourier analysis of a sampled signal results in replicating spectrums. It happens because after the first K_0 harmonics, the harmonics repeat and so do their coefficients. We get nothing new. This is a very different situation from continuous signals, which do not have such behavior. The continuous-time coefficients are unique for all values of k. Discrete-signal coefficients repeat because, as we allow k to vary over all integer values from −∞ to +∞, we can no longer tell the harmonics apart outside of a 2π range. We end up looking at the same K_0 numbers over and over again.
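A small NumPy check (a sketch using the 1/N_0 normalization that appears in the examples below; the repetition holds with or without it) confirms that C[k] and C[k + N_0] are identical for an arbitrary periodic signal.

```python
import numpy as np

# Check that the DTFS coefficients repeat: C[k] == C[k + N0].
N0 = 8
rng = np.random.default_rng(0)
x = rng.standard_normal(N0)                   # one period of an arbitrary signal

def dtfs_coeff(x, k):
    n = np.arange(len(x))
    return np.sum(x * np.exp(-1j * 2 * np.pi * k * n / len(x))) / len(x)

for k in range(N0):
    assert np.isclose(dtfs_coeff(x, k), dtfs_coeff(x, k + N0))
print("C[k] equals C[k + N0] for every k")
```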
DTFS Examples
Example 3.7
Find the discrete-time Fourier series coefficients of this signal.

$$x[n] = 1 + \sin\!\left(\frac{2\pi}{10}n\right)$$
The fundamental period of this signal is equal to 10 samples from observation.
Hence it can only have at most 10 unique coefficients.
Fig. 3.28 – Signal of example 3.7 and its Fourier coefficients (a) The discrete signal
with period = 10, (b) the fundamental spectrum, (c) the true repeating spectrum.
Now we write the signal in the complex exponential form.

$$x[n] = 1 + \frac{1}{2j}\, e^{j\frac{2\pi}{10}n} - \frac{1}{2j}\, e^{-j\frac{2\pi}{10}n}$$
Note that because the signal has only two frequencies, corresponding to index k = 0 for the zero frequency and k = ±1 for the fundamental frequency, the coefficients for the remaining harmonics are zero. We can write the coefficients as
$$C[k] = \frac{1}{N_0}\sum_{n=0}^{N_0-1} x[n]\, e^{-jk\Omega_0 n} = \frac{1}{10}\sum_{n=0}^{9} x[n]\, e^{-jk\frac{2\pi}{10}n}$$

$$C[0] = \frac{1}{10}\sum_{n=0}^{9} x[n]\, e^{-j(0)\frac{2\pi}{10}n} = \frac{1}{10}\sum_{n=0}^{9} x[n] = 1$$
In computing the next coefficient, we compute the value of the complex exponential for k = 1, and then for each value of n we use the corresponding x[n] and the value of the complex exponential. The summation gives us these values.

$$C_1 = \frac{1}{10}\sum_{n=0}^{9} x[n]\, e^{-j(1)\Omega_0 n} = \frac{1}{2j}$$

$$C_{-1} = \frac{1}{10}\sum_{n=0}^{9} x[n]\, e^{-j(-1)\Omega_0 n} = -\frac{1}{2j}$$
Of course, we can see the coefficients directly in the complex exponential form of the signal. The rest of the coefficients, from C_2 to C_9, are zero (with C_{-1} appearing as C_9). However, the coefficients repeat beyond C_9, so that C_{k+10} = C_k for all k. We see this in the spectrum of the signal in Fig. 3.28(c). In Fig. 3.28(b) we show only the fundamental spectrum, but in fact the true spectrum is the one in (c). The spectrum repeats every 10 samples forever.
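As a numerical cross-check (a NumPy sketch, not part of the original example), computing all ten coefficients of x[n] = 1 + sin(2πn/10) directly from the analysis sum reproduces C[0] = 1, C[1] = 1/(2j) and C[9] = −1/(2j), with the rest zero.

```python
import numpy as np

# All ten DTFS coefficients of x[n] = 1 + sin(2*pi*n/10), with 1/N0 normalization.
N0 = 10
n = np.arange(N0)
x = 1 + np.sin(2 * np.pi * n / N0)

k = np.arange(N0).reshape(-1, 1)              # column of harmonic indices
C = (x * np.exp(-1j * 2 * np.pi * k * n / N0)).sum(axis=1) / N0
print(np.round(C, 3))                         # ~[1, -0.5j, 0, ..., 0, 0.5j]
```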
Example 3.8
Compute the DTFSC of this discrete signal.

$$x[n] = 2.5 + 3\cos\!\left(\frac{2\pi n}{5}\right) + 1.5\sin\!\left(\frac{2\pi n}{4}\right)$$

The period of the second term, the cosine, is 5 samples and the period of the sine is 4 samples. The period of the whole signal is 20 samples because that is the least common multiple of 4 and 5. This signal repeats after every 20 samples. The fundamental frequency of this signal is

$$\Omega_0 = \frac{2\pi}{20} = \frac{\pi}{10}$$
Figure 3.29 - Signal of example 3.8 (a) The discrete signal with period = 20, (b)
the fundamental spectrum, (c) the true repeating spectrum.
$$C_k = \frac{1}{N_0}\sum_{n=0}^{N_0-1} x[n]\, e^{-jk\Omega_0 n} = \frac{1}{20}\sum_{n=0}^{19} x[n]\, e^{-jk\frac{\pi}{10}n}$$

$$C_0 = \frac{1}{20}\sum_{n=0}^{19} x[n] = 2.5$$
The Fourier series coefficients of this signal repeat with a period of 20. Consecutive harmonic exponentials differ in digital frequency by 2π/20 = π/10. Based on this knowledge, we can see that the 2π/5 exponential falls at k = 4, since 4(π/10) = 2π/5, and the 2π/4 exponential falls at k = 5, since 5(π/10) = π/2.
We can also write this signal as

$$x[n] = 2.5 + 1.5\,e^{j\frac{2\pi}{5}n} + 1.5\,e^{-j\frac{2\pi}{5}n} - j\,0.75\,e^{j\frac{2\pi}{4}n} + j\,0.75\,e^{-j\frac{2\pi}{4}n}$$
From here, we see that the zero-frequency harmonic has a coefficient of 2.5. The frequency 2π/5 has coefficients of magnitude 1.5, and the frequency 2π/4 has coefficients of magnitude 0.75, as shown in Fig. 3.29.
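The same kind of numerical cross-check works here (a NumPy sketch, not from the text): the nonzero coefficients of Example 3.8 land at k = 0, 4, 5, 15 and 16 with the magnitudes read off above.

```python
import numpy as np

# Coefficient magnitudes of Example 3.8 over one period of N0 = 20 samples.
N0 = 20
n = np.arange(N0)
x = 2.5 + 3 * np.cos(2 * np.pi * n / 5) + 1.5 * np.sin(2 * np.pi * n / 4)

k = np.arange(N0).reshape(-1, 1)
C = (x * np.exp(-1j * 2 * np.pi * k * n / N0)).sum(axis=1) / N0
for idx in (0, 4, 5, 15, 16):
    print(f"|C[{idx}]| = {abs(C[idx]):.2f}")  # 2.50, 1.50, 0.75, 0.75, 1.50
```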
Example 3.9
Compute the DTFS of a periodic discrete signal that repeats with period 4 and
has two impulses of amplitude 2 and 1.
The period of this signal is 4, as we can see, and its fundamental frequency is

$$\Omega_0 = \frac{2\pi}{4} = \frac{\pi}{2}$$
Figure 3.30 – Signal of example 3.9. (a) The discrete signal with period = 4, (b) the fundamental spectrum, (c) the true repeating spectrum.
We write the expression for the DFSC from Eq. (3.26)

$$C_k = \frac{1}{4}\sum_{n=0}^{3} x[n]\, e^{-jk\frac{\pi}{2}n}$$
Solving this summation in closed form is the hard part. In nearly all such problems we need to know series summations, or the equation has to be solved numerically. In this case we do know the relationship. We first express the complex exponential in its Euler form. We know the values of the complex exponential for the argument π/2 are 0 and 1, respectively, for the cosine and sine. We write this concisely as
$$\cos\!\left(\frac{\pi}{2}\right) = 0 \quad\text{and}\quad \sin\!\left(\frac{\pi}{2}\right) = 1$$
We get for the above exponential

$$e^{-j\frac{\pi}{2}nk} = \left[\cos\!\left(\frac{\pi}{2}\right) - j\sin\!\left(\frac{\pi}{2}\right)\right]^{nk} = (-j)^{nk}$$
Now substitute this into the DTFSC equation and calculate the coefficients, knowing there are only N = 4 harmonics in the signal because the number of harmonics is equal to the fundamental period of the signal.
$$C[0] = \left|\tfrac{1}{4}(2 + 1)\right| = 0.75$$
$$C[1] = \left|\tfrac{1}{4}(2 - j1)\right| = 0.56$$
$$C[2] = \left|\tfrac{1}{4}(2 - 1)\right| = 0.25$$
$$C[3] = \left|\tfrac{1}{4}(2 + j1)\right| = 0.56$$
We see these four values repeated in Figure 3.30(b).
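A quick NumPy check of these magnitudes is possible if we assume the two impulses sit at n = 0 and n = 1 over one period (the exact placement is read off the figure, so it is an assumption here).

```python
import numpy as np

# Assumed placement: x = [2, 1, 0, 0] over one period of 4 samples.
x = np.array([2.0, 1.0, 0.0, 0.0])
N0 = len(x)
n = np.arange(N0)
k = np.arange(N0).reshape(-1, 1)
C = (x * np.exp(-1j * 2 * np.pi * k * n / N0)).sum(axis=1) / N0
print(np.round(np.abs(C), 2))                 # -> [0.75 0.56 0.25 0.56]
```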
Example 3.10
Find the DFS of the following sequence
x[n] = [0, 1, 2, 3, 0, 1, 2, 3, ...]
Figure 3.31 - (a) The discrete signal with period = 4, (b) the fundamental
spectrum, (c) the true repeating spectrum.
The fundamental period of this series is equal to 4 by observation. We will now use a compact form of the exponentials to write out the solution:

$$W_4 = e^{-j\frac{2\pi}{4}}$$

Note that this W is not a variable but a constant. Its value for these parameters is equal to −j. Now we write the coefficients as
$$C[k] = \sum_{n=0}^{3} x[n]\, W_4^{nk}, \qquad k = 0, 1, 2, 3$$
From here we get

$$C(0) = \sum_{n=0}^{3} x[n]\, W_4^{0\cdot n} = 0 + 1 + 2 + 3 = 6$$

$$C(1) = \sum_{n=0}^{3} x[n]\, W_4^{1\cdot n} = \sum_{n=0}^{3} x[n]\,(-j)^{n} = 0 - j - 2 + 3j = -2 + 2j$$

$$C(2) = \sum_{n=0}^{3} x[n]\, W_4^{2\cdot n} = \sum_{n=0}^{3} x[n]\,(-j)^{2n} = 0 - 1 + 2 - 3 = -2$$

$$C(3) = \sum_{n=0}^{3} x[n]\, W_4^{3\cdot n} = \sum_{n=0}^{3} x[n]\,(-j)^{3n} = 0 + j - 2 - 3j = -2 - 2j$$
The absolute values of these coefficients normalized by N = 4, are plotted in
Fig. 3.31(b) and in the repeating form in (c).
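Since the unnormalized sum above is exactly a 4-point DFT of one period, we can verify the hand computation with NumPy's FFT (a sketch, not part of the original example).

```python
import numpy as np

# The unnormalized sums above are a 4-point DFT of one period of the sequence.
x = np.array([0.0, 1.0, 2.0, 3.0])
C = np.fft.fft(x)                             # same convention as W4 = exp(-j*2*pi/4)
print(np.round(C, 3))                         # -> [ 6.+0.j -2.+2.j -2.+0.j -2.-2.j]
print(np.round(np.abs(C) / 4, 3))             # magnitudes normalized by N = 4
```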
Now we come to the square pulse which we have been covering in each
chapter.
DTFS of a repeating square pulse signal
Find the discrete-time Fourier series coefficients of a square pulse signal of
width L samples and duty cycle L/N.
Here N is the period of the pulse and L is the width of the pulse itself.
We give a definition to this pulse signal as
1 mN  n  mN  L  1
x[n]  
m  0, 1, 2,
0 mN  n  (m  1) N  1
(3.30)
To compute the DTFSC, we will use this important property of the geometric series. The sum of a finite geometric series is given by

$$\sum_{n=0}^{L-1} a^{n} = \frac{1-a^{L}}{1-a}, \qquad a \ne 1 \qquad (3.31)$$
This is a very important property and will be utilized often in this book.
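A quick numerical check of this identity, with a chosen to be a complex exponential of the kind that appears in the DTFSC derivation below (a NumPy sketch, values picked arbitrarily):

```python
import numpy as np

# Finite geometric-series identity with a complex-exponential ratio.
L, N, k = 4, 10, 3
a = np.exp(-1j * 2 * np.pi * k / N)
direct = sum(a**n for n in range(L))          # term-by-term sum
closed = (1 - a**L) / (1 - a)                 # closed-form value
print(np.isclose(direct, closed))             # -> True
```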
Fig. 3.32- (a) The discrete signal, (b) discrete-time Fourier series (DTFSC)
coefficients, (c) the true repeating spectrum
We write the coefficients of this signal as

$$C_k = \sum_{n=0}^{N-1} x[n]\, e^{-j\frac{2\pi}{N}kn} = \sum_{n=0}^{L-1} e^{-j\frac{2\pi}{N}kn}$$

We can use the geometric series of Eq. (3.31) to express the last sum as

$$C_k = \begin{cases} L & k = 0, \pm N, \pm 2N, \ldots \\[4pt] \dfrac{1 - e^{-j\frac{2\pi}{N}Lk}}{1 - e^{-j\frac{2\pi}{N}k}} & \text{else} \end{cases}$$
This last term can be simplified as follows

$$\frac{1 - e^{-j\frac{2\pi}{N}Lk}}{1 - e^{-j\frac{2\pi}{N}k}}
= \frac{e^{-j\frac{\pi}{N}Lk}\left(e^{j\frac{\pi}{N}Lk} - e^{-j\frac{\pi}{N}Lk}\right)}{e^{-j\frac{\pi}{N}k}\left(e^{j\frac{\pi}{N}k} - e^{-j\frac{\pi}{N}k}\right)}
= \frac{\sin\!\left(\pi k L / N\right)}{\sin\!\left(\pi k / N\right)}\, e^{-j\frac{\pi}{N}(L-1)k} \qquad (3.32)$$
It does not look much simpler! But if we look only at its magnitude (the front
part), we get about as simple as we can get in DSP, which is to say not a lot.
The DTFSC for a general square pulse signal of width L and period N samples
is now given by

$$C_k = \begin{cases} L & k = 0, \pm N, \pm 2N, \ldots \\[4pt] \dfrac{\sin\!\left(\pi k L / N\right)}{\sin\!\left(\pi k / N\right)} & \text{else} \end{cases} \qquad (3.33)$$
The numerator depends on the duty cycle and the denominator is only a function of the period. This function looks like the sinc function but it is not. It is the Dirichlet function, which is a periodic version of the sinc function.
In Chapter 2 we computed the spectrum of a square pulse signal. The
spectrum of that signal was a sinc function, whereas this one repeats because
this is a discrete signal. This is an important new development and worth
understanding.
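A short NumPy sketch (not from the text) confirms that the direct DTFS sum for a length-L pulse matches the Dirichlet-form magnitude of Eq. (3.33); the values of N and L below are chosen arbitrarily.

```python
import numpy as np

# Direct DTFS sum for a length-L square pulse vs. the Dirichlet magnitude.
N, L = 16, 4
n = np.arange(N)
x = (n < L).astype(float)                     # one period: L ones, then zeros

k = np.arange(1, N)                           # k = 0 handled separately (value L)
direct = np.abs([np.sum(x * np.exp(-1j * 2 * np.pi * kk * n / N)) for kk in k])
dirichlet = np.abs(np.sin(np.pi * k * L / N) / np.sin(np.pi * k / N))
print(np.allclose(direct, dirichlet))         # -> True
```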
Now we plot the coefficients of various such square pulse signals. These should be studied so you develop an intuitive feel for what happens as the sampling rate increases, as well as the effect of the duty cycle, i.e. the width of the pulse vs. the period. Note that all of the spectrums repeat with the sampling frequency.
Figure 3.33 – A square pulse has a Dirichlet response, as we see in the DTFSC of the signal. The width of the main lobe is inversely related to the width of the square pulse relative to the period.
Figure 3.34 – Spectrum of a periodic discrete square pulse signal. We see the DTFSC change as the width of the square pulse increases relative to the period. In (a) all we have are single impulses, and hence the response is a flat line. In (e), as the width gets larger, the DFSC take on an impulse-like shape.
Figure 3.34 shows the DFSC for various widths of the square pulse. As the pulses get wider, the response gets narrower, and when the pulse width is equal to the period, so that the time signal is all ones, we get an impulse train for a response. This is a very important effect to know.
The matrix form of the Fourier series
DFSC are computed in closed-form equations for homework problems only. For practical applications, we use Matlab and other digital devices. We will now look at a matrix method of computing the DFSC. Let’s first define a common term. It looks strange and confusing, but it is a very simple idea: we are using it to separate out the things that are constant.
$$W_{N_0} = e^{-j\frac{2\pi}{N_0}} \qquad (3.34)$$

For a given value of N_0 this term, called W (also known as the Twiddle factor), is a constant and is given a shorthand notation to make the equations easier to write. We can write the discrete Fourier series (DFS) and inverse discrete Fourier series (IDFS) using this factor as
$$\text{DFS:}\quad X[k] = \sum_{n=0}^{N_0-1} x[n]\, W_{N_0}^{nk}$$

$$\text{IDFS:}\quad x[n] = \frac{1}{N_0}\sum_{k=0}^{N_0-1} X[k]\, W_{N_0}^{-nk} \qquad (3.35)$$
In this form, the terms $W_{N_0}^{nk}$ and $W_{N_0}^{-nk}$ are the same as the complex exponentials $e^{-jk\Omega_0 n}$ and $e^{jk\Omega_0 n}$ we have been using. These terms can be pre-computed and stored to make computation quicker. They are the basic idea behind the FFT algorithm, which speeds up computation. We can set up the DFSC equation in matrix form by using the Twiddle factor and writing it in terms of the two indices, n and k. Now we write
$$C_k = \frac{1}{K_0}
\begin{bmatrix}
W^{-0\cdot 0} & W^{-0\cdot 1} & W^{-0\cdot 2} & W^{-0\cdot 3} \\
W^{-1\cdot 0} & W^{-1\cdot 1} & W^{-1\cdot 2} & W^{-1\cdot 3} \\
W^{-2\cdot 0} & W^{-2\cdot 1} & W^{-2\cdot 2} & W^{-2\cdot 3} \\
W^{-3\cdot 0} & W^{-3\cdot 1} & W^{-3\cdot 2} & W^{-3\cdot 3}
\end{bmatrix} x[n] \qquad (3.36)$$
Here we have assumed that K0 = 4. Each column represents the harmonic
index k and each row the time index, n. It takes 16 exponentiations, 16
multiplications and four summations to solve this equation. We will come
back to this matrix methodology again when we talk about DFT and FFT in
Chapter 5.
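A minimal NumPy sketch of this matrix computation (an illustration, not the book's code; the sign convention W = e^(−j2π/K_0) follows Example 3.10):

```python
import numpy as np

# Coefficients as a single matrix-vector product, Ck = (1/K0) * W @ x.
K0 = 4
x = np.array([0.0, 1.0, 2.0, 3.0])            # one period of the signal
n = np.arange(K0)
k = n.reshape(-1, 1)
W = np.exp(-1j * 2 * np.pi * k * n / K0)      # entry (k, n) is the Twiddle factor W^(k*n)
C = W @ x / K0                                # all K0 coefficients at once
print(np.round(C, 3))                         # matches np.fft.fft(x) / K0
```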
Summary of Chapter 3
1. An ideal discrete signal is generated by sampling a continuous signal with
an impulse train of desired sampling frequency. The time between the
impulses is called the sample time.
2. The sampling frequency is the inverse of sample time. The sampling
frequency of an analog signal should be greater than two times the highest
frequency in the signal of interest in order to accurately reconstruct the signal.
3. The fundamental period of a discrete periodic signal, given by N_0, must be an integer number of samples for the signal to be periodic in a discrete sense.
4. The fundamental discrete frequency of the signal, given by Ω_0, is equal to 2π/N_0. It represents the advance in radians per sample.
5. The period of the digital frequency is defined as an integer multiple of 2π. Harmonic discrete frequencies that differ by an integer multiple of 2π, such as Ω and Ω + 2πn, are identical.
6. Because such discrete harmonic frequencies are identical, we cannot use them to create a Fourier series representation. Instead we divide the range from 0 to 2π by N_0 and use these sub-frequencies as the basis set.
7. Hence there are only K_0 harmonics available to represent a discrete signal. The Fourier analysis is limited to these harmonics. The number of available harmonics for the Fourier representation, K_0, is exactly equal to the fundamental period of the signal, N_0.
8. Beyond the 2π range of harmonic frequencies, the discrete-time Fourier series coefficients (DFSC) repeat because the harmonics themselves are identical.
9. In contrast, the continuous-time signal coefficients are aperiodic and do not
repeat. This is because all harmonics of a continuous signal are unique.
10. The coefficients of a discrete periodic signal are discrete just as they are
for the continuous-time signals. The coefficients around the zero frequency
are called the principal fundamental spectrum.
11. The principal spectrum of a discrete periodic signal repeats with the sampling frequency, F_s.
12. Sometimes we can solve for the coefficients using closed-form solutions, but in a majority of cases matrix methods are used to find the coefficients of a signal.
13. The matrix method is easy to set up but is computationally intensive. Fast matrix methods are used to speed up the calculations. One such method is the FFT, or Fast Fourier Transform.
Copyright 2015 Charan Langton All Rights Reserved