ELT-43007 DIGITAL COMMUNICATION - Exercise 1, Spring 2017
Problem 1.
Given the simplicity of these three examples, we don’t have to follow the
whole mathematical procedure as in previous cases, but we can proceed in
a faster, more intuitive way. Let’s start once again by assigning arbitrary
input probabilities PX (x1) = q and PX (x2) = 1 - q.
a) Output probabilities are identical to input probabilities:
PY (y1) = q
PY (y2) = 1 - q
Output entropy is therefore H(Y) = -[q log2 q + (1 - q) log2 (1 - q)]
The conditional entropy is then (with the convention 0 log2 0 = 0):
H(Y|X) = -[q (1 log2 1 + 0 log2 0) + (1 - q) (0 log2 0 + 1 log2 1)] = 0
We can find the mutual information as I(X, Y) = H(Y) - H(Y|X) = H(Y), which is
maximum for q = 0.5 (since it is identical to the binary entropy function
seen in the lecture notes), so the capacity is C = 1 bit / channel use.
The physical interpretation of this case is simply that everything the source
sends into the channel reaches the destination without uncertainty, and no
information is lost.
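For a quick numerical confirmation of this case, here is a small, purely illustrative Python sketch (the helper name binary_entropy is just for illustration) that sweeps q and verifies that I(X,Y) = H(Y) peaks at 1 bit for q = 0.5:

```python
import numpy as np

def binary_entropy(q):
    """H2(q) in bits, using the convention 0*log2(0) = 0 (handled via clipping)."""
    q = np.clip(q, 1e-12, 1 - 1e-12)
    return -(q * np.log2(q) + (1 - q) * np.log2(1 - q))

# For the noiseless channel of case a), H(Y|X) = 0, so I(X;Y) = H(Y) = H2(q).
q = np.linspace(0.0, 1.0, 101)
mutual_info = binary_entropy(q)
print(mutual_info.max(), q[np.argmax(mutual_info)])
# prints a maximum of 1.0 bit, attained at q = 0.5
```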
b) Output probabilities are merely switched around:
PY (y1) = 1 - q
PY (y2) = q
Output entropy, conditional entropy, mutual information and channel
capacity are therefore just the same as in previous case:
H(Y) = -[q log2 q + (1 - q) log2 (1 - q)]
H(Y|X) = -[q (1 log2 1 + 0 log2 0) + (1 - q) (0 log2 0 + 1 log2 1)] = 0
I(X, Y) = H(Y) - H(Y|X) = H(Y) = -[q log2 q + (1 - q) log2 (1 - q)]
And therefore C = 1 for q = 0.5. The channel changes every input symbol
into the other, but this means that the destination can always know what
symbol was transmitted, without uncertainty, so once again no information
is lost.
c) In this case the destination always receives the same output symbol:
PY (y1) = 1
PY (y2) = 0
Output entropy is H(Y) = -[1 log2 1 + 0 log2 0] = 0
Conditional entropy is:
H(Y|X) = -[q (1 log2 1 + 0 log2 0) + (1 - q) (1 log2 1 + 0 log2 0)] = 0
Mutual information is then also I(X, Y) = H(Y) - H(Y|X) = 0 for any value
of the input symbol probabilities, and therefore the capacity is C = 0.
Because the destination invariably receives the same output symbol, there
is no way to tell which input symbol was transmitted. In other words, no
information ever goes through the channel, which is the physical meaning
of zero channel capacity.
Problem 2.
In order to determine the channel capacity, we usually derive an
expression for the average mutual information and maximize it with
respect to the source probabilities. However, there is an easier way to
determine the channel capacity if the channel has a certain symmetry, as
this usually makes the actual calculations much simpler to carry out. To
check the symmetry condition, it is convenient to write the conditional
probabilities into a conditional probability matrix P = {pij} = {PY|X(yi|xj)}.
If each row of P is a permutation of every other row and each column of P
is a permutation of every other column, then equally probable input symbols
maximize the mutual information. “Physically” this means that under such a symmetry
condition, H(Y|X) turns out to be independent of the source probabilities
and in addition, equally probable input symbols make the output symbols
also equally probable. Then clearly I(X, Y) = H(Y) - H(Y|X) is maximized.
For our channel, the conditional probability matrix is given by
    [ PY|X(y1|x1)   PY|X(y1|x2) ]   [ 1/3   1/6 ]
P = [ PY|X(y2|x1)   PY|X(y2|x2) ] = [ 1/3   1/6 ]
    [ PY|X(y3|x1)   PY|X(y3|x2) ]   [ 1/6   1/3 ]
    [ PY|X(y4|x1)   PY|X(y4|x2) ]   [ 1/6   1/3 ]
and so the wanted symmetry conditions are fulfilled.
As a consequence, PX (x1) = PX (x2) = 1/2 will maximize the average mutual
information. The output symbol probabilities are then the same for all
symbols (by the total probability formula, which is actually quite an
important result from basic probability theory):
PY (yi) = PY|X (yi|x1) PX (x1) + PY|X (yi|x2) PX (x2) = 1/3 ∙ 1/2 + 1/6 ∙ 1/2 = 1/4
So the mutual information (and the capacity in this case) is given by
I(X,Y) = H(Y) - H(Y|X)
       = log2(4) + Σ_{x∈ΩX} PX(x) Σ_{y∈ΩY} PY|X(y|x) log2 PY|X(y|x)
       = 2 + 1/2 (p1 log2 p1 + p1 log2 p1 + p2 log2 p2 + p2 log2 p2)
           + 1/2 (p2 log2 p2 + p2 log2 p2 + p1 log2 p1 + p1 log2 p1)
       = 2 + 2 p1 log2 p1 + 2 p2 log2 p2
       = 2 + (2/3) log2 (1/3) + (2/6) log2 (1/6)
       ≈ 0.0817 bits / channel use
       = CS
where p1 = 1/3 and p2 = 1/6 denote the two distinct transition probabilities of the channel.
In this example, the capacity is quite small (when compared to the
maximum capacity of 1 bit / channel use that could in general be achieved
with binary inputs). Can you explain that intuitively?
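One way to double-check the 0.0817 figure is to compute H(Y) and H(Y|X) directly from the conditional probability matrix above, assuming equiprobable inputs. The following Python sketch is purely illustrative:

```python
import numpy as np

# Columns correspond to inputs x1, x2; rows to outputs y1..y4 (as in the matrix above).
P = np.array([[1/3, 1/6],
              [1/3, 1/6],
              [1/6, 1/3],
              [1/6, 1/3]])
Px = np.array([0.5, 0.5])            # equiprobable inputs maximize I(X;Y) here
Py = P @ Px                          # total probability: every output has probability 1/4

H_Y = -np.sum(Py * np.log2(Py))                              # = log2(4) = 2
H_Y_given_X = -np.sum(Px * np.sum(P * np.log2(P), axis=0))   # ≈ 1.9183
print(H_Y - H_Y_given_X)             # ≈ 0.0817 bits / channel use
```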
In the figure below, the previous capacity
CS = 2 + 2 p1 log 2 ( p1 ) + 2 p2 log 2 ( p2 )
is depicted as a function of p1. Notice that p1 + p2 = 0.5 or p2 = 0.5 – p1
(why?), so there really is only one “free” variable describing the channel
quality in this case.
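The curve can be reproduced with a short, illustrative Python sketch that sweeps p1 over (0, 0.5) and sets p2 = 0.5 - p1:

```python
import numpy as np
import matplotlib.pyplot as plt

p1 = np.linspace(0.001, 0.499, 500)
p2 = 0.5 - p1
CS = 2 + 2 * p1 * np.log2(p1) + 2 * p2 * np.log2(p2)   # capacity in bits / channel use

plt.plot(p1, CS)
plt.xlabel("p1")
plt.ylabel("CS [bits / channel use]")
plt.grid(True)
plt.show()
# At p1 = 1/3 (our channel) CS ≈ 0.0817; at p1 = 0.25 all transition
# probabilities are equal and the channel is useless (CS = 0).
```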
Problem 3.
Let’s proceed as in the previous case by expressing the conditional probability
matrix for this channel:
    [ PY|X(y1|x1)   PY|X(y1|x2)   PY|X(y1|x3) ]   [ 1-p    0     p  ]
P = [ PY|X(y2|x1)   PY|X(y2|x2)   PY|X(y2|x3) ] = [  p    1-p    0  ]
    [ PY|X(y3|x1)   PY|X(y3|x2)   PY|X(y3|x3) ]   [  0     p    1-p ]
Also in this case the symmetry property holds, and therefore we know that
equally probable input symbols PX (x1) = PX (x2) = PX (x3) = 1/3 maximize
the mutual information, and output symbols will also be equiprobable with
PY (y1) = PY (y2) = PY (y3) = 1/3.
The output entropy is:
H(Y) = Σ_{y∈ΩY} PY(y) log2 [1 / PY(y)] = 3 ∙ (1/3) ∙ log2 3 = log2 3
The conditional entropy of the output given the input is:
H(Y|X) = Σ_{x∈ΩX} PX(x) Σ_{y∈ΩY} PY|X(y|x) log2 [1 / PY|X(y|x)]
= -PX (x1) [(1 - p) log2 (1 - p) + p log2 p + 0 log2 0]
-PX (x2) [0 log2 0 + (1 - p) log2 (1 - p) + p log2 p]
-PX (x3) [p log2 p + 0 log2 0 + (1 - p) log2 (1 - p)] =
= -[(1 - p) log2 (1 - p) + p log2 p]
And so the channel capacity (mutual information already maximized) is
C = max I(X,Y) = max [H(Y) - H(Y|X)] = log2 3 + (1 - p) log2 (1 - p) + p log2 p
Now, because 1 + (1 - p) log2 (1 - p) + p log2 p is the well-known capacity
CBSC of a binary symmetric channel, we can see that the capacity in this
case is really C = CBSC + log2 3 - 1 = CBSC + log2 (3/2), and it has its
maximum value for p = 0 or 1 and its minimum for p = 1/2 (same as CBSC).
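These extrema are easy to check numerically; the sketch below is purely illustrative (the function name is ours) and uses the usual convention 0 log2 0 = 0:

```python
import numpy as np

def ternary_symmetric_capacity(p):
    """C = log2(3) + (1-p)*log2(1-p) + p*log2(p), with 0*log2(0) = 0 via clipping."""
    p = np.clip(p, 1e-12, 1 - 1e-12)
    return np.log2(3) + (1 - p) * np.log2(1 - p) + p * np.log2(p)

for p in (0.0, 0.1, 0.5, 0.9, 1.0):
    print(f"p = {p:.1f}   C = {ternary_symmetric_capacity(p):.3f} bits / channel use")
# p = 0 and p = 1 give C ≈ 1.585, while p = 0.5 gives the minimum C ≈ 0.585
```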
We can see that in the first special case with p = 0, the channel effectively
becomes as in the following picture, where the received symbol is always
identical to the transmitted symbol, and the channel capacity is now the
maximum possible, C = log2 3 ≈ 1.585 bits / channel use.
Also when p = 1 the channel capacity is at its maximum C = log2 3 ≈ 1.585. That’s
because there is still a one-to-one relation between each input symbol and
output symbol, meaning that the destination has no uncertainty over what
symbol was transmitted by the source. In this case the channel becomes
the following:
For p = 0.5 we have the minimum capacity C = 0.585, corresponding to
maximum conditional entropy, i.e. uncertainty. However, this minimum is
not zero as in the previous problems. This is because even in the worst case,
the destination always knows from the received symbol that one of the
three possible input symbols cannot have been transmitted.
Note: in this example the maximum achievable channel capacity is higher
than 1 bit per channel use (or bit per symbol). This is because we
have 3 possible input symbols, so that every input symbol is itself worth
more than 1 bit of self-information.
Problem 4.
In each possible channel state, the signal-to-noise ratio depends on the
channel gain:
SNRi = SRi / N = SX gi^2 / (N0 W)
where the received signal power SRi is simply equal to the transmitted
signal power SX multiplied by the power gain gi^2 (square of the amplitude
gain). The bandwidth is known, and so the noise power is:
N = N0 W = 2 ∙ 0.5 ∙ 10^-9 W/Hz ∙ 30 kHz = 3 ∙ 10^-5 W = 0.03 mW
The instantaneous channel capacity Ci in each state is given by the
Shannon formula:
Ci = W log2 (1 + SNRi)
We can then calculate the instantaneous channel capacities from the SNR:
SNR1 = 10 mW ∙ (0.05)^2 / 0.03 mW = 0.8333 ≈ −0.8 dB
SNR2 = 10 mW ∙ (0.5)^2 / 0.03 mW = 83.33 ≈ 19 dB
SNR3 = 10 mW ∙ (1)^2 / 0.03 mW = 333.3 ≈ 25 dB
C1 = 30 kHz ∙ log2 ( 1 + 0.8333 ) = 30 kHz ∙ 0.874 bps/Hz = 26.2 kbps
C2 = 30 kHz ∙ log2 ( 1 + 83.33 ) = 30 kHz ∙ 6.398 bps/Hz = 191.9 kbps
C3 = 30 kHz ∙ log2 ( 1 + 333.3 ) = 30 kHz ∙ 8.385 bps/Hz = 251.6 kbps
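These numbers are easy to reproduce; the following Python sketch (variable names are just illustrative) computes the per-state SNRs and instantaneous capacities:

```python
import numpy as np

W = 30e3                          # bandwidth [Hz]
N0 = 1e-9                         # noise PSD used as N = N0*W (= 2 * 0.5e-9 W/Hz, as above)
Sx = 10e-3                        # transmitted power [W]
g = np.array([0.05, 0.5, 1.0])    # amplitude gains of the three channel states

N = N0 * W                        # noise power = 3e-5 W = 0.03 mW
SNR = Sx * g**2 / N               # 0.8333, 83.33, 333.3
C = W * np.log2(1 + SNR) / 1e3    # instantaneous capacities in kbps

print("SNR [dB]:", 10 * np.log10(SNR))   # ≈ -0.8, 19.2, 25.2 dB
print("C [kbps]:", C)                    # ≈ 26.2, 191.9, 251.6
```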
a) The ergodic capacity can be found by calculating the statistical average
of the channel capacity given by the Shannon formula:
C = Σi W log2(1 + SNRi) P(gi) = C1 P(g1) + C2 P(g2) + C3 P(g3) =
  = (26.2 ∙ 0.1 + 191.9 ∙ 0.5 + 251.6 ∙ 0.4) kbps = 199.2 kbps
b) The average SNR would be:
SNR = SNR1 P(g1) + SNR2 P(g2) + SNR3 P(g3) =
= 0.8333 ∙ 0.1 + 83.33 ∙ 0.5 + 333.3 ∙ 0.4 = 175.1
and so a corresponding non-fading channel with such SNR would have the
following capacity:
C = 30 kHz ∙ log2 ( 1 + 175.1 ) = 30 kHz ∙ 7.46 bps/Hz = 223.8 kbps
We can note here that the flat fading channel has an (ergodic) capacity
almost 25 kbps smaller than that of a non-fading channel with the same
bandwidth and average SNR.
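The ergodic capacity and the equivalent non-fading comparison can be checked with a small, self-contained sketch (again purely illustrative):

```python
import numpy as np

W = 30e3                                   # bandwidth [Hz]
SNR = np.array([0.8333, 83.33, 333.3])     # per-state SNRs from above
C = W * np.log2(1 + SNR) / 1e3             # per-state capacities [kbps]
P_state = np.array([0.1, 0.5, 0.4])        # state probabilities P(g1), P(g2), P(g3)

C_ergodic = np.sum(P_state * C)            # ≈ 199.2 kbps
SNR_avg = np.sum(P_state * SNR)            # ≈ 175.1
C_awgn = W * np.log2(1 + SNR_avg) / 1e3    # ≈ 223.8 kbps
print(C_ergodic, C_awgn, C_awgn - C_ergodic)   # gap of roughly 25 kbps
```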
c) Outage represents the case when the instantaneous SNR of a fading
channel is below the minimum required for correct (i.e. error-free, or more
precisely "with arbitrarily small bit error rate") detection of the signal at
the receiver, so we can say Pout = P(SNR < SNRmin). When outage occurs,
the receiver expects the transmitter to re-transmit the missed data at a later
time. This corresponds to the capacity being effectively scaled by the
factor (1 - Pout).
Assuming that the transmitter works at a fixed bitrate tailored to
SNRmin, the average transmitted bit rate is:
Cout = (1 - Pout) W log2 ( 1 + SNRmin)
Note that in our case we know the probability of each channel
state, so the outage probability becomes a design choice.
In the first case we require Pout < 10% (i.e. strictly smaller than 0.1), but
considering that the worst channel state g1 already has P(g1) = 0.1, this
means that we must require SNRmin to be equal to the SNR in this state,
which we have already found to correspond to a capacity C1 = 26.2 kbps.
In other words, our receiver must be capable of correctly decoding the
signal received in all three possible states, and in order to do so the
transmitter is forced to transmit data according to the worst case, i.e. with
bitrate equal to C1. On the other hand, at this point outage never actually
occurs, because in all states SNR ≥ SNRmin, and therefore Pout effectively
becomes 0. This means that also Cout = C1 = 26.2 kbps.
In the second case we require Pout = 10%. Because this equals the
probability of the channel state g1, we can accept the channel being in
outage every time it is in this state, and so we don't have to support the
corresponding SNR1. We only need to support the SNR
values corresponding to the other two (better) states, and so SNRmin = SNR2.
The corresponding capacity is then C2 = 191.9 kbps. However, the average
transmitted bitrate is now reduced by a fraction equal to the outage
probability, and Cout = (1 - Pout) C2 = 172.75 kbps.
In the third case we require Pout = 60%. This corresponds to the channel
being in either state g1 or g2, meaning that we only need to support correct
detection in state g3, and so SNRmin = SNR3. In this case the corresponding
capacity when detection is working correctly is C3 = 251.6 kbps, but this
happens only 40% of the time, and therefore the average transmitted
bitrate is Cout = (1 - Pout) C3 = 0.4 ∙ 251.6 kbps ≈ 100.6 kbps.
By comparing the three effective average transmitted bitrates (a quantity
often referred to as "capacity with outage") we can see that the best design
choice is to require Pout = 10%, resulting in the highest Cout = 172.75 kbps.
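As a final illustrative check, the three design choices can be compared numerically from the per-state SNRs and probabilities given above:

```python
import numpy as np

W = 30e3                                   # bandwidth [Hz]
SNR = np.array([0.8333, 83.33, 333.3])     # per-state SNRs (states g1, g2, g3)
P_state = np.array([0.1, 0.5, 0.4])        # state probabilities

# Design choice: tolerate outage in the k worst states (k = 0, 1, 2).
for k in range(3):
    P_out = P_state[:k].sum()              # probability of the tolerated outage states
    SNR_min = SNR[k]                       # weakest state that must still be decodable
    C_out = (1 - P_out) * W * np.log2(1 + SNR_min) / 1e3
    print(f"Pout = {P_out:.0%}:  Cout ≈ {C_out:.1f} kbps")
# Pout = 0%:  Cout ≈ 26.2 kbps
# Pout = 10%: Cout ≈ 172.7 kbps
# Pout = 60%: Cout ≈ 100.6 kbps
```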