Partial Information Capacity for Interference Channels

Vaneet Aggarwal, Department of ELE, Princeton University, Princeton, NJ 08540. [email protected]
Salman Avestimehr, Department of ECE, Cornell University, Ithaca, NY 14853. [email protected]
Ashutosh Sabharwal, Department of ECE, Rice University, Houston, TX 77005. [email protected]
Abstract—In distributed wireless networks, nodes often do not
know the complete channel gain information of the network.
Thus, they have to compute their transmission and reception
parameters in a distributed fashion. In this paper, we consider
the cases when each transmitter knows only the channel gain to
its intended receiver, and when each transmitter knows the
channel gains of all links that are at most two hops distant from
it, for the S-channel chain, one-to-many and many-to-one interference
channels. We formalize the concept of partial information capacity
and find that, when each transmitter knows only the channel gain
to its intended destination, the partial information capacity is
achieved by the standard scheduling algorithm that schedules
non-interfering users, which is well studied in the literature.
When each transmitter knows all links that are at most two hops
distant from it, the partial information capacity is achieved by
scheduling subgraphs for which there exists a universally optimal
strategy with two-hop knowledge at the transmitters.
I. INTRODUCTION
One of the fundamental challenges in mobile wireless
networks is lack of complete network state information with
any single node. In fact, the common case is when each node
has a partial view of the network, which is different from
other nodes in the network. As a result, the nodes have to
make distributed decisions based on their own local view of
the network. One of the key questions that then arises is how
often distributed decisions lead to globally optimal decisions.
The study of distributed decisions and their impact on
global information-theoretic sum-rate performance was initiated in [1] for two special cases of deterministic channels,
and then extended to the Gaussian versions of those topologies
in [2]. The authors proposed a protocol abstraction that
allows one to narrow down to relevant cases of local view
per node: a message-passing protocol,
in which both transmitters and receivers participate to forward
messages regarding network state information to other nodes in
the network. The local message passing allows the information
to trickle through the network; the longer the protocol
proceeds, the more the nodes can learn about the network state.
More precisely, the protocol proceeds in rounds, where each
round consists of a message by each transmitter followed by
a message in response by each receiver. Half rounds are also
allowed, where only transmitters send a message. One of the
main results in [2] is that with 1.5 rounds of messaging, the
gap between network capacity based on distributed decisions
and that based on centralized decisions can be arbitrarily large
for a three-user double-Z channel (two Z-channels stacked on
each other). Thus, for some channel gains, decisions based on
the nodes’ local view can lead to highly suboptimal network
operation. A complete characterization of all connectivities
in single-hop K-user deterministic interference channels [3–7]
for which there exists a universally optimal strategy with 1.5
rounds of messaging was further provided in [8]. A scheme
is considered to be universally optimal if, for all channel
gains, the distributed decisions lead to a sum rate that equals
the sum capacity with full information. With 1.5 rounds of
messaging, a transmitter knows all channel gains which are
two hops away from it and a receiver knows all gains which
are three hops away from it. So if the network diameter is
larger than three, then no node in the network has full information about the network. Thus, while the capacity of general
interference channel is still unknown (even with full network
state information), we can characterize which topologies can
be universally optimal with partial information.
In this paper, we take a step forward to understand what
maximum percentage of sum capacity can be guaranteed when
the exact sum capacity cannot be achieved. For this, we
first define partial information capacity (PIC) which is the
maximum percentage of sum capacity that can be achieved.
We consider two models of partial information, with increasing
partial knowledge, for the MAC, the S-channel chain, and the
one-to-many and many-to-one interference channels. The first
model is when each transmitter knows the direct channel gain
to its intended receiver. In this case, we find
that scheduling non-interfering users is optimal. Thus, this is
the first information-theoretic result that proves optimality of
scheduling in these channels from a sum-capacity viewpoint.
The second model is when each transmitter knows all the
channel gains that are at most two hops distant from it. In
[8], we found that a universally optimal strategy exists only
for those networks in which all connected components are
either fully connected or one-to-many connected. Thus,
we know that in these cases, partial information capacity is
1. However, the above condition is not satisfied in the cases
of S-channel chain and the many-to-one connectivity. Thus,
partial information capacity is less than 1. We find the partial
information capacity in the two cases to be 2/3 and (K−1)/(2K−3), respectively, for
number of users K ≥ 3. This capacity can be achieved
by scheduling over connectivities in which all connected
components either have full connectivity or have one-to-many
connectivity. This result proves optimality
of this new scheduling for these cases. This is an achievability
approach for general interference channels which can improve
the achieved sum rate by a significant percentage in many cases.
For instance, in any three-user interference channel with at
most two connected components, this strategy achieves a partial
information capacity of at least 2/3, while the usual scheduling
approach that schedules non-interfering users cannot achieve
more than 1/2.
The rest of the paper is organized as follows. In Section
II, we give some preliminaries and definitions that will be
used throughout the paper. We also define partial information
capacity and give an example for the MAC in this section.
In Sections III, IV and V, we present our main results
on the S-channel chain, the one-to-many interference channel and
the many-to-one interference channel, respectively. We discuss the
scheduling-based achievability in Section VI, and Section VII
concludes the paper.
II. PRELIMINARIES AND DEFINITIONS
A. Channel Model
Consider a deterministic interference channel with K transmitters
and K receivers, where the input at the k-th transmitter in
time i can be written as X_k[i] = [X_k1[i] X_k2[i] ... X_kq[i]]^T,
k = 1, 2, ..., K, such that X_k1[i] and X_kq[i] are the most
and the least significant bits, respectively. The received signal
of user j, j = 1, 2, ..., K, at time i is denoted by the
vector Y_j[i] = [Y_j1[i] Y_j2[i] ... Y_jq[i]]^T. Associated with
each transmitter k and receiver j is a non-negative integer
n_kj that defines the number of bit levels of X_k observed
at receiver j. The maximum level supported by any link is
q = max_{j,k}(n_kj). The network can be represented by a square
matrix H whose (j,k)-th entry is n_kj. Note that H need not be
symmetric.
Specifically, the received signal Y_j[i] is given by

Y_j[i] = ⊕_{k=1}^{K} S^{q−n_kj} X_k[i]          (1)

where ⊕ denotes the XOR operation, and S^{q−n_kj} is a q × q
shift matrix with entries S_{m,n} that are non-zero only for
(m, n) = (q − n_kj + n, n), n = 1, 2, ..., n_kj.
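The shift-matrix operation in (1) can be made concrete with a small numerical sketch. The code below is illustrative only (the function names `shift_matrix` and `received_signal` are ours, not from the paper): it builds S^{q−n_kj} and computes one receiver's signal by XOR-ing the shifted bit vectors.

```python
def shift_matrix(q, n):
    """q x q shift matrix S^(q-n): the top n bits of the input reappear
    at the bottom n output levels; all other entries are zero."""
    S = [[0] * q for _ in range(q)]
    for col in range(n):              # 0-indexed columns 0..n-1
        S[q - n + col][col] = 1       # (m, n) = (q - n_kj + n, n) in the paper
    return S

def received_signal(X, gains, q):
    """Y_j = XOR over k of S^(q - n_kj) X_k for a single receiver j.
    X: list of K bit vectors (length q, most significant bit first);
    gains: list of K link gains n_kj to this receiver."""
    Y = [0] * q
    for Xk, nkj in zip(X, gains):
        S = shift_matrix(q, nkj)
        for m in range(q):
            Y[m] ^= sum(S[m][c] * Xk[c] for c in range(q)) % 2
    return Y

# Example: q = 3, two transmitters with gains 3 and 1 to this receiver.
X1, X2 = [1, 0, 1], [1, 1, 0]
print(received_signal([X1, X2], [3, 1], q=3))   # -> [1, 0, 0]
```

The gain-3 link passes X1 unshifted, while the gain-1 link delivers only the top bit of X2 at the lowest level, matching the bit-level intuition of the deterministic model.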
We assume that there is a direct link between every
transmitter Ti and its intended receiver Di. On the other hand,
if a cross-link between transmitter i and receiver j does not
exist, then H_ij ≡ 0. Given a network, its connectivity is the
set of edges E = {(Ti, Dj)} such that the link Ti − Dj is not
identically zero. The set of network states, G, is then the set
of all weighted graphs defined on E. Note that the channel
gain can be zero, but is not guaranteed¹ to be, if the node pair
(Ti, Dj) ∈ E.
We will mainly consider three connectivities: the S-channel
chain, one-to-many connectivity and many-to-one connectivity.
In the S-channel chain, (Ti, Dj) ∉ E for all j ≠ i and j ≠ i + 1.
In one-to-many connectivity, (Ti, Dj) ∉ E for all i ≠ 1 and
j ≠ i. In many-to-one connectivity, (Ti, Dj) ∉ E for all j ≠ K
and j ≠ i.
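For concreteness, the three connectivities can be written down as edge sets. This is a sketch with our own function names, using 1-indexed user labels and pairs (i, j) for the link from transmitter i to receiver j:

```python
def s_chain_edges(K):
    """S-channel chain: (T_i, D_j) present only for j = i or j = i + 1."""
    return {(i, j) for i in range(1, K + 1) for j in (i, i + 1) if j <= K}

def one_to_many_edges(K):
    """Direct links plus cross-links from transmitter 1 to every receiver."""
    return {(i, i) for i in range(1, K + 1)} | {(1, j) for j in range(1, K + 1)}

def many_to_one_edges(K):
    """Direct links plus cross-links from every transmitter to receiver K."""
    return {(i, i) for i in range(1, K + 1)} | {(i, K) for i in range(1, K + 1)}

print(sorted(s_chain_edges(3)))   # -> [(1, 1), (1, 2), (2, 2), (2, 3), (3, 3)]
```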
B. Partial Information Capacity
For each user k, let the message index m_k be uniformly
distributed over {1, 2, ..., 2^{nR_k}}. The message is encoded
as X_k^n using the encoding function e_k(m_k | N_k, SI) :
{1, 2, ..., 2^{nR_k}} → {0, 1}^{nq}, which depends on the local
view, N_k, and side information about the network, SI. The
message is decoded at the receiver using the decoding function
d_k(Y_k^n | N_k', SI) : {0, 1}^{nq} → {1, 2, ..., 2^{nR_k}}, where N_k' is
the receiver's local view and SI is the side information. The
corresponding probability of decoding error λ_k(n) is defined
as Pr[m_k ≠ d_k(Y_k^n | N_k', SI)]. A rate tuple (R_1, R_2, ..., R_K)
is said to be achievable if there exists a sequence of codes
such that the error probabilities λ_1(n), ..., λ_K(n) go to zero
as n goes to infinity for all network states consistent with the
side information. The sum capacity is the supremum of Σ_i R_i
over all possible encoding and decoding functions.
We will now define partial information rate and partial
information capacity (PIC), which will be used throughout the
paper. These notions capture what fraction of the global-knowledge
sum capacity can be achieved with partial information.
Definition 1. A partial information rate of α is said to be
achievable in a set of network states with partial information
if there exists a strategy, used by each transmitter i based
on its local information N_i and side information SI, such
that the following holds. The strategy yields a sequence of
codes having rates R_i at transmitter i such that the error
probabilities at the receivers, λ_1(n), ..., λ_K(n), go to zero as
n goes to infinity, satisfying

Σ_i R_i ≥ α C_sum

for all sets of network states consistent with the side
information. Here C_sum is the sum capacity of the whole
network with full information.
Definition 2. Partial information capacity is defined as the
supremum over all achievable partial information rates.
In [2], we defined the notion of a universally optimal strategy.
A universally optimal strategy exists for a connectivity if and
only if the partial information capacity for that connectivity
is 1. In [8], we further answered the question of existence of
a universally optimal strategy in a general K-user interference
channel, which can be stated as below.
¹The model is inspired by fading channels, where the existence of a link
is based on its average channel gain. On average, the link gain may be
above the noise floor, but its instantaneous value can be below the noise floor.
Theorem 1 ([8]). There exists a universally optimal strategy for
a K-user interference channel, when each transmitter knows
the link gains of all links that are at most two hops distant
from it, if and only if all the connected components of the
topology are in the one-to-many configuration or the fully-connected
configuration.
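The condition in Theorem 1 is easy to check mechanically. The sketch below uses our own helper names (not from the paper): components are taken over the undirected graph that ties user i to user j whenever link (T_i, D_j) exists, and a component passes if it is fully connected or if at most one transmitter holds all its cross-links (one-to-many).

```python
def components(K, edges):
    """Connected components of users 1..K, where a link (T_i, D_j) ties i to j."""
    parent = list(range(K + 1))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path halving
            x = parent[x]
        return x
    for i, j in edges:
        parent[find(i)] = find(j)
    comps = {}
    for u in range(1, K + 1):
        comps.setdefault(find(u), set()).add(u)
    return list(comps.values())

def universally_optimal_exists(K, edges):
    """Theorem 1 condition: every connected component is fully connected
    or in a one-to-many configuration (one transmitter has all cross-links)."""
    for comp in components(K, edges):
        cross = {(i, j) for (i, j) in edges if i != j and i in comp and j in comp}
        fully = all((i, j) in edges for i in comp for j in comp)
        one_to_many = len({i for (i, j) in cross}) <= 1
        if not (fully or one_to_many):
            return False
    return True

# 3-user one-to-many passes; 3-user S-chain and many-to-one do not.
print(universally_optimal_exists(3, {(1, 1), (2, 2), (3, 3), (1, 2), (1, 3)}))  # -> True
print(universally_optimal_exists(3, {(1, 1), (1, 2), (2, 2), (2, 3), (3, 3)}))  # -> False
```

These outcomes match the paper: the one-to-many channel has PIC 1 (Theorem 5), while the S-chain and many-to-one channels with K ≥ 3 do not (Theorems 3 and 7).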
In this paper, we take the side information SI to be the
network connectivity, and the local information at a transmitter
to be either only the direct link gain to its intended receiver,
or knowledge of the link gains of all links that are at most
two hops distant from it. In both cases, a receiver's information
is assumed to be the union of the information of all the
transmitters to which it is connected.
C. Network 1: Multiple Access Channel
We now describe the partial information capacity of a K-user
multiple access channel when each transmitter only knows the
link gain from itself to the receiver (user i has link gain n_i).
In this case, the partial information capacity is 1/K, which can
be achieved by simply scheduling one user at a time over a
total of K time-slots. We will now describe the converse
argument. For this, we assume that K > 1, as otherwise the
result holds trivially. Suppose a partial information rate of
α > 1/K is achievable. Then,

R_i > (1/K) n_i   ∀ 1 ≤ i ≤ K.          (2)
Now, we will show that this rate-tuple cannot be achieved.
With the capacity bound of full information,
R_K ≤ max_{1≤i≤K} n_i − Σ_{i=1}^{K−1} R_i          (3)
    < max_{1≤i≤K} n_i − Σ_{i=1}^{K−1} (1/K) n_i.          (4)

Since the K-th transmitter does not know the value of n_i for
1 ≤ i ≤ K − 1,

R_K < min_{n_i, 1≤i≤K−1} [ max_{1≤i≤K} n_i − (1/K) Σ_{i=1}^{K−1} n_i ]          (5)
    ≤ n_K − ((K−1)/K) n_K          (6)
    = (1/K) n_K.          (7)

Thus, R_K < n_K/K, which contradicts (2); hence the
assumption of a partial information rate of α > 1/K is wrong,
which proves the converse on the partial information capacity.

Since all the links are at most two hops from each transmitter,
the partial information capacity in the case when each
transmitter knows all links that are at most two hops distant
from it is 1.

III. NETWORK 2: S-CHANNEL CHAIN

In this section, we consider the S-channel chain when each
transmitter knows only the channel gain to its own receiver,
and when each transmitter knows all links that are at most
two hops distant from it.

A. Only direct link knowledge

Theorem 2. The partial information capacity of an S-channel
chain when each transmitter knows only its direct link to the
receiver is

PIC = 1 if K = 1, and 1/2 if K > 1.          (8)

Proof: When K = 1, the interference channel is a point-to-point
channel with global information, and the same capacity
as with global information can be achieved.

We will first prove the converse for K > 1. For this, it
suffices to consider K = 2, where the interference channel
reduces to an S-channel. We will prove the result by contradiction.
Suppose a partial information rate of α > 1/2 is achievable. Then,

R_i > (1/2) n_i   ∀ 1 ≤ i ≤ 2.          (9)

Now, we will show that this rate-tuple cannot be achieved.
The compound sum capacity of the two-user S-channel when
none of the nodes knows n_12 is an outer bound. Thus,

R_1 + R_2 ≤ min_{n_12} max(n_11, n_22, n_12, n_11 + n_22 − n_12)          (10)
          ≤ max(n_11, n_22).          (11)

This can be rewritten as

R_1 ≤ max(n_11, n_22) − R_2          (12)
    < max(n_11, n_22) − n_22/2.          (13)

Since the 1st transmitter does not know the value of n_22,

R_1 < min_{n_22} [max(n_11, n_22) − n_22/2]          (14)
    ≤ n_11 − n_11/2          (15)
    = (1/2) n_11.          (16)

Thus, R_1 < n_11/2, showing the contradiction and proving
that the partial information capacity is ≤ 1/2. The outer bound
for the 2-user interference channel also holds for the K-user
S-chain, since the partial information capacity when the remaining
K − 2 users have zero channel gains to their receivers (which
is globally known at the transmitters) is an outer bound on the
required partial information capacity.

For the achievability, consider data transfer over two
time-slots. In the first time-slot, even users transmit data,
while in the second time-slot, odd users transmit data. This
way, a sum rate of Σ_{i=1}^{K} n_ii/2 is achieved, which is at least
half the sum capacity with global information, and thus a partial
information capacity of 1/2 can be achieved.
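The even/odd schedule above can be sanity-checked in code. This is a sketch with our own helper names; `interference_free` encodes the fact that in the S-chain transmitter i interferes only at receiver i + 1, so a set of simultaneously active users must contain no two consecutive indices.

```python
def even_odd_slots(K):
    """Two-slot schedule for the K-user S-channel chain: even-indexed users
    transmit in slot 0, odd-indexed users in slot 1."""
    return [[i for i in range(1, K + 1) if i % 2 == 0],
            [i for i in range(1, K + 1) if i % 2 == 1]]

def interference_free(active):
    """Active set (sorted) sees no interference iff no two users are consecutive."""
    return all(b - a > 1 for a, b in zip(active, active[1:]))

slots = even_odd_slots(5)
assert all(interference_free(s) for s in slots)
# Each user is on in exactly 1 of 2 slots, so the schedule attains
# sum_i n_ii / 2, which is >= Csum / 2 since R_i <= n_ii for every user.
print(slots)   # -> [[2, 4], [1, 3, 5]]
```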
B. Two hop link knowledge
Theorem 3. The partial information capacity of an S-channel
chain when each transmitter knows all links that are within
two hops distant from it is

PIC = 1 if K ≤ 2, and 2/3 if K > 2.          (17)

Proof: That the partial information capacity is 1 for K ≤ 2
follows from Theorem 1. So, we will only consider K ≥ 3 in
the proof.
For the converse, we take K = 3, with all the remaining
direct channel gains equal to 0 and known to all. Suppose all
the channel gains equal n. For α > 2/3, R_1 < n/3, since from
transmitter 1's point of view it can happen that n_33 = 0, in
which case it has to assume that the second transmitter is
sending at a rate > (2/3)n. Thus, the sum rate is
< (4/3)n = (2/3)(2n), which contradicts α > 2/3.
For the achievability, consider data transfer over three
time-slots. In the i-th time-slot, 1 ≤ i ≤ 3, all users with
index congruent to i modulo 3 remain silent. In each time-slot,
the effective topology has all its connected subcomponents in
the one-to-many configuration or in the fully-connected
configuration, and each user uses the optimal strategy for this
case. Let (R_1, R_2, ..., R_K) be any point in the global
information capacity region. In the i-th time-slot, a sum rate
of at least Σ_{1≤j≤K, j ≢ i (mod 3)} R_j can be achieved.
Hence, an effective sum rate of at least (2/3) Σ_{1≤i≤K} R_i
can be achieved, thus proving the achievability of a partial
information rate of 2/3.
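The three-slot schedule above can be checked numerically: silencing every third user splits the S-chain into connected components of at most two users (for which Theorem 1 guarantees a universally optimal strategy), and each user stays on in two of the three slots. The helper names below are ours.

```python
def three_slot_active_sets(K):
    """Three-slot schedule: in slot i (i = 1, 2, 3), users whose index is
    congruent to i mod 3 stay silent; everyone else transmits."""
    return [[u for u in range(1, K + 1) if u % 3 != i % 3] for i in (1, 2, 3)]

def max_component_size(active):
    """Longest run of consecutive active users, i.e. the largest connected
    component of the effective S-chain topology."""
    run, best, prev = 0, 0, None
    for u in active:
        run = run + 1 if prev == u - 1 else 1
        best = max(best, run)
        prev = u
    return best

slots = three_slot_active_sets(7)
assert all(max_component_size(s) <= 2 for s in slots)          # components of <= 2 users
assert all(sum(u in s for s in slots) == 2 for u in range(1, 8))  # each user on in 2 of 3 slots
```

Each user being active in t = 2 of d = 3 slots is exactly what yields the partial information rate t/d = 2/3.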
IV. NETWORK 3: ONE-TO-MANY INTERFERENCE CHANNEL
In this section, we consider a one-to-many interference channel
when each transmitter knows only the channel gain to its own
receiver, and when each transmitter knows all links that are
at most two hops distant from it.
A. Only direct link knowledge
Theorem 4. The partial information capacity of a one-to-many
interference channel when each transmitter knows only its
direct link to the receiver is

PIC = 1 if K = 1, and 1/2 if K > 1.          (18)
Proof: When K = 1, the interference channel is a point-to-point
channel with global information, and the same capacity
as with global information can be achieved.

The converse for K > 1 follows along the same lines as the
proof of Theorem 2 and is thus omitted.

For the achievability, consider data transfer over two
time-slots. In the first time-slot, only the first user transmits,
while in the second time-slot, all but the first user transmit
data. This way, a sum rate of Σ_{i=1}^{K} n_ii/2 is achieved, which
is at least half the sum capacity with global information, and
thus a partial information capacity of 1/2 can be achieved.
B. Two hop link knowledge
Theorem 5. The partial information capacity of a one-to-many
interference channel when each transmitter knows all the links
that are within two hops distant from it is

PIC = 1 for all K > 0.          (19)

Proof: This follows directly from Theorem 1, since there
exists a universally optimal strategy for this topology.
V. NETWORK 4: MANY-TO-ONE INTERFERENCE CHANNEL
In this section, we consider a many-to-one interference
channel when each transmitter knows only the channel gain
to its own receiver, and when each transmitter knows all links
that are at most two hops distant from it.
A. Only direct link knowledge
Theorem 6. The partial information capacity of a many-to-one
interference channel when each transmitter knows only its
direct link to the receiver is

PIC = 1 if K = 1, and 1/2 if K > 1.          (20)
Proof: When K = 1, the interference channel is a point-to-point
channel with global information, and the same capacity
as with global information can be achieved.

The converse for K > 1 follows along the same lines as the
proof of Theorem 2 and is thus omitted.

For the achievability, consider data transfer over two
time-slots. In the first time-slot, only the K-th user transmits,
while in the second time-slot, all but the K-th user transmit
data. This way, a sum rate of Σ_{i=1}^{K} n_ii/2 is achieved, which
is at least half the sum capacity with global information, and
thus a partial information capacity of 1/2 can be achieved.
B. Two hop link knowledge
Theorem 7. The partial information capacity of a many-to-one
interference channel when each transmitter knows all the links
that are within two hops distant from it is

PIC = 1 if K ≤ 2, and (K−1)/(2K−3) if K > 2.          (21)

Proof: That the partial information capacity is 1 for K ≤ 2
follows from Theorem 1. So, we will only consider K ≥ 3 in
the proof.

We will first prove the converse by contradiction. Suppose
that a partial information rate of α > (K−1)/(2K−3) can be
achieved. Then, R_K > ((K−1)/(2K−3)) n_KK, since user K has
to send at this rate when all other direct channel gains are 0
and are not known to user K. Now, suppose all the channel
gains equal n. In this case,

R_i < n − ((K−1)/(2K−3)) n = ((K−2)/(2K−3)) n for 1 ≤ i ≤ K − 1.

Thus, since R_{K−1} + R_K ≤ n, the sum rate achieved is less than

((K−2)·(K−2)/(2K−3) + 1) n = ((K−1)/(2K−3)) (K−1)n.

Since the optimal sum rate is (K−1)n, a fraction greater than
(K−1)/(2K−3) of it cannot be achieved, showing the contradiction.
Hence, the partial information capacity cannot be > (K−1)/(2K−3).

For the achievability, consider data transfer over 2K − 3
time-slots. In time-slot i, 1 ≤ i ≤ K − 1, users i and K
transmit. They form an S-channel and use the optimal strategy
for this channel with partial information. In the remaining
K − 2 time-slots, users 1, ..., K − 1 transmit at full rate. Let
(R_1, R_2, ..., R_K) be any point in the global information
capacity region. In the i-th time-slot, where 1 ≤ i ≤ K − 1,
a sum rate of at least R_i + R_K can be achieved, while in each
of the remaining K − 2 time-slots, a sum rate of at least
Σ_{1≤i≤K−1} R_i can be achieved. Thus, the sum capacity within
a factor of (K−1)/(2K−3) can be achieved.
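The 2K − 3 slot schedule in the achievability proof can be enumerated directly. The sketch below (our own function name) confirms that every user is active in exactly K − 1 of the 2K − 3 slots, matching the factor (K−1)/(2K−3).

```python
def many_to_one_schedule(K):
    """2K-3 slot schedule: in slot i (1 <= i <= K-1), users i and K transmit,
    forming a 2-user S-channel; in the remaining K-2 slots, users 1..K-1
    transmit at full rate (no interference at their own receivers)."""
    pair_slots = [[i, K] for i in range(1, K)]
    full_slots = [list(range(1, K))] * (K - 2)
    return pair_slots + full_slots

K = 5
slots = many_to_one_schedule(K)
assert len(slots) == 2 * K - 3
# Every user is active in K-1 of the 2K-3 slots: user K in the K-1 pair slots,
# and each user i < K in its pair slot plus the K-2 full slots.
assert all(sum(u in s for s in slots) == K - 1 for u in range(1, K + 1))
```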
VI. DISCUSSIONS

In this paper, we considered two forms of partial information
at the nodes. The first is when each transmitter knows the link
gain to only its own receiver. This case results in the optimality
of scheduling non-interfering users. In a multiple access
channel, or the up-link problem, this is equivalent to scheduling
one user at a time and achieves a partial information capacity
of 1/K. For the S-chain, by turning on the even or odd users in
alternating time-slots, we assign non-interfering users to
transmit in each time-slot, thus achieving a partial information
capacity of 1/2. In the one-to-many and many-to-one interference
channels as well, scheduling non-interfering users is optimal.
Thus, we provide the first information-theoretic formalism that
proves the optimality of scheduling in these networks.
The second form of partial information considered in the
paper is when each transmitter knows the link gains of all
links that are at most two hops distant from it. In this case,
we had found in [8] that the connectivities for which there
exists a universally optimal strategy are the ones that have
all their connected components in the one-to-many configuration
or the fully-connected configuration. In this paper, we saw
that scheduling, with each time-slot breaking the total
connectivity into a connectivity for which a universally optimal
strategy exists, gives the optimal partial information capacity.
For the S-channel chain, we divided the achievable strategy
into three time-slots; in each time-slot, every connected
subcomponent of the effective connectivity consists of at most
two users, and a universally optimal strategy exists when all
connected components consist of at most two users. For the
many-to-one interference channel, 2K − 3 time-slots were used;
in each time-slot, each connected component contains at most
two users.
This paper suggests a new scheduling strategy with two hops
of information at each transmitter. Suppose d time-slots are used,
and in each time-slot the connectivity is effectively reduced
to one in which all connected components are in the one-to-many
configuration or the fully-connected configuration. If each user
is turned on in at least t time-slots, then a partial information
rate of t/d can be achieved. Note that interference-avoidance
scheduling uses, in each time-slot, a connectivity of single-user
connected components and is hence a special case. Thus, this
better scheduling strategy can be used when two hops of
information are available at each transmitter, and it has been
proved optimal in this paper for many channel configurations.
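The general recipe described above (d slots, each user on in at least t of them, each slot's effective topology admitting a universally optimal strategy) can be sketched with a small helper of our own naming:

```python
def partial_information_rate(slot_active_sets):
    """Return (t, d) for a schedule given as a list of active-user sets:
    d slots, with every user active in at least t of them. Under the
    assumption that each slot's effective topology admits a universally
    optimal strategy, a partial information rate of t/d is achievable."""
    users = set().union(*slot_active_sets)
    d = len(slot_active_sets)
    t = min(sum(u in s for s in slot_active_sets) for u in users)
    return t, d

# S-chain three-slot schedule for K = 4: silence users congruent to i mod 3.
slots = [[u for u in range(1, 5) if u % 3 != i % 3] for i in (1, 2, 3)]
t, d = partial_information_rate(slots)
print(f"achievable partial information rate: {t}/{d}")   # -> 2/3
```

Interference-avoidance scheduling corresponds to the special case where every slot's active set induces only single-user components.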
VII. CONCLUSIONS
We considered a general deterministic interference channel
in which the nodes have only partial information about the
channel gains. We first considered the case when the nodes
know only the channel gain from themselves to the intended
receiver. In this case, we find that scheduling non-interfering
users achieves the partial information capacity. We then
considered the case when each transmitter knows the channel
gains of all links that are at most two hops distant from it.
With this limited information, this paper proves, for the
S-channel chain and many-to-one interference channels, the
optimality of scheduling subgraphs whose connected components
are all either one-to-many connected or fully connected.
The results in this paper can be easily extended to Gaussian
interference channels by defining the concept of approximate
partial information capacity, along the lines of the approximate
universally optimal strategies defined in [2]. This extension to
the Gaussian case can be seen in [9].
The problem of optimal strategy with d-hop information at
each transmitter in general interference channels is a problem
of great importance, and is still open.
REFERENCES
[1] V. Aggarwal, Y. Liu, and A. Sabharwal, "Message passing in distributed wireless networks," in Proc. IEEE International Symposium on Information Theory, Seoul, Korea, Jun.-Jul. 2009.
[2] V. Aggarwal, Y. Liu, and A. Sabharwal, "Sum-capacity of interference channels with a local view: Impact of distributed decisions," submitted to IEEE Transactions on Information Theory, Oct. 2009, available at arXiv:0910.3494v1.
[3] A. S. Avestimehr, S. N. Diggavi, and D. N. C. Tse, "A deterministic model for wireless relay networks and its capacity," in Proc. IEEE Information Theory Workshop, Bergen, Norway, July 2007.
[4] A. S. Avestimehr, S. N. Diggavi, and D. N. C. Tse, "Wireless network information flow: a deterministic approach," submitted to IEEE Transactions on Information Theory, Aug. 2009, available at arXiv:0906.5394v2.
[5] R. H. Etkin, D. N. C. Tse, and H. Wang, "Gaussian interference channel capacity to within one bit," IEEE Transactions on Information Theory, vol. 54, no. 12, pp. 5534-5562, Dec. 2008.
[6] G. Bresler and D. Tse, "The two-user Gaussian interference channel: A deterministic view," Euro. Trans. Telecomm., vol. 19, no. 4, pp. 333-354, June 2008.
[7] G. Bresler, A. Parekh, and D. Tse, "The approximate capacity of the many-to-one and one-to-many Gaussian interference channels," arXiv:0809.3554v1, 2008.
[8] V. Aggarwal, S. Avestimehr, and A. Sabharwal, "Distributed universally optimal strategies for interference channels with partial message passing," in Proc. Allerton Conference on Communication, Control, and Computing, Monticello, IL, Sept.-Oct. 2009.
[9] V. Aggarwal, S. Avestimehr, and A. Sabharwal, "Information dynamics in interference channels," in preparation for IEEE Trans. Inf. Theory Special Issue on Interference Channels, Mar. 2010.