Cooperation on the Multiple-Access Channel

[Figure: Tx1 and Tx2 encode M1 and M2 into X1,t and X2,t; the channel output Yt is decoded by Rx into (M̂1, M̂2).]

Michèle Wigger
Signal and Information Processing Laboratory (Institut für Signal- und Informationsverarbeitung), ETH Zurich, [email protected]
Chalmers University of Technology, 21 October 2008

Based on collaborations with Amos Lapidoth and Shraga I. Bross
Multiple-Access Channel (MAC): Many-to-One

[Figure: Tx1 maps M1 to X1,t and Tx2 maps M2 to X2,t; Rx decodes (M̂1, M̂2) from the channel output Yt.]

No transmitter cooperation (classical MAC):
- Each transmitter knows only its own message
- Joint code design
- Time-sharing
Multiple-Access Channel (MAC): Many-to-One
M1 , M2
X1,t
Yt
Tx1
(M̂1 , M̂2 )
Channel
X2,t
M1 , M2
Tx2
Rx
Full transmitter cooperation
I
Both transmitters know both messages
I
Joint encodings of messages
2 /23
Multiple-Access Channel (MAC): Many-to-One

[Figure: conferencing bit-pipes of capacities C12 (Tx1 → Tx2) and C21 (Tx2 → Tx1) connect the two transmitters before transmission.]

In this talk: partial transmitter cooperation

1. MAC with conferencing encoders (e.g. closely located transmitters)
- Before transmission, the transmitters communicate over rate-limited bit-pipes
- Each transmitter can communicate parts of its message to the other transmitter
- ⇒ The transmitters can cooperate based on the pipe outputs
Multiple-Access Channel (MAC): Many-to-One

[Figure: both transmitters observe a (noisy) version of the channel output Yt.]

In this talk: partial transmitter cooperation

2. MAC with imperfect feedback (e.g. uplink/downlink scenarios)
- The transmitters observe (noisy) feedback of the channel outputs
- Cooperation? Past channel outputs carry information about both messages
- ⇒ The transmitters can cooperate in future transmissions
Capacity Region for Two-User MAC

- M1 ∼ U{1, …, ⌊2^{nR1}⌋}, M2 ∼ U{1, …, ⌊2^{nR2}⌋}
- n: block-length of transmission
- R1, R2: rates of transmission
- Capacity region C: closure of the set of pairs (R1, R2) for which there exist block-length-n schemes such that p(error) → 0 as n → ∞

[Figure: the capacity region C in the (R1, R2)-plane.]
Additive White Gaussian Noise (AWGN) MAC
M1 Transmitter 1
X1,t
Yt
+
Receiver
(M̂1 , M̂2 )
X2,t
M2 Transmitter 2
I
Yt = X1,t + X2,t + Zt ,
I
{Zt } ∼ IID N (0, N )
I
Power constraints:
Zt
t ∈ {1, . . . , n}
1
n
Pn
t=1
£ 2 ¤
E Xν,t
≤ Pν ,
ν ∈ {1, 2}
4 /23
Additive White Gaussian Noise (AWGN) MAC

[Figure: same channel; region plot shows C_NoCoop in the (R1, R2)-plane.]

No transmitter cooperation (classical MAC):

C_{NoCoop} = \left\{ (R_1, R_2) :
  \begin{array}{l}
    R_1 \le \frac{1}{2}\log\left(1 + \frac{P_1}{N}\right) \\
    R_2 \le \frac{1}{2}\log\left(1 + \frac{P_2}{N}\right) \\
    R_1 + R_2 \le \frac{1}{2}\log\left(1 + \frac{P_1 + P_2}{N}\right)
  \end{array} \right\}
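The three bounds above are easy to evaluate. A minimal sketch, with illustrative power and noise values (not taken from the talk), showing that the sum-rate bound dominates the sum of the single-user bounds, so the region is a pentagon:

```python
import math

def c(snr):
    """Gaussian capacity 1/2 * log2(1 + SNR) in bits per channel use."""
    return 0.5 * math.log2(1 + snr)

# Illustrative parameters (assumed values, not from the talk)
P1, P2, N = 10.0, 10.0, 1.0

r1_max = c(P1 / N)           # single-user bound on R1
r2_max = c(P2 / N)           # single-user bound on R2
sum_max = c((P1 + P2) / N)   # sum-rate bound on R1 + R2

# The sum-rate bound is strictly below r1_max + r2_max,
# which is what gives the region its pentagon shape.
assert sum_max < r1_max + r2_max
print(r1_max, r2_max, sum_max)
```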
Additive White Gaussian Noise (AWGN) MAC
M1 , M2
Transmitter 1
Yt
X1,t
+
Receiver
(M̂1 , M̂2 )
X2,t
M1 , M2
Transmitter 2
Zt
R2
Full transmitter cooperation
(
CFullCoop =
CFullCoop
(R1 , R2 ) :
√
µ
¶)
P1 + P2 + 2 P1 P2
1
R1 + R2 ≤ log 1 +
2
N
CNoCoop
R1
5 /23
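The gain of full cooperation comes from coherent combining: the amplitudes add before squaring, which contributes the extra 2√(P1 P2) term. A small numeric comparison with the same illustrative parameters as before:

```python
import math

def c(snr):
    return 0.5 * math.log2(1 + snr)

# Illustrative parameters (assumed values, not from the talk)
P1, P2, N = 10.0, 10.0, 1.0

sum_no_coop = c((P1 + P2) / N)
sum_full_coop = c((P1 + P2 + 2 * math.sqrt(P1 * P2)) / N)

# Coherent combining adds the 2*sqrt(P1*P2) beamforming term,
# so full cooperation strictly increases the sum rate.
assert sum_full_coop > sum_no_coop
print(sum_no_coop, sum_full_coop)
```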
What about Partial Transmitter Cooperation?

Goal: capacity of settings with partial transmitter cooperation!

- C_NoCoop ⊆ C_PartialCoop ⊆ C_FullCoop
- Are the inclusions strict?
  - Does partial cooperation help at all?
  - Is partial cooperation ≈ full cooperation?
- ⇒ Important design issues

[Figure: C_PartialCoop nested between C_NoCoop and C_FullCoop.]

We consider two scenarios:
- AWGN MAC with Conferencing Encoders
- AWGN MAC with Feedback
Part 1:
AWGN MAC with Conferencing Encoders
AWGN MAC with Conferencing Encoders

[Figure: Tx1 and Tx2 are connected by bidirectional bit-pipes carrying V1,k and V2,k before they transmit X1,t and X2,t over the AWGN channel to the receiver.]

Phase 1: Conference (Willems '83):
- κ sequential uses of the perfect bit-pipes
- V_{1,k} = f_{1,k}^{(\kappa)}(M_1, V_2^{k-1}) and V_{2,k} = f_{2,k}^{(\kappa)}(M_2, V_1^{k-1})
- Rate limitations:

  \sum_{k=1}^{\kappa} \log |\mathcal{V}_{1,k}| \le n C_{12} \quad\text{and}\quad \sum_{k=1}^{\kappa} \log |\mathcal{V}_{2,k}| \le n C_{21}
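The rate limitation simply caps the total number of bits each transmitter may push over its pipe during the whole block. A minimal sketch with a hypothetical conference schedule (the block length, pipe rate, and alphabet sizes are all assumed for illustration):

```python
import math

# Hypothetical conference schedule: in each of kappa = 4 rounds, Tx1 sends
# one symbol from an alphabet of the given size over its bit-pipe.
n = 1000                        # block length (illustrative)
C12 = 0.05                      # pipe rate Tx1 -> Tx2, bits per channel use
alphabet_sizes = [4, 4, 2, 2]   # |V_{1,k}| for k = 1..kappa

bits_sent = sum(math.log2(size) for size in alphabet_sizes)
print(bits_sent)                # total conference bits sent by Tx1
assert bits_sent <= n * C12     # the rate limitation of the conference
```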
AWGN MAC with Conferencing Encoders

[Figure: same setting.]

Phase 2: Transmission over the channel:

  X_{1,t} = \varphi_{1,t}^{(n)}(M_1, V_2^{\kappa}), \qquad X_{2,t} = \varphi_{2,t}^{(n)}(M_2, V_1^{\kappa})
Main Result for Conferencing Encoders

Theorem 1: Capacity region of the AWGN MAC with Conferencing Encoders

C_{Conf} = \bigcup_{\rho_1, \rho_2 \in [0,1]} \left\{ (R_1, R_2) :
  \begin{array}{l}
    R_1 \le \frac{1}{2}\log\left(1 + \frac{P_1(1-\rho_1^2)}{N}\right) + C_{12} \\
    R_2 \le \frac{1}{2}\log\left(1 + \frac{P_2(1-\rho_2^2)}{N}\right) + C_{21} \\
    R_1 + R_2 \le \frac{1}{2}\log\left(1 + \frac{P_1(1-\rho_1^2) + P_2(1-\rho_2^2)}{N}\right) + C_{12} + C_{21} \\
    R_1 + R_2 \le \frac{1}{2}\log\left(1 + \frac{P_1 + P_2 + 2\rho_1\rho_2\sqrt{P_1 P_2}}{N}\right)
  \end{array} \right\}
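The sum rate of C_Conf for given pipe rates can be found numerically by maximizing the minimum of the two sum-rate bounds over the correlation pair (ρ1, ρ2). A sketch with illustrative parameters (assumed values, not from the talk), confirming that even small conferencing rates strictly increase the sum rate:

```python
import math

def c(snr):
    return 0.5 * math.log2(1 + snr)

# Illustrative parameters (assumed values, not from the talk)
P1, P2, N = 10.0, 10.0, 1.0
C12, C21 = 0.1, 0.1

# Grid search over (rho1, rho2) in [0, 1]^2; the achievable sum rate
# for each pair is the minimum of the two sum-rate bounds of Theorem 1.
best = 0.0
steps = 200
for i in range(steps + 1):
    for j in range(steps + 1):
        r1, r2 = i / steps, j / steps
        b3 = c((P1 * (1 - r1**2) + P2 * (1 - r2**2)) / N) + C12 + C21
        b4 = c((P1 + P2 + 2 * r1 * r2 * math.sqrt(P1 * P2)) / N)
        best = max(best, min(b3, b4))

no_coop = c((P1 + P2) / N)
assert best > no_coop   # conferencing strictly increases the sum rate
print(best, no_coop)
```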
Main Result for Conferencing Encoders

Theorem 1 (continued):

C_Conf = C_NoCoop if and only if C12 = C21 = 0.

[Figure: for C12 = C21 = 0, C_Conf coincides with C_NoCoop, strictly inside C_FullCoop.]
Main Result for Conferencing Encoders

Theorem 1 (continued):

If C12 > 0 or C21 > 0, then C_NoCoop ⊂ C_Conf strictly, even if C12 and C21 are nonzero but "small".

[Figure: C_Conf strictly contains C_NoCoop; its boundary extends beyond C_NoCoop by C12 and C21 along the two rate axes.]
Main Result for Conferencing Encoders

Theorem 1 (continued):

C_Conf = C_FullCoop if and only if

  C_{12}, C_{21} \ge \frac{1}{2}\log\left(1 + \frac{P_1 + P_2 + 2\sqrt{P_1 P_2}}{N}\right)

[Figure: for such large conferencing capacities, C_Conf coincides with C_FullCoop.]
Achievability (Inner Bound)

- The transmitters split their messages: M1 = (M1,c, M1,p) and M2 = (M2,c, M2,p)
- Conference: the transmitters exchange M1,c and M2,c over the bit-pipes
- Rate of M1,c < C12 and rate of M2,c < C21
- The transmitters use Gaussian codebooks and add up their codewords for transmission over the AWGN MAC
- Successive decoding at the receiver

No superposition encoding and no joint decoding necessary!
⇒ simpler than Willems's scheme
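The message split above is pure rate accounting: each rate decomposes into a common part that fits through the pipe and a private part kept locally. A minimal sketch with hypothetical rates (all values assumed for illustration):

```python
# Hypothetical rate split for the achievability scheme: each message is
# divided into a "common" part (exchanged during the conference) and a
# "private" part (kept at its own transmitter).
C12, C21 = 0.10, 0.05            # bit-pipe rates (illustrative)
R1, R2 = 1.2, 0.9                # target rates (illustrative)

R1c, R2c = 0.09, 0.04            # common parts; must satisfy R1c < C12, R2c < C21
R1p, R2p = R1 - R1c, R2 - R2c    # private parts carry the remainder

assert R1c < C12 and R2c < C21   # the conference can carry the common parts
assert abs((R1c + R1p) - R1) < 1e-12 and abs((R2c + R2p) - R2) < 1e-12
print(R1p, R2p)
```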
Converse (Outer Bound)
I
Converse as in Willems’83, but accounting for power constraints:
CConf ⊆
where
RX1 ,U,X2
[
X1 (−
−U (−
−X2
E[X12 ]≤P1 , E[X22 ]≤P2

R1



R2
, (R1 , R2 ) :
R
+ R2

1


R1 + R2
≤
≤
≤
≤
RX1 ,U,X2

I(X1 ; Y |X2 U ) +C12



I(X2 ; Y |X1 U ) +C21
I(X1 X2 ; Y |U ) +C12 +C21 


I(X1 X2 ; Y )
Propose technique to prove:
Suffices to take union over Gaussian Markov Triples X1G (−
−U G (−
−X2G
Traditional Max-Entropy techniques fail because of Markov condition!
11 /23
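For the Gaussian Markov triples, the outer-bound mutual informations admit closed forms that match the bounds of Theorem 1. A sketch of that evaluation, writing X1 = √P1(ρ1·U + √(1−ρ1²)·A) and X2 = √P2(ρ2·U + √(1−ρ2²)·B) for independent standard Gaussians U, A, B (the specific ρ and power values below are illustrative):

```python
import math

def c(snr):
    return 0.5 * math.log2(1 + snr)

# Illustrative parameters (assumed values, not from the talk)
P1, P2, N = 10.0, 4.0, 1.0
rho1, rho2 = 0.6, 0.3

# Conditioned on (X2, U), only the A-component of X1 remains random,
# with variance P1*(1 - rho1^2); similarly for the other bounds.  The
# unconditioned input sum X1 + X2 has variance
# P1 + P2 + 2*rho1*rho2*sqrt(P1*P2).
i1 = c(P1 * (1 - rho1**2) / N)                          # I(X1; Y | X2, U)
i2 = c(P2 * (1 - rho2**2) / N)                          # I(X2; Y | X1, U)
i12_cond = c((P1 * (1 - rho1**2) + P2 * (1 - rho2**2)) / N)   # I(X1,X2; Y | U)
i12 = c((P1 + P2 + 2 * rho1 * rho2 * math.sqrt(P1 * P2)) / N)  # I(X1,X2; Y)
print(i1, i2, i12_cond, i12)
```

Adding C12 and C21 to the appropriate bounds reproduces exactly the four constraints of Theorem 1.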
Technique also Applies to Cover-Leung Region
M1
M2
-
-
X1,t
Transmitter 1
@
X2,t
Transmitter 2
¡
Zt
R?
@
i
µ
¡
Yt
-
Receiver
(M̂1 , M̂2 )
-
6
Achievable region for AWGN MAC with perfect partial or perfect feedback
RCL ,
[
X1 (−
−U (−
−X2
E[X12 ]≤P1 ,
E[X22 ]≤P2


(R1 , R2 ) :


R1
≤ I(X1 ; Y |X2 U )
R2
≤ I(X2 ; Y |X1 U )

R1 + R2 ≤ I(X1 X2 ; Y )
Suffices to take union over Gaussian Markov triples X1G (−
−U G (−
−X2G !
12 /23
Technique also Applies to Slepian-Wolf Region
M1
-
X1,t
Transmitter 1
@
M0
M2
-
X2,t
Transmitter 2
¡
Zt
R?
@
i
Yt
-
Receiver
(M̂0 , M̂1 , M̂2 )
-
µ
¡
Capacity region for AWGN MAC with common message
RSW

R1



[
R2
(R , R ) :
,
R1 + R2
 1 2
X1 (−
−U (−
−X2 

2
R
+ R1 + R2
0
E[X1 ]≤P1 ,
E[X22 ]≤P2
≤
≤
≤
≤

I(X1 ; Y |X2 U )


I(X2 ; Y |X1 U )
I(X1 X2 ; Y |U )


I(X1 X2 ; Y )
Suffices to take union over Gaussian Markov triples X1G (−
−U G (−
−X2G !
13 /23
Dirty-Paper MAC with Conferencing Encoders

[Figure: an interference sequence St is added to the channel output together with the noise Zt; the two transmitters also conference over the bit-pipes.]

- {S_t} ∼ IID N(0, Q)
- The transmitters know the interference S^n ≜ (S_1, …, S_n) non-causally
- Inputs X_ν^n ≜ (X_{ν,1}, …, X_{ν,n}) at Transmitter ν:

  X_1^n = \varphi_1^{(n)}(M_1, V_2^{\kappa}, S^n), \qquad X_2^n = \varphi_2^{(n)}(M_2, V_1^{\kappa}, S^n)
Dirty-Paper MAC with Conferencing Encoders
M1
-
?
Transmitter 1
-
@
6
V1,k
M2
St
X1,t
V2,k
?
X2,t
Transmitter 2
¡
Zt
?
R?
@
i- i
µ
¡
-
Receiver
(M̂1 , M̂2 )
-
6
2 Settings:
I
Transmitters learn S n before the conference
I
I
(κ)
V1,k = f1,k (M1 , V2k−1 , S n )
and
(κ)
V2,k = f1,k (M2 , V1k−1 , S n )
Transmitters learn S n after the conference
14 /23
Interference Non-Causally Known at the Transmitters Can Be Perfectly Canceled

[Figure: same dirty-paper setting.]

Theorem 2
For the two-user Gaussian MAC with conferencing encoders:

  C_Int,before = C_Int,after = C_Conf,  ∀ Q ≥ 0,

if the interference is known non-causally at both encoders.
Part 2:
AWGN MAC with Feedback
AWGN MAC with Perfect Feedback

[Figure: the output Yt is fed back perfectly and causally to both transmitters.]

The transmitters observe perfect causal output feedback:

  X_{\nu,t} = \varphi_{\nu,t}^{(n)}(M_\nu, Y_1, \ldots, Y_{t-1}), \quad \nu \in \{1, 2\}
AWGN MAC with Perfect Feedback

[Figure: same setting; region plot shows C_NoCoop ⊂ C_PerfectFB ⊂ C_FullCoop.]

Ozarow '84:

C_{PerfectFB} = \bigcup_{\rho \in [0,1]} \left\{ (R_1, R_2) :
  \begin{array}{l}
    R_1 \le \frac{1}{2}\log\left(1 + \frac{P_1(1-\rho^2)}{N}\right) \\
    R_2 \le \frac{1}{2}\log\left(1 + \frac{P_2(1-\rho^2)}{N}\right) \\
    R_1 + R_2 \le \frac{1}{2}\log\left(1 + \frac{P_1 + P_2 + 2\sqrt{P_1 P_2}\,\rho}{N}\right)
  \end{array} \right\}
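Ozarow's region can be evaluated by sweeping the correlation ρ and, for each ρ, taking the minimum of the sum of the two individual bounds and the sum-rate bound. A sketch with illustrative parameters (assumed values, not from the talk), showing that perfect feedback strictly enlarges the sum rate over the no-feedback region:

```python
import math

def c(snr):
    return 0.5 * math.log2(1 + snr)

# Illustrative parameters (assumed values, not from the talk)
P1, P2, N = 10.0, 10.0, 1.0

# For each rho, the achievable sum rate is the minimum of the sum of the
# two individual bounds and the sum-rate bound; maximize over rho in [0, 1].
best = 0.0
for i in range(1001):
    rho = i / 1000
    b1 = c(P1 * (1 - rho**2) / N)
    b2 = c(P2 * (1 - rho**2) / N)
    b_sum = c((P1 + P2 + 2 * math.sqrt(P1 * P2) * rho) / N)
    best = max(best, min(b1 + b2, b_sum))

no_fb = c((P1 + P2) / N)
assert best > no_fb   # perfect feedback strictly increases the sum rate
print(best, no_fb)
```

The optimum occurs where the two expressions inside the minimum intersect: increasing ρ raises the sum-rate bound but lowers the individual bounds.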
AWGN MAC with Noisy Feedback
V1,t
M1
W1,t
+
Transmitter 1
X1,t
+
Yt
Receiver
(M̂1 , M̂2 )
X2,t
M2
Transmitter 2
+
Zt
V2,t
W2,t
I
Noisy feedback:
Vν,t = Yt + Wν,t ,
I
{(W1,t , W2,t )} ∼ IID N (0, KW1 W2 ) ,
ν ∈ {1, 2}
Transmitters observe noisy feedback:
(n)
Xν,t = ϕν,t (Mν , Vν,1 , . . . , Vν,t−1 ),
ν ∈ {1, 2}.
18 /23
AWGN MAC with Noisy Feedback
V1,t
M1
W1,t
+
Transmitter 1
X1,t
+
Yt
Receiver
(M̂1 , M̂2 )
X2,t
M2
Transmitter 2
+
Zt
V2,t
W2,t
?
I
CNoisyFB =
I
CNoCoop ⊆ CNoisyFB ⊆ CPerfectFB
I
Are inclusions strict?
I
Previous schemes collapse to CNoCoop
if fb-noise variances too large
R2
CFullCoop
CPerfectFB
CNoCoop
CNoisyFB
R1
18 /23
Noisy or Perfect Partial Feedback to Tx 2
M1
Transmitter 1
X1,t
Yt
+
Receiver
(M̂1 , M̂2 )
X2,t
M2
Transmitter 2
+
Zt
V2,t
W2,t
I
Noisy partial feedback:
V2,t = Yt + W2,t ,
I
Transmitter 1 has no feedback:
I
Transmitter 2 observes noisy feedback:
(n)
¢
¡
{W2,t } ∼ IID N 0, σ22
X1,t = ϕ1,t (M1 )
(n)
X2,t = ϕ2,t (M2 , V2,t , . . . , V2,t−1 ),
I
σ22 = 0: perfect partial feedback
19 /23
Noisy or Perfect Partial Feedback to Tx 2
M1
Transmitter 1
X1,t
Yt
+
Receiver
(M̂1 , M̂2 )
X2,t
M2
Transmitter 2
+
Zt
V2,t
W2,t
R2
?
I
CNoisyPartialFB =
I
CNoCoop ⊆ CNoisyPartialFB ⊆ CPerfectFB
I
Are inclusions strict?
CFullCoop
CPerfectFB
CNoCoop
CNoisyPartialFB
R1
19 /23
Main Results for Noisy Feedback

Theorem 3a: Noisy feedback is always beneficial!
- Noisy feedback always increases the capacity region
- For fixed P1, P2, N > 0:

  C_NoCoop ⊂ C_NoisyFB,  ∀ K_{W_1 W_2} ⪰ 0.   (Inclusion is strict!)

Theorem 4: Almost-perfect feedback ≈ perfect feedback!
- The capacity region with noisy feedback converges to the perfect-feedback capacity region
- For fixed P1, P2, N > 0:

  \bigcap_{\sigma^2 \ge 0} \mathrm{cl}\left( \bigcup_{K : \operatorname{tr}(K) \le \sigma^2} C_{NoisyFB}(P_1, P_2, N, K) \right) = C_{PerfectFB}(P_1, P_2, N)
Main Results for Partial Feedback

Theorem 3b: Noisy partial feedback is always beneficial!
- Noisy partial feedback always increases the capacity region
- For fixed P1, P2, N > 0:

  C_NoCoop ⊂ C_NoisyPartialFB,  ∀ σ_2^2 ≥ 0.   (Inclusion is strict!)

Theorem 5: Perfect partial-feedback capacity ≠ Cover-Leung region! (Answer to van der Meulen)
- The perfect partial-feedback capacity region strictly contains the Cover-Leung region
Robust Noisy-Feedback Scheme: Concatenated Structure
W1,t
V1,t
+
M1
Outer Enc.1
Inner Enc.1
X1,t
Yt
+
X2,t
M2
Outer Enc.2
Inner Enc.2
Inner Dec.
Outer Dec.
(M̂1 , M̂2 )
Zt
+
V2,t
W2,t
Inner Channel, η channel uses of original channel
I
Inner Encoders/Decoder: (Exploit Feedback)
I
I
I
I
Use feedback
Use original channel η times per fed symbol
Generalize Ozarow’s perfect-feedback scheme; linear encodings/decodings
Outer Encoders/Decoder: (Robustify Inner Scheme)
I
I
I
Ignore feedback
Use inner channel once every η channel uses of original channel
Code to achieve capacity of inner channel
22 /23
Summary

- AWGN MAC with conferencing encoders
  - Determined the capacity region
  - Conferencing always increases capacity
  - New technique for proving optimality of Gaussians under a Markovity constraint
  - Interference non-causally known at both transmitters can be perfectly canceled
- AWGN MAC with imperfect feedback
  - Robust noisy-feedback scheme
  - Feedback always increases the capacity region, even if very noisy or only partial
  - Almost-perfect feedback ≈ perfect feedback
  - Cover-Leung region ≠ perfect partial-feedback capacity (answer to van der Meulen)