Department of Electrical Engineering
École Polytechnique de Montréal
David Haccoun, Eng., Ph.D.
Professor of Electrical Engineering
Life Fellow of IEEE
Fellow, Engineering Institute of Canada
CUHK, May 2010
Engineering training in Canada
Canada has 36 engineering schools/faculties.
[Map of Canada showing the distribution of the schools across the country, including Vancouver, Toronto and Montréal]
Undergraduate students — Canada: 55,000; Québec: 14,600
École Polytechnique, cradle of engineering in Québec
The oldest engineering school in Canada.
The third-largest in Canada for teaching and research.
The first in Québec for student body size.
Operating budget: $85 million Canadian dollars (C$).
Annual research budget: $60.5 million C$.
Annual grants and research contracts: $38 million C$.
15 Industrial Research Chairs.
24 Canada Research Chairs.
7,863 scientific publications over the last decade.
220 professors and 1,100 employees.
1,000 graduates per year, and 30,000 since 1873.
11 engineering programs
Biomedical
Chemical
Civil
Computer
Electrical
Engineering Physics
Geological
Industrial
Mechanical
Mining
Software
Our campus
[Aerial photograph of the Polytechnique campus]
Novel Iterative Decoding Using Convolutional Doubly-Orthogonal Codes
A simple approach to capacity
David Haccoun
Éric Roy, Christian Cardinal
Modern Error Control Coding Techniques Based on Difference Families
A new class of threshold-decodable codes leading to simple and efficient error control schemes:
No interleaver, at either encoding or decoding.
Far less complex to implement than turbo coding schemes; attractive alternatives to turbo coding at moderate Eb/N0 values.
High-rate codes readily obtained by the puncturing technique.
Low-complexity, high-speed FPGA-based prototypes at bit rates > 100 Mbps.
Extensions: recursive codes (approaching capacity); punctured codes (rate-adaptive schemes); simplified codes (reduced latency and reduced complexity).
MOTIVATION
Two problems with usual turbo coding:
Decoding complexity (MAP decoder)
Latency due to interleavers
An alternative using CSO2C:
Low decoding complexity (iterative threshold decoder)
One convolutional encoder
No interleaver
Further improvements while maintaining good error performance:
Half the latency with the iterative BP decoder
Smaller latency with Simplified Self-Doubly-Orthogonal Codes (S-CSO2C)
One-Dimensional NCDO Codes
Nonrecursive systematic convolutional (NSC) encoder (R = 1/2):
[Encoder diagram: the information sequence {u_t} feeds a shift register of length m; taps at the connection positions are summed to form the parity sequence {p_t}; both streams pass through the AWGN channel and are received as {y_t^u} and {y_t^p}]
A = {α_1, α_2, ..., α_J}, with α_1 = 0, α_J = m, and α_j ∈ {0, 1, ..., m}
A – set of connection positions
J – number of connection positions
m – memory length
α_J – coding span
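The NSC encoder above can be sketched in a few lines of Python. This is an illustrative sketch (the function name and example sequences are ours, with a zero initial encoder state assumed), not code from the talk:

```python
# R = 1/2 nonrecursive systematic convolutional (NSC) encoder: the parity
# stream {p_t} is the mod-2 sum of the information bits at the connection
# positions in A (zero initial encoder state assumed).
def encode(u, A):
    return [sum(u[t - a] for a in A if t - a >= 0) % 2 for t in range(len(u))]

u = [1, 0, 1, 1, 0, 0, 1, 0]
p = encode(u, [0, 1, 3])   # the J = 3 example set used later in the talk
```

The systematic output is the pair ({u_t}, {p_t}), giving rate 1/2.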
Example of Convolutional Self-Orthogonal Code
CSOC, R = 1/2, J = 4, m = 15, A = {0, 3, 13, 15}
[Encoder diagram: {u_t} enters a 15-stage shift register; taps at positions 0, 3, 13 and 15 are summed mod 2 to form {p_t}]
Simple orthogonality property (CSOC): the differences (α_j − α_k), j ≠ k, are distinct.
Example of CSOC, J = 4, A = {0, 3, 13, 15}
Distinct simple differences (α_j − α_k), j ≠ k:

α_j \ α_k |   0     3    13    15
    0     |   –    -3   -13   -15
    3     |   3     –   -10   -12
   13     |  13    10     –    -2
   15     |  15    12     2     –

All the simple differences are distinct.
CSOC codes are suitable for threshold decoding.
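The distinct-differences property is easy to check mechanically. A minimal sketch (function names are ours):

```python
def simple_differences(A):
    """All signed differences a_j - a_k over ordered pairs, j != k."""
    return [aj - ak for aj in A for ak in A if aj != ak]

def is_csoc(A):
    """True when all simple differences are distinct (the CSOC property)."""
    d = simple_differences(A)
    return len(d) == len(set(d))
```

For the example set, `is_csoc([0, 3, 13, 15])` holds, whereas {0, 1, 2, 3} fails because 1 − 0 = 2 − 1.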
Threshold (TH) Decoding of CSOC
CSOCs are systematic and non-recursive, and their threshold decoding is non-iterative.
A well-known symbol decoding technique that exploits the simple-orthogonality properties of CSOC.
Either hard-decision or soft-input soft-output (SISO) decoding.
Very simple implementation of the majority-logic procedure.
Example of One-Step Threshold Decoder
J = 3, A = {0, 1, 3} (α_1 = 0, α_2 = 1, α_3 = 3), d_min = 4
[Decoder diagram: the soft inputs w_t^u and w_t^p, in LLR form, feed delay lines tapped at the connection positions; for each of the J parity checks, the ⊞ operator combines the parity LLR with the LLRs of the other information symbols of the check; the J extrinsic estimates are summed (Σ) with the channel LLR and the result is compared to the threshold 0 to produce the decoded bits û_t]
⊞ = tanh/tanh⁻¹ (sum-product) or add-min (min-sum) operator
w_t^u and w_t^p are the LLR values representing the received symbols y_t^u and y_t^p.
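The one-step rule with the add-min operator can be simulated end to end. The framing below is ours (noiseless ±1 LLRs, one flipped channel value, interior-only decoding), a sketch rather than the talk's implementation:

```python
import random

A = [0, 1, 3]          # connection positions, J = 3, d_min = J + 1 = 4
m = max(A)             # memory length / span

def encode(u):
    """Systematic R = 1/2 NSC encoder (zero initial state assumed)."""
    return [sum(u[t - a] for a in A if t - a >= 0) % 2 for t in range(len(u))]

def boxplus(vals):
    """Add-min (min-sum) approximation of the tanh/tanh^-1 combination."""
    sign = 1
    for v in vals:
        sign = -sign if v < 0 else sign
    return sign * min(abs(v) for v in vals)

def decode(yu, yp):
    """One-step threshold decoding of the interior information symbols."""
    out = {}
    for i in range(m, len(yu) - m):
        lam = yu[i]                   # channel LLR of u_i
        for j, aj in enumerate(A):
            t = i + aj                # time of a parity check containing u_i
            lam += boxplus([yp[t]] +
                           [yu[t - ak] for k, ak in enumerate(A) if k != j])
        out[i] = 0 if lam > 0 else 1  # positive LLR -> bit 0
    return out

random.seed(1)
u = [random.randint(0, 1) for _ in range(60)]
yu = [1.0 if b == 0 else -1.0 for b in u]          # noiseless BPSK LLRs
yp = [1.0 if b == 0 else -1.0 for b in encode(u)]
yu[30] = -yu[30]                                   # inject one channel error
decoded = decode(yu, yp)
```

Since d_min = 4, the single flipped LLR is outvoted by the J = 3 orthogonal extrinsic estimates and every interior decision matches u.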
Novel Iterative Error Control Coding Schemes
Extension to Iterative Threshold Decoding
Convolutional Self-Doubly-Orthogonal Codes: CSO2C
1. All the differences (α_j − α_k) are distinct;
2. The differences of differences (α_j − α_k) − (α_l − α_n), j ≠ k, k ≠ n, n ≠ l, l ≠ j, must be distinct from all the differences (α_r − α_s), r ≠ s;
3. The above differences of differences are distinct, except for the unavoidable repetitions.
The decoder exploits the doubly-orthogonal properties of CSO2C.
Asymptotic error performance (d_min = J + 1) at moderate Eb/N0.
Issue: the search and determination of new CSO2Cs, an extension of the (unsolved) Golomb rulers problem.
Example of CSO2C, J = 4, A = {0, 3, 13, 15}
Differences of differences (α_j − α_k) − (α_l − α_n), listed as (j, k, n, l):

(0,1,0,1) = (-3) - (3)   = -6      (2,0,1,0) = (13) - (-3)  = 16      (3,0,2,0) = (15) - (-13) = 28
(0,2,0,1) = (-13) - (3)  = -16     (2,0,2,0) = (13) - (-13) = 26      (3,0,3,0) = (15) - (-15) = 30
(0,2,0,2) = (-13) - (13) = -26     (2,1,0,1) = (10) - (3)   = 7       (3,1,0,1) = (12) - (3)   = 9
(0,3,0,1) = (-15) - (3)  = -18     (2,1,2,0) = (10) - (-13) = 23      (3,1,2,0) = (12) - (-13) = 25
(0,3,0,2) = (-15) - (13) = -28     (2,1,2,1) = (10) - (-10) = 20      (3,1,2,1) = (12) - (-10) = 22
(0,3,0,3) = (-15) - (15) = -30     (2,3,0,1) = (-2) - (3)   = -5      (3,1,3,0) = (12) - (-15) = 27
(1,0,1,0) = (3) - (-3)   = 6       (2,3,0,3) = (-2) - (15)  = -17     (3,1,3,1) = (12) - (-12) = 24
(1,2,0,2) = (-10) - (13) = -23     (2,3,1,0) = (-2) - (-3)  = 1       (3,2,0,1) = (2) - (3)    = -1
(1,2,1,0) = (-10) - (-3) = -7      (2,3,1,3) = (-2) - (12)  = -14     (3,2,0,2) = (2) - (13)   = -11
(1,2,1,2) = (-10) - (10) = -20     (2,3,2,0) = (-2) - (-13) = 11      (3,2,1,0) = (2) - (-3)   = 5
(1,3,0,2) = (-12) - (13) = -25     (2,3,2,1) = (-2) - (-10) = 8       (3,2,1,2) = (2) - (10)   = -8
(1,3,0,3) = (-12) - (15) = -27     (2,3,2,3) = (-2) - (2)   = -4      (3,2,3,0) = (2) - (-15)  = 17
(1,3,1,0) = (-12) - (-3) = -9      (3,0,1,0) = (15) - (-3)  = 18      (3,2,3,1) = (2) - (-12)  = 14
(1,3,1,2) = (-12) - (10) = -22                                        (3,2,3,2) = (2) - (-2)   = 4
(1,3,1,3) = (-12) - (12) = -24

All the differences of differences are distinct.
These codes are suitable for iterative threshold or belief propagation decoding.
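The three conditions can also be checked mechanically. In the sketch below (naming is ours), the value (α_j − α_k) − (α_l − α_n) is unchanged by swapping j ↔ n and/or k ↔ l, so these unavoidable repetitions are merged into a single orbit before testing distinctness:

```python
def is_cso2c(A):
    """Check the three doubly-orthogonal conditions on a connection set A."""
    J = len(A)
    # Condition 1: all simple differences distinct.
    diffs = {A[j] - A[k] for j in range(J) for k in range(J) if j != k}
    if len(diffs) != J * (J - 1):
        return False
    orbit_value = {}
    for j in range(J):
        for k in range(J):
            for n in range(J):
                for l in range(J):
                    if j != k and k != n and n != l and l != j:
                        v = (A[j] - A[k]) - (A[l] - A[n])
                        if v in diffs:            # condition 2 violated
                            return False
                        # v is invariant under j<->n and k<->l: merge these
                        # unavoidable repetitions into one canonical orbit.
                        key = min((j, k, n, l), (n, k, j, l),
                                  (j, l, n, k), (n, l, j, k))
                        orbit_value[key] = v
    vals = list(orbit_value.values())
    return len(vals) == len(set(vals))            # condition 3
```

For the example above, `is_cso2c([0, 3, 13, 15])` holds; {0, 1, 3} fails condition 2 (e.g. (3 − 1) − (1 − 0) = 1 collides with a simple difference) and {0, 1, 2, 3} already fails condition 1.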
Spans of some best known CSO2C encoders
Issue: minimization of the memory length (span) m of the encoders.
Lower bound on span: m ≥ f(J⁴).

 J   m (span)      J   m (span)      J   m (span)
 5   41            14  13774         23  402923
 6   100           15  16503         24  502505
 7   222           16  34908         25  643676
 8   459           17  50071         26  965950
 9   912           18  71858         27  1117924
10   1698          19  107528        28  1517378
11   3467          20  148787        29  1894067
12   5173          21  209013        30  2437586
13   9252          22  299126
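For the simple-orthogonality analogue, minimizing the span is exactly the classical Golomb ruler problem, and small cases can be brute-forced (the CSO2C spans in the table are far beyond such exhaustive search). A sketch of ours:

```python
from itertools import combinations

def is_csoc(A):
    """All simple differences distinct: A is a Golomb ruler."""
    d = [aj - ak for aj in A for ak in A if aj != ak]
    return len(d) == len(set(d))

def min_csoc_span(J):
    """Smallest span m admitting J connection positions with 0 and m in A."""
    m = J - 1
    while True:
        for inner in combinations(range(1, m), J - 2):
            if is_csoc((0,) + inner + (m,)):
                return m
        m += 1
```

This recovers the optimal Golomb rulers for small J: span 3 for J = 3 (the set {0, 1, 3}), 6 for J = 4 ({0, 1, 4, 6}), and 11 for J = 5.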
Non-Iterative Threshold Decoding for CSOCs
Approximate MAP value:

λ_i = y_i^u + Σ_{j=1}^{J} ( y_{i+α_j}^p ⊞ ⊞_{k=1, k≠j}^{J} y_{i+α_j−α_k}^u )

λ_i : received information symbol + extrinsic information
⊞ : add-min operator
Decision rule: û_i = 1 if and only if λ_i < 0 (positive LLRs corresponding to u_i = 0); otherwise û_i = 0.
For a CSOC, λ_i is an equation of independent variables.
Iterative Threshold Decoding for CSO2Cs
General expression: at iteration ν, the estimate of the information symbol y_i^u is

λ_i^(ν) = y_i^u + Σ_{j=1}^{J} ( y_{i+α_j}^p ⊞ ⊞_{k=1, k≠j}^{J} λ_{i+α_j−α_k} )

where each estimate λ_{i+α_j−α_k} of the other information symbols of a check is itself of the same form: a feedback term (current-iteration values λ^(ν)) for past symbols, and a feedforward term (previous-iteration values λ^(ν−1)) for future symbols, e.g.

λ_{i+α_j−α_k}^(ν−1) = y_{i+α_j−α_k}^u + Σ_{n=1}^{J} ( y_{i+α_j−α_k+α_n}^p ⊞ ⊞_{l=1, l≠n}^{J} λ_{i+α_j−α_k+α_n−α_l}^(ν−2) )

Over 1 iteration, independence of the inputs requires distinct differences; over 2 iterations, it requires distinct differences of differences, themselves distinct from the differences.
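A toy end-to-end pass over the J = 4 example set gives a feel for the iteration. The sketch below is ours and simplifies the feedforward/feedback schedule: each pass simply reuses the previous pass's soft values for all neighbouring symbols.

```python
import random

A = [0, 3, 13, 15]     # the J = 4 CSO2C example set
m = max(A)             # span: decoding delay per iteration

def encode(u):
    """Systematic R = 1/2 encoder (zero initial state assumed)."""
    return [sum(u[t - a] for a in A if t - a >= 0) % 2 for t in range(len(u))]

def boxplus(vals):
    """Add-min (min-sum) operator."""
    sign = 1
    for v in vals:
        sign = -sign if v < 0 else sign
    return sign * min(abs(v) for v in vals)

def iterate(lam, yu, yp):
    """One threshold pass: refresh soft values wherever all inputs exist."""
    out = list(lam)
    for i in range(m, len(yu) - m):
        v = yu[i]                      # channel LLR of the information symbol
        for j, aj in enumerate(A):
            t = i + aj                 # parity check containing u_i
            v += boxplus([yp[t]] +
                         [lam[t - ak] for k, ak in enumerate(A) if k != j])
        out[i] = v
    return out

random.seed(7)
u = [random.randint(0, 1) for _ in range(150)]
yu = [1.0 if b == 0 else -1.0 for b in u]          # noiseless BPSK LLRs
yp = [1.0 if b == 0 else -1.0 for b in encode(u)]
yu[75] = -yu[75]                                   # single channel error
lam = list(yu)
for _ in range(3):                                 # M = 3 iterations
    lam = iterate(lam, yu, yp)
```

The total decoding delay grows as M·m symbols, so bits far enough from the block edges (beyond 3m here) have fully refreshed soft values, and the injected error is corrected.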
Iterative Threshold Decoder Structure for CSO2Cs
Forward-Only Iterative Decoder
[Diagram: the information symbols and, through delay lines of m per stage, the parity-check symbols from the channel feed a cascade of identical threshold decoders, one per iteration ν = 1, 2, ..., M; each stage passes its soft outputs (at delays t − m, t − 2m, ..., t − Mm) to the next stage, and a hard decision at the last iteration yields the decoded information symbols]
Features:
• No interleaver
• One (identical) decoder per iteration
• Forward-only operation
Block Diagram of Iterative Threshold Decoder (CSO2Cs)
One-step TH decoding per iteration.
Iterative TH decoder (M iterations → M one-step decoders).
Each one-step decoder decodes a distinct bit: u_{t−m}, u_{t−2m}, ..., u_{t−Mm}.
[Diagram: the channel LLRs w_t^u and w_t^p feed the cascade DEC 1 → DEC 2 → ... → DEC M; each stage adds a latency of m bits and produces the soft values λ^(1)_{t−m}, λ^(2)_{t−2m}, ..., λ^(M)_{t−Mm}; the final soft value is thresholded at 0 to output û_{t−Mm}]
Total latency: M·m bits.
Iterative Belief Propagation (BP) Decoder of CSO2C
[Diagrams comparing the threshold decoder (a cascade of M(TH) stages DEC 1, ..., DEC M(TH), each with a latency of m bits) and the BP decoder (a cascade of M(BP) stages, each with a latency of m bits); both threshold the final soft value to output û_{t−Mm}]
M(BP) ≈ ½ M(TH)
BP latency ≈ ½ TH latency
One-step BP complexity ≈ J × one-step TH complexity
Error Performance Behaviors of CSO2Cs
J = 9, A = {0, 9, 21, 395, 584, 767, 871, 899, 912}
[Plot: bit error rate (10⁻¹ down to 10⁻⁷) versus Eb/N0 (1.5 to 5 dB) for TH decoding with 8 iterations and BP decoding with 4 and 8 iterations; each curve shows a waterfall region followed by an error floor region, with the BP curves to the left of the TH curves]
Both BP and TH decoding approach the asymptotic error performance in the error floor region.
Analysis Results of CSO2Cs
Effects of code structure on error performance:
With iterative decoding, the error performance depends essentially on the number of connections J, rather than on the memory length m (span).
Shortcomings of CSO2Cs:
Best known codes: rapid increase of the encoding span with J: m ≥ f(J⁴).
Optimal codes (minimum span m) unknown.
Improvements: span reduction.
Reduce the span by relaxing the conditions on the double orthogonality, at a small degradation of the error performance: Simplified S-CSO2C.
Search and determination of new S-CSO2Cs with minimal spans.
Definition of S-CSO2Cs
The set of connection positions A satisfies:
1. All the differences (α_j − α_k) are distinct;
2. The differences of differences (α_j − α_k) − (α_l − α_n), j ≠ k, k ≠ n, n ≠ l, l ≠ j, are distinct from all the differences (α_r − α_s), r ≠ s;
3. The differences of differences are distinct except for the unavoidable repetitions and a number N_de of avoidable repetitions.
Let N_d be the maximal number of distinct differences of differences, excluding the unavoidable repetitions (a quantity growing on the order of J⁴), and N_de the number of repeated differences of differences (again excluding the unavoidable repetitions). The normalized simplification factor is

β = N_de / N_d, with 0 ≤ β ≤ 1/2.

Search and determination of new short-span S-CSO2Cs yielding a given value of β.
Comparison of Spans of CSO2Cs and S-CSO2Cs

 J   CSO2C m (span)   β        S-CSO2C m (span)
 5   41               0.3818   23
 6   100              0.4333   45
 7   222              0.4416   82
 8   459              0.4828   129
 9   912              0.4895   208
10   1698             0.4917   340
11   3467             0.4539   588
12   5173             0.4632   894
13   9252             0.4193   1217
14   13774            0.4269   1967
15   16503            0.4253   2653
16   34908            0.4313   3532
17   50071            0.4246   4978
18   71858            0.4002   6905
19   107528           0.4053   8748
20   148787           0.3923   9749
Performance Comparison for J = 10 S-CSO2C
[Plot: BER versus Eb/N0 comparing the S-CSO2C with uncoded BPSK, showing the coding gain and the asymptotic coding gain]
Performance Comparison for J = 8 Codes (BP Decoding)
CSO2C: A = {0, 43, 139, 322, 422, 430, 441, 459}
S-CSO2C: A = {0, 9, 22, 55, 95, 124, 127, 129}
[Plot: BER versus Eb/N0 for the two codes]
Performance Comparison CSO2Cs / S-CSO2Cs (TH Decoding)
[Plot: BER versus latency at Eb/N0 = 3.5 dB, 8th iteration; the S-CSO2C curve reaches the same BER at a latency of about 3000 on the latency axis, versus about 14000 for the CSO2C]
Analysis of Orthogonality Properties (span)
Orthogonality properties of the set A:
Simple orthogonality → Convolutional Self-Orthogonal Codes (CSOC): small span m.
Extension to double orthogonality → Convolutional Self-Doubly-Orthogonal Codes (CSO2C): large span, m ≥ f(J⁴).
Relaxed conditions → relaxed double orthogonality → Simplified CSO2C (S-CSO2C): substantial span reduction.
Analysis of Orthogonality Properties (computational tree)
The computational tree represents the symbols used by the decoder to estimate each information symbol in the iterative decoding process.
The error performance is a function of independence versus short cycles.
LLR for the final hard decision:

λ_i^(ν) = y_i^u + Σ_{j=1}^{J} ( y_{i+α_j}^p ⊞ ⊞_{k=1, k≠j}^{J} λ_{i+α_j−α_k}^(ν−1) )

[Computational tree: the decoded symbol at iteration ν branches through indices j, k to the parity symbols p_{t+α_j} at iteration (ν−1), and through indices n, l to p_{t+(α_j−α_k+α_n)} at iteration (ν−2)]
Analysis shows that the parity symbols limit the decoding performance of the iterative decoder because of their degree 1 in the computational tree (no descendant nodes).
Impact: the decoder does not update these values over the iterative decoding process, limiting the error performance.
Simple orthogonality → independence of the inputs over ONE iteration.
Double orthogonality → independence of the inputs over TWO iterations.
Analysis of Orthogonality Properties (cycles)

Codes    | Conditions on the associated sets            | Cycles on graphs
CSOC     | Distinct differences                         | No 4-cycles
CSO2C    | Distinct differences from differences        | Minimization of the number of
         | of differences                               | 6-cycles, uniformly distributed
         | Distinct differences of differences          | Minimization of the number of
         |                                              | 8-cycles, uniformly distributed
S-CSO2C  | A number of repetitions of differences       | A number of additional 8-cycles,
         | of differences                               | approximately uniformly distributed
Summary of Single Register CSO2Cs
Structure of Tanner graphs for iterative decoding:
No 4-cycles.
A minimal number of 6-cycles, which are due to the unavoidable repetitions.
A minimal number of 8-cycles.
Uniform distribution of the 6- and 8-cycles.
Relaxing the doubly-orthogonal conditions of CSO2C adds some 8-cycles, leading to codes with substantially reduced coding spans: S-CSO2C.
Error performance:
Asymptotic coding gain G = 10 log10(R·d_min) = 10 log10((J + 1)/2) dB, corresponding to the minimum Hamming distance d_min = J + 1, at moderate Eb/N0 values.
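The asymptotic gain is simple enough to tabulate directly (the helper name is ours):

```python
import math

def asymptotic_gain_db(R, d_min):
    """Asymptotic coding gain G = 10 log10(R * d_min), in dB."""
    return 10 * math.log10(R * d_min)

# R = 1/2 codes with J connections have d_min = J + 1:
gains = {J: asymptotic_gain_db(0.5, J + 1) for J in (4, 8, 9)}
```

For the J = 9 code shown earlier, this gives 10 log10(5) ≈ 6.99 dB.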
Extension: Recursive Convolutional Doubly-Orthogonal Codes (RCDO)
In order to improve the error performance of the iterative decoding algorithm, the degree of the parity symbols must be increased.
[Computational tree as before: the parity symbols p_{t+α_j} and p_{t+(α_j−α_k+α_n)} appear as degree-1 leaves]
Solution: use recursive convolutional encoders (RCDO).
RCDO codes
RCDO codes are systematic recursive convolutional codes.
RCDO codes can be represented by their sparse parity-check matrix HT(D), with forward connections and feedback connections.
[RCDO encoder example: R = 3/6, with 3 inputs and 6 outputs]
RCDO protograph structure
The parity-check matrix HT(D) completely defines the RCDO code.
The memory m of the RCDO encoder is defined by the largest shift register of the encoder.
Each row of HT(D) represents one output symbol of the encoder.
Each column of HT(D) represents one constraint equation.
The protograph representation of an RCDO code is defined by HT(D).
The degree distributions of the nodes in the protograph are important for the convergence behavior of the decoding algorithm.
Regular RCDO (dv, dc): dv = degree of the variables (rows), dc = degree of the constraints (columns), i.e., the same number of nonzero elements of HT(D) in every row and in every column; otherwise the RCDO protograph is irregular.
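These structural quantities are easy to read off a polynomial parity-check matrix. The matrix below is a hypothetical R = 3/6 example of ours (illustrative structure only, not a code from the talk), with each entry given as the set of delay exponents of the corresponding polynomial:

```python
# Hypothetical HT(D) for R = 3/6: 6 rows (output symbols) x 3 columns
# (constraint equations); entry (i, j) holds the delay exponents of the
# polynomial linking output symbol i to constraint j.
HT = [
    [{0}, {1}, {2}],
    [{1}, {2}, {0}],
    [{2}, {0}, {1}],
    [{0, 3}, {4}, set()],
    [set(), {0, 3}, {4}],
    [{4}, set(), {0, 3}],
]

memory = max(e for row in HT for s in row for e in s)          # encoder memory m
row_deg = [sum(len(s) for s in row) for row in HT]             # variable degrees d_v
col_deg = [sum(len(row[j]) for row in HT) for j in range(3)]   # constraint degrees d_c
regular = len(set(row_deg)) == 1 and len(set(col_deg)) == 1    # (d_v, d_c) regular?
```

This toy HT(D) is (3, 6)-regular with memory m = 4; the doubly-orthogonal difference conditions of the talk are not checked here.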
RCDO doubly-orthogonal conditions
The analysis of the computational tree of RCDO codes shows that, as for the CSO2C, three conditions based on the differences must be respected by the connection positions of the encoder.
For RCDO, the decoding equations are completely independent over 2 decoding iterations.
The estimates of the parity symbols are now improved from iteration to iteration, resulting in improved error performance.
RCDO codes error performance
[Plot: error performance of RCDO (3,6) codes, R = 1/2, at the 25th iteration, against an LDPC code of block length n = 1008 at its 50th iteration and the BP decoder limit; the gap to the limit is 1.10 dB, and the error performance improves with an increasing number of shift registers]
Characteristics: small shift registers; error performance versus number of shift registers; low number of iterations compared to LDPC.
The complexity per decoded symbol of all the decoders associated with the RCDOs (in this figure) is smaller than that offered by the LDPC decoder of block length 1008. Attractive for VLSI implementation.
RCDO codes error performance (continued)
Characteristics: coding rate 15/30; 15 registers; m = 149; regular HT(D) (3,6); 40th iteration.
[Plot: close-to-optimal convergence behavior of the iterative decoder, within 0.4 dB of the BP decoder limit after 40 iterations, with a low error floor]
The asymptotic error performance of RCDO is close to the BP decoder limit.
Comparisons
Error performance comparisons with other existing techniques, at Pb = 10⁻⁵:
RCDO — good error performance at low SNR.
CSO2C — good error performance at moderate SNR.
Figure from: C. Schlegel and L. Perez, Trellis and Turbo Coding, Wiley, 2004.
Comparison of the techniques
Block length N, iterations M.

Implementation complexity    | CSO2C      | RCDO       | LDPC
Encoding                     | Low        | Low        | High
Decoding                     | Low        | Low        | High

Per-iteration processing     | CSO2C      | RCDO       | LDPC
Size of operating window     | N/M        | N/M        | N
Number of decoded bits       | 1          | 1          | N

Error performance            | CSO2C      | RCDO       | LDPC
Eb/N0 (waterfall region)     | Moderate   | Low        | Very small
BER (error floor)            | Moderate   | Low        | Low
Error floor tendency         | Decreasing | Decreasing | Flat
Conclusion
New iterative decoding techniques based on systematic doubly-orthogonal convolutional codes: CSO2C, RCDO.
CSO2C: good error performance at moderate Eb/N0; single shift register encoder; J dominant.
RCDO, recursive doubly-orthogonal convolutional codes: error performance improvement at low Eb/N0; multiple shift register encoder; m dominant; error performance comparable to that of LDPC block codes; simpler encoding and decoding processes; attractive for high-speed VLSI implementations.
Searching for optimal CSO2C & RCDO codes: open problem.