Degree-Matched Check Node Decoding for Regular
and Irregular LDPCs
Sheryl L. Howard, Christian Schlegel and Vincent C. Gaudet
Dept. of Electrical & Computer Engineering
University of Alberta
Edmonton, AB Canada T6G 2V4
Email: sheryl,schlegel,[email protected]
Abstract— This paper examines different parity-check node
decoding algorithms for low-density parity-check (LDPC) codes,
seeking to recoup the performance loss incurred by the min-sum
approximation compared to sum-product decoding. Two degree-matched check node decoding approximations are presented
which depend on the check node degree dc . Both have low
complexity and can be applied to any degree distribution.
Simulation results show near-sum-product decoding performance
for both degree-matched check node approximations, for regular
and irregular LDPCs.
I. INTRODUCTION
Low-density parity-check codes (LDPCs) [1] are a class
of iteratively decoded error-control codes whose capacity-approaching performance and simple decoding algorithm have
resulted in their inclusion in new communications standards
such as DVB-S2, IEEE 802.11n, 802.3an and 802.16. LDPCs
are iteratively decoded by a message-passing algorithm operating between variable and check nodes of the LDPC; each node
type performs its own decoding operation. Typically, LDPCs
are decoded by belief propagation [2], also known as sum-product decoding, or by its suboptimal approximation, the min-sum algorithm [3], which is easily implemented but results in
performance loss.
Several approximations exist that aim to recover some of
the performance loss incurred by the min-sum approximation.
However, none offer a general expression matched to the
degree dc of the check node (the number of variable nodes
connected to a check node). Irregular codes have check nodes
of varying degrees; an approximation that applies a single value to all check nodes either will not be well matched to an irregular code or, if optimized for a specific degree distribution, will be best suited only to that distribution. Moreover,
different rate codes have very different check node degrees.
We seek a general approximation matched to the check node
degree dc , applicable to any regular or irregular LDPC without
requiring optimization to a specific degree distribution.
This paper is organized in the following manner. Section
II discusses several LDPC check node decoding algorithms,
including sum-product decoding and the min-sum approximation. A correction factor between sum-product decoding
and min-sum decoding for degree 3 check nodes is examined,
along with existing approximations to that correction factor. In
Section III, the maximal value for the general correction factor
for any degree check node is presented and used as a basis for
two new degree-matched check node approximations of low
complexity. Simulations for both degree-matched check node
approximations are presented in Section IV, and compared
with the performance of sum-product, min-sum and other
decoding algorithms, for both regular and irregular LDPC
codes. Section V concludes this paper.
II. LDPC CHECK NODE DECODING ALGORITHMS
Several algorithms exist for LDPC decoding at the check
nodes. Sum-product decoding exactly represents the extrinsic
output log-likelihood ratios (LLRs) based on the incoming
LLRs λ and the parity check constraints of each check node.
For a cycle-free code, which is well-approximated by LDPCs
with large girth or cycle length, sum-product decoding is
optimal, performing MAP (maximum a posteriori) decoding.
Sum-product decoding at a parity check node of degree dc
calculates extrinsic output LLR messages Lj→i from check
node j to variable node i, given input LLR messages λk→j
from variable node k to check node j, as
$$L_{j\to i} = 2\tanh^{-1}\!\left(\prod_{\substack{k=1 \\ k\neq i}}^{d_c} \tanh\frac{\lambda_{k\to j}}{2}\right). \qquad (1)$$
The extrinsic principle of excluding the self-message λi→j is
used in all check node decoding algorithms described herein.
A simple approximation to the sum-product algorithm is the
min-sum algorithm [3], [4], calculated as
$$L_{j\to i} = |\lambda_{\min}| \prod_{\substack{k=1 \\ k\neq i}}^{d_c} \operatorname{sign}\left(\lambda_{k\to j}\right), \qquad (2)$$
where λmin is the minimum magnitude extrinsic input LLR.
The min-sum approximation is simple to implement, but costs
some tenths of a dB over sum-product decoding performance.
If all input LLR messages but one are large, the min-sum
output is quite accurate, as the smallest LLR dominates the
tanh product. If not, the min-sum approximation overestimates
the output LLR, compared to sum-product decoding.
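For illustration, a minimal Python sketch of the check node updates in (1) and (2) is given below; it is not taken from the paper, and the vectorized message layout and clipping constant are implementation assumptions.

```python
import numpy as np

def sum_product_check_node(llrs):
    """Sum-product check node update of (1): for each edge i, the extrinsic
    output is 2*atanh of the product of tanh(lambda_k/2) over all k != i."""
    t = np.tanh(np.asarray(llrs, dtype=float) / 2.0)
    out = np.empty_like(t)
    for i in range(len(t)):
        p = np.prod(np.delete(t, i))
        # Clip so atanh stays finite when the product saturates at +/-1.
        out[i] = 2.0 * np.arctanh(np.clip(p, -0.999999, 0.999999))
    return out

def min_sum_check_node(llrs):
    """Min-sum approximation of (2): magnitude is the smallest |lambda_k| over
    k != i, and the sign is the product of the remaining signs."""
    lam = np.asarray(llrs, dtype=float)
    out = np.empty_like(lam)
    for i in range(len(lam)):
        others = np.delete(lam, i)
        out[i] = np.prod(np.sign(others)) * np.min(np.abs(others))
    return out
```

When all but one input LLR is large, the two updates nearly coincide; when several inputs are weak, the min-sum output overestimates the exact value, which motivates the corrections discussed below.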
Several approximations have been developed addressing this
performance loss vs complexity tradeoff between sum-product
decoding and the min-sum approximation. Most start with the
min-sum approximation and reduce the overestimated value
by subtraction of a correction factor or by division.
Another approach, taken in [5] and the λ-min algorithm [6],
lowers the complexity of sum-product decoding by using only
the λ smallest LLR magnitudes in (1). However, the λ-min
algorithm still performs sum-product calculations.
Normalized BP-based decoding [7] divides |λmin | by α > 1,
reducing the overestimation of the min-sum approximation.
The optimal α value depends on the degree (dv , dc ) of
the LDPC, and may be determined by density evolution or
simulation. Results 0.1 dB from sum-product decoding for a
(3,6) length 8000 LDPC were achieved with α=1.25. Dividing
by α=1.25 is equivalent to multiplying by 0.8; approximating 0.8 with 0.75=0.5+0.25, the multiplication can be accomplished by adding the results of a one-bit and a two-bit right shift.
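As a small sketch of this shift-based scaling (an illustration under the assumption of integer sign-magnitude LLRs, not the authors' circuit), multiplying a magnitude by 0.75 reduces to two right shifts and an add:

```python
def scale_magnitude_by_0p75(mag: int) -> int:
    """Approximate |lambda_min| / 1.25 (i.e., x0.8) by x0.75 = 0.5x + 0.25x,
    implemented as a one-bit plus a two-bit right shift of the magnitude."""
    return (mag >> 1) + (mag >> 2)
```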
Since the original submission of this paper, an extension
of normalized BP-based decoding termed 2-D normalization,
introducing multiplicative values for variable nodes as well as
check nodes, has been presented [8]. This method optimizes
multiplicative values for each degree of variable and check
node, and is adapted to irregular codes, in a manner similar
to our degree-matched approximation presented in Section
III. However, unlike our approximation, 2-D normalization
requires optimization over l parameters, where l is the total
number of non-zero variable and check node degrees. In [8],
their normalization vector has l=9 multiplicative values. Optimizing over so many values is computationally intensive.
The 2-D normalization technique is not considered further
in this paper. Its very good performance is outweighed by
the significant computational cost of optimizing over many
parameters, as well as the cost of extending the normalization
to all variable nodes.
Offset BP-based decoding [7] subtracts a value β from the
minimum input LLR magnitude if |λmin | > β, otherwise
outputs a 0. The optimal value of β also depends on the degree
distribution of the LDPC, as determined from density evolution
or simulations. Each check node of the LDPC uses the same
value β. This approximation offers good performance, with
results 0.1 dB away from sum-product decoding for optimal
β, with minimal additional complexity. However, as the same
value β is used at each check node, an optimal value of β
must be determined for each degree distribution.
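A minimal sketch of the offset BP-based magnitude update described above might look as follows (illustrative only; beta is the single globally optimized offset):

```python
def offset_min_sum_magnitude(lam_min_mag: float, beta: float) -> float:
    """Offset BP-based update: subtract beta from |lambda_min| when
    |lambda_min| > beta, otherwise output 0, so the sign never flips."""
    return lam_min_mag - beta if lam_min_mag > beta else 0.0
```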
Determining a correction factor which expresses the difference between sum-product decoding and min-sum decoding
would allow subtraction of the exact amount from |λmin | to
compensate for the min-sum approximation overestimation.
Such a correction factor has been determined for a degree 3
check node [9]. For incoming LLRs λA and λB , the outgoing
extrinsic LLR LC may be expressed directly as the min-sum
approximation plus a correction factor as [10], [11]
$$L_C = \min_{k=A,B}|\lambda_k| \prod_{k=A,B}\operatorname{sign}\left(\lambda_k\right) \;+\; \ln\frac{1+e^{-|\lambda_A+\lambda_B|}}{1+e^{-|\lambda_A-\lambda_B|}}. \qquad (3)$$
This correction factor requires the computation of exponents
of the magnitudes of the sum and difference of the two input
LLRs, and a final log computation.
Equation 3 may be applied exactly to a check node of degree
greater than 3 by subdividing it into multiple degree 3 check
nodes. This is possible because the parity check constraint
enforcement is commutative and associative [9], [13] and
obeys a recursive property
$$L_D = L(A \oplus B \oplus C) = L((A \oplus B) \oplus C) = L(L(A \oplus B) \oplus C) = L(A \oplus L(B \oplus C)), \qquad (4)$$
and similarly for higher degree nodes, as shown by induction.
The correction factor for any check node of degree dc > 3
may be calculated exactly with dc − 2 degree 3 check nodes.
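For illustration (a sketch, not the authors' implementation), the degree-3 rule (3) and its recursive application via (4) to a higher-degree check node can be written as:

```python
import math

def boxplus(la: float, lb: float) -> float:
    """Degree-3 check node rule (3): min-sum term plus the exact correction."""
    ms = math.copysign(1.0, la) * math.copysign(1.0, lb) * min(abs(la), abs(lb))
    cf = math.log1p(math.exp(-abs(la + lb))) - math.log1p(math.exp(-abs(la - lb)))
    return ms + cf

def check_node_exact(llrs, i):
    """Extrinsic output toward edge i of a degree-dc check node, built from
    dc - 2 pairwise degree-3 combinations, as allowed by the recursion in (4)."""
    others = [l for k, l in enumerate(llrs) if k != i]
    acc = others[0]
    for l in others[1:]:
        acc = boxplus(acc, l)
    return acc
```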
A simple approximation to the correction factor of (3) is given as [10]

$$CF_3 \approx \begin{cases} \;\;\,c & \text{if } |\lambda_A+\lambda_B| < 2,\; |\lambda_A-\lambda_B| > 2|\lambda_A+\lambda_B| \\ -c & \text{if } |\lambda_A-\lambda_B| < 2,\; |\lambda_A+\lambda_B| > 2|\lambda_A-\lambda_B| \\ \;\;\,0 & \text{otherwise,} \end{cases} \qquad (5)$$
using the form of [11]. Good performance was obtained using
c = 0.5 [10], implemented recently in [12].
This approximation may be applied once at each check
node, with λA and λB as the two smallest magnitude input
LLRs. For dc > 3, there is some performance loss compared to
using a subdivided check node of dc −2 degree 3 check nodes,
but subdivision is not practical due to increased complexity
and area from replication.
A very similar approximation, with a smaller and easily implemented constraint range, was examined in [11], as

$$CF_3 \approx \begin{cases} \;\;\,0.5 & \text{if } |\lambda_A+\lambda_B| \le 1,\; |\lambda_A-\lambda_B| > 1 \\ -0.5 & \text{if } |\lambda_A-\lambda_B| \le 1,\; |\lambda_A+\lambda_B| > 1 \\ \;\;\,0 & \text{otherwise.} \end{cases} \qquad (6)$$
Use of approximations (5) and (6) as a correction factor to the min-sum approximation is referred to as the modified min-sum algorithm. Both approximations require threshold decisions based on the sum and difference of the two smallest-magnitude input LLRs, and both suffer some performance loss when applied to larger degree check nodes.
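A short sketch of the threshold test in (6), applied to the two smallest-magnitude inputs λA and λB, is shown below (illustrative; the returned value is the term added to the degree-3 min-sum output, as in (3)):

```python
def modified_min_sum_cf(la: float, lb: float) -> float:
    """Correction factor of (6) for the modified min-sum algorithm."""
    if abs(la + lb) <= 1.0 and abs(la - lb) > 1.0:
        return 0.5
    if abs(la - lb) <= 1.0 and abs(la + lb) > 1.0:
        return -0.5
    return 0.0
```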
III. DEGREE-MATCHED CHECK NODE APPROXIMATION
A general expression for check node decoding of arbitrary
degree check nodes, in the form of the minimum input LLR
plus a correction factor as in (3), may be derived. However,
this correction factor is quite complex; as presented in [14], it
is a combined function of exponentials of all the input LLRs,
requiring LUTs to compute both the exponentials and the log
of the final expression. In fact, there are negligible complexity savings over sum-product decoding for a general correction
factor. Therefore, we look at an approximation to this general
correction factor which depends only on the degree of the
individual check node dc and λmin .
In [14], we showed that the maximum value of the general
correction factor is − ln(dc − 1) for large and equal magnitude
input LLRs λ,
$$CF_{\max} \xrightarrow{\;\lambda\to\infty\;} -\ln(d_c-1). \qquad (7)$$
For a dc =6 check node and equal magnitude λ, the correction
factor converges to − ln(dc − 1) for |λ| ≥ 1.5 ln(dc − 1).
If the input LLRs are close in magnitude, the correction
factor is well approximated by the maximum correction factor
value of ln(dc − 1). But if the input LLRs differ in magnitude,
the correction factor decreases, and approaches 0 if |λmin| ≪ |λmin,2|, the next smallest magnitude LLR. Thus, instead of
the maximum correction factor, we use a fraction of it, to
widen the useful range of the approximation. A convenient
choice is ln(dc − 1)/2.
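As a quick numerical illustration (values computed here, not quoted from the paper), the fraction ln(dc − 1)/2 evaluates, for a few common check degrees, to

$$d_c=6:\ \tfrac{1}{2}\ln 5 \approx 0.80, \qquad d_c=7:\ \tfrac{1}{2}\ln 6 \approx 0.90, \qquad d_c=16:\ \tfrac{1}{2}\ln 15 \approx 1.35.$$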
The degree-matched check node approximation subtracts a
positive correction factor CF from |λmin |, with CF < |λmin |,
thus avoiding any sign change.
TABLE 1
DENSITY EVOLUTION THRESHOLDS IN SNR.

Check Node     A: (3,6)               B: (3,16)           C: irr
Sum-Product    1.11 dB                2.69 dB             0.5 dB
Two-step DM    1.20 dB                2.77 dB             0.6 dB
One-step DM    1.21 dB                2.78 dB             0.75 dB
Normalized     α=1.25, 1.20 dB [7]    α=1.33, 2.77 dB     α=1.18, 1.15 dB
Offset         β=0.15, 1.23 dB [7]    β=0.7, 2.76 dB      β=0.35, 0.95 dB
Min-Sum        1.71 dB                2.97 dB             1.35 dB

Two-step Degree-Matched Check Node Approximation:
if $|\lambda_{\min}| \ge 1.5\ln(d_c-1)$ and $|\lambda_{\min,2}| - |\lambda_{\min}| \le 2$, then
$$L_{j\to i} = \left(|\lambda_{\min}| - \frac{\ln(d_c-1)}{2}\right) \prod_{\substack{k=1 \\ k\neq i}}^{d_c} \operatorname{sign}\left(\lambda_{k\to j}\right);$$
else if $1.5\ln(d_c-1) > |\lambda_{\min}| \ge 3\ln(d_c-1)/8$ and $|\lambda_{\min,2}| - |\lambda_{\min}| \le 3$, then
$$L_{j\to i} = \left(|\lambda_{\min}| - \frac{\ln(d_c-1)}{4}\right) \prod_{\substack{k=1 \\ k\neq i}}^{d_c} \operatorname{sign}\left(\lambda_{k\to j}\right);$$
else
$$L_{j\to i} = |\lambda_{\min}| \prod_{\substack{k=1 \\ k\neq i}}^{d_c} \operatorname{sign}\left(\lambda_{k\to j}\right). \qquad (8)$$
The two-step degree-matched check node approximation has
two decision thresholds to extend the useful range of the
approximation; both thresholds are based on |λmin | and its
distance from |λmin,2 |.
To further simplify this approximation, we consider a one-step degree-matched check node approximation with only one
decision threshold, dependent only on |λmin |.
One-step Degree-Matched Check Node Approximation:
if $|\lambda_{\min}| \ge 3\ln(d_c-1)/8$, then
$$L_{j\to i} = \left(|\lambda_{\min}| - \frac{\ln(d_c-1)}{4}\right) \prod_{\substack{k=1 \\ k\neq i}}^{d_c} \operatorname{sign}\left(\lambda_{k\to j}\right);$$
else
$$L_{j\to i} = |\lambda_{\min}| \prod_{\substack{k=1 \\ k\neq i}}^{d_c} \operatorname{sign}\left(\lambda_{k\to j}\right). \qquad (9)$$
The one-step degree-matched check node approximation is simpler than either the two-step degree-matched approximation or the modified min-sum approximations of Section II, as it eliminates the constraint on the distance between |λmin| and |λmin,2|. That constraint ensures that the approximation is not applied when the actual correction factor is zero, which occurs when |λmin| is significantly smaller than |λmin,2|; its elimination means the approximation may be applied when it is not needed, decreasing the extrinsic output below that of sum-product decoding. However, the reduced complexity of the one-step approximation, with only a single threshold dependent only on |λmin|, is significant, justifying its consideration. Its complexity is equivalent to that of offset BP-based decoding; however, the one-step degree-matched approximation does not require parameter optimization based on degree distribution.
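To make the two rules concrete, a minimal Python sketch of (8) and (9) follows; it is an illustration rather than the reference implementation, and it assumes that |λmin|, |λmin,2| and the extrinsic sign product over k ≠ i have already been extracted as in (2).

```python
import math

def two_step_dm(lam_min, lam_min2, sign_prod, dc):
    """Two-step degree-matched update (8): thresholds on |lambda_min| and on
    its distance from the second-smallest magnitude |lambda_min,2|."""
    cf = math.log(dc - 1)
    gap = lam_min2 - lam_min
    if lam_min >= 1.5 * cf and gap <= 2.0:
        return sign_prod * (lam_min - cf / 2.0)
    if 1.5 * cf > lam_min >= 3.0 * cf / 8.0 and gap <= 3.0:
        return sign_prod * (lam_min - cf / 4.0)
    return sign_prod * lam_min

def one_step_dm(lam_min, sign_prod, dc):
    """One-step degree-matched update (9): a single threshold on |lambda_min|."""
    cf = math.log(dc - 1)
    if lam_min >= 3.0 * cf / 8.0:
        return sign_prod * (lam_min - cf / 4.0)
    return sign_prod * lam_min
```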
The optimal decision threshold is determined via density
evolution or simulations. Density evolution [15] calculates
the threshold for a degree distribution of regular or irregular
cycle-free LDPCs of infinite length, by tracking the iterative
evolution of the error probabilities out of the node algorithms,
independent of specific code structure. The threshold of a degree distribution is the signal-to-noise ratio (SNR) marking the
boundary below which the error probability fails to converge
to zero with increasing iterations.
The threshold is an asymptotic value, approached by very
long codes. However, the threshold provides a good measure of
comparison between different degree distributions, or different
decoding algorithms. In a manner similar to [7] for their
density evolution analysis of the offset-BP-based algorithm,
we examined different values of correction factor and decision
threshold for the one-step degree-matched approximation, and
determined the values used to be near-optimal over a large
range of dc in terms of threshold. Table 1 shows threshold
results in terms of SNR for three degree distributions: regular
rate 1/2 (3,6), rate 0.8125 (3,16), and a rate 1/2 irregular distribution with variable degree distribution λ2=0.277, λ3=0.283, λ9=0.44 and check degree distribution ρ6=0.016, ρ7=0.852, ρ8=0.132.
Results in Section IV use the same distributions.
Density evolution results show that, for regular codes, the degree-matched approximations and the normalized and offset BP-based approximations all have thresholds near that of sum-product decoding. For the irregular distribution, however, the degree-matched approximations show significantly better thresholds than the normalized and offset approximations. This is expected, as the degree-matched approximations adapt the correction factor to each check node's degree, rather than applying one global value at all check nodes.
IV. SIMULATIONS
Simulation results are presented for the degree-matched
approximations and some of the other check node decoding
algorithms described in this paper. Results are shown for a
rate 0.5 length 1024 regular (3,6) LDPC, a rate 0.8125 length
2048 regular (3,16) LDPC, and a rate 0.5 length 1000 irregular
LDPC. BPSK transmission of the all-zeros codeword over an
AWGN channel with noise variance N0 /2 was used.
Bit error rate (BER) performance results are shown vs SNR,
with SNR in dB defined as SNR=10 log10 (Eb /N0 ), where
Eb =Es /R. Each simulation point counts at least 50 frame
errors. A maximum of 64 decoding iterations was used.
The modified min-sum algorithm uses (6), with an additional constraint, |λmin| > 0.5, required before the correction factor is applied, added to prevent sign switching.
A. Floating Point Simulations
Figure 1 compares decoding performance of sum-product
decoding, min-sum decoding, normalized BP-based decoding
with α=1.25 and offset BP-based decoding with β=0.15 (α and β found optimal for (3,6) LDPCs in [7]), the modified min-sum
approximation of (6), the two-step degree-matched approximation of (8), and the one-step degree-matched approximation of
(9) for a length 1024 regular (3,6) LDPC.
Fig. 1. Degree-Matched Check Node Approximation Results, SNR vs BER for Rate 0.5 N=1024 Regular (3,6) LDPC, Compared to Sum-Product, Min-Sum, Normalized and Offset BP-based and Modified Min-Sum Decoding.

Fig. 2. Degree-Matched Check Node Approximation Results, SNR vs BER for Rate 0.5 N=1000 Irregular LDPC, Compared to Sum-Product, Offset, Normalized, Modified Min-Sum and Min-Sum Decoding.
The two-step degree-matched check node approximation
provides sum-product decoding results or better at higher SNR.
The one-step degree-matched approximation shows slight performance loss compared to the two-step approximation, but
still achieves near-optimal results with low complexity. Approximate decoding of short codes at high SNR can sometimes
provide better results than sum-product decoding, as seen also
in [8] and [16]. Normalized BP-based decoding also achieves
sum-product performance. The modified min-sum algorithm loses 0.1 dB, offset BP-based decoding loses 0.15 dB, and the
min-sum algorithm loses 0.4 dB.
Results for a rate 0.5 length 1000 irregular LDPC of
degree distribution C in Table 1 are shown in Figure 2. The
degree-matched approximations are compared with min-sum,
normalized, offset and sum-product algorithms.
Sum-product performance is achieved at a BER ≤ 5 × 10^-4 for the two-step and ≤ 10^-4 for the one-step degree-matched approximations, similar to the regular LDPC. Normalized decoding sees a loss of 0.2 dB, while offset and modified min-sum decoding show a loss of 0.05-1 dB, for BER ≤ 10^-4.
Figure 3 shows simulation results for a rate 0.8125 length 2048 regular (3,16) LDPC. The degree-matched approximations are compared with min-sum, modified min-sum, normalized, offset and sum-product algorithms. Sum-product decoding results are achieved by both the two-step and one-step degree-matched and the normalized BP-based approximations at BER ≤ 4 × 10^-5, but show 0.1 dB loss at higher BER. The modified min-sum algorithm has 0.1 dB loss, and the offset BP-based approximation loses 0.05 dB for BER ≤ 10^-4.
B. Finite Precision Simulations
Finite precision simulations were also examined. Quantization to m bits, with one sign bit and m − 1 magnitude bits, was used, resulting in 2^m quantization bins. A maximum bin value of B = 8 was chosen, so that all channel LLRs with |λch| ≥ 8 are quantized to magnitude 8. The remaining quantization bins are evenly spaced over the range [−B, B], with bin edges at ±lB/(2^(m−1) − 1), for l = 0, ..., 2^(m−1) − 1. The center value of each bin is used for quantizing LLRs within the bin.
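A small sketch of one plausible reading of this quantizer (rounding the saturated magnitude to the nearest reconstruction level; the exact bin-center convention of the paper may differ) is:

```python
import numpy as np

def quantize_llr(x, m_bits=6, B=8.0):
    """Sign-magnitude quantizer with m-1 magnitude bits: saturate |LLR| at B,
    then map to one of 2^(m-1)-1 evenly spaced nonzero magnitude levels."""
    levels = 2 ** (m_bits - 1) - 1      # nonzero magnitude levels
    step = B / levels                   # spacing B/(2^(m-1)-1)
    mag = np.minimum(np.abs(x), B)      # LLRs beyond +/-B map to magnitude B
    return np.sign(x) * np.round(mag / step) * step
```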
The channel LLRs are quantized, as are all extrinsic LLRs
into and out of the variable and check nodes, with the same
precision and quantization bins. The decoding algorithms at
each node are performed using floating point operations.
Both the rate 0.5 (3,6) and the irregular LDPCs were examined. Sum-product decoding, two-step and one-step degree-matched, normalized and offset BP-based, and min-sum approximations were used. Finite precision simulations showed
that all algorithms required 6 bits of precision (1 sign, 5 magnitude bits) to achieve near-floating point results, for both codes.
At 5 bits, all algorithms saw about 0.1 dB loss compared
to floating point. With 4 bits, sum-product decoding and the
two-step degree-matched approximation show 0.4 dB loss in
performance for the irregular code and 0.15 dB loss for the
regular code, but the one-step degree-matched approximation
loses 0.65 dB compared to floating point. This increased
loss is due to larger bin sizes with lower bit precision; the
amount subtracted off, ln(dc − 1)/4, is too small to move the
quantized value to the next lowest bin. The one-step degree-matched approximation converges to min-sum performance at this point. Subtracting off the bin size instead counteracts this, if B/(2^(m−1) − 1) > ln(dc − 1)/2, but only improves
performance by 0.1 dB.
C. Check Node ASIC Design Area
Four VHDL check node algorithms were synthesized for a 90 nm CMOS process using the STMicroelectronics design kit and Synopsys' Design Compiler version 2004.12-SP4. Sum-product decoding, min-sum, one-step degree-matched and the normalized approximation were synthesized for a single degree 6 check node with 4-bit sign-magnitude parallel input messages. The normalized approximation uses two different approximation factors, one closer to the optimal 1/α value of 0.8,
and one which is easily implemented. Table 2 shows our area estimates for the different check node decoding algorithms.

TABLE 2
CORE AREA OF CHECK NODE ALGORITHMS IN 90 NM CMOS PROCESS.

Check Node Algorithm        Area (µm²)
Sum-Product                 2123
One-step Degree-Matched      867
Normalized, 1/α = 0.8125    1014
Normalized, 1/α = 0.75       823
Min-Sum                      612

Fig. 3. Degree-Matched Check Node Approximation Results, SNR vs BER for Rate 0.8125 N=2048 Regular (3,16) LDPC, Compared to Sum-Product, Modified Min-Sum, Normalized, Offset and Min-Sum Decoding.
The min-sum check node requires the least area, at 29%
of the sum-product node area. The one-step degree-matched
check node has 41% the area of a sum-product node and
provides near-optimal performance. For a near-optimal value
of α, the normalized node takes 48% of the sum-product
area, but using an easily-implemented, less-optimal α value
reduces the normalized node to 39% of sum-product node
area, with minimal loss. However, for irregular codes such
as distribution C, there is significant performance loss for the
normalized approximation. For slight increase in area over a
min-sum node, the one-step degree-matched check node offers
performance near sum-product decoding for both regular and
irregular LDPCs.
V. CONCLUSIONS
This paper examines several LDPC check node decoding
approximations which recoup much of the min-sum approximation’s loss, yet keep its simplicity. The degree-matched
check node approximations developed here are matched to
individual check node degrees. These approximations can
thus be applied to any regular or irregular LDPC, without
optimization of parameters to a specific degree distribution.
They are particularly well-suited to multi-rate applications.
Two versions of this approximation were presented: a two-step degree-matched approximation and a simpler one-step
approximation. Both provide near-sum-product decoding results, with slight improvement for the two-step approximation.
The one-step approximation, however, is much simpler and
recoups nearly all the loss of the min-sum algorithm, in 41%
of the area required by the sum-product algorithm. Similar area
and performance can be obtained by the offset and normalized
approximations for regular LDPCs, but these approximations
show performance degradation for some irregular distributions.
Additionally, the offset and normalization parameters must be optimized, by either density evolution or simulation, for every new degree distribution, which is a time-consuming process, especially for multi-rate systems. The one-step degree-matched approximation shows good performance for both regular and irregular distributions.

ACKNOWLEDGMENTS
Many thanks to Tyler Brandon Lee, Ramkrishna Swamy,
and Robert Hang, for Synopsys estimates and VHDL; to
Siavash Zeinoddin, for density evolution; to Dr. Stephen Bates,
for his steady interest in this project; and to the reviewers and
editors, for their helpful suggestions. Financial support from
Alberta iCORE and NSERC is gratefully acknowledged.
REFERENCES
[1] R. G. Gallager, “Low-density parity-check codes”, IRE Trans. Information Theory, vol. 8, pp. 21-28, Jan. 1962.
[2] J. Pearl, Probabilistic Reasoning in Intelligent Systems: Networks of
Plausible Inference, Morgan Kaufman, San Mateo, CA, 1988.
[3] N. Wiberg, Codes and Decoding on General Graphs. PhD thesis,
Linköping University, Linköping, Sweden, 1996.
[4] M. P. C. Fossorier, M. Mihaljević and H. Imai, “Reduced complexity
iterative decoding of low-density parity-check codes based on belief
propagation”, IEEE Trans. Commun., pp. 673-680, May 1999.
[5] X.-Y. Hu and T. Mittelholzer, “An ordered-statistics-based approximation of the sum-product algorithm”, Proc. Intl. Telecommun. Sym. (ITS)
2002, Brazil, 2002.
[6] F. Guilloud, E. Boutillon, J.-L. Danger, “λ-Min Decoding Algorithm of
Regular and Irregular LDPC Codes”, Proc. of 3rd Int. Sym. on Turbo
Codes (ISTC 03), pp. 451-454, Brest, France, 1-5 Sept. 2003.
[7] J. Chen and M.P.C. Fossorier, “Density Evolution for Two Improved BP-based Decoding Algorithms of LDPC Codes”, IEEE Comm. Letters, vol.
6, no. 5, pp. 208-210, May 2002.
[8] J. Zhang, M. Fossorier, D. Gu and J. Zhang, “Two-dimensional Correction for Min-Sum Decoding of Irregular LDPC Codes”, IEEE Comm.
Letters, pp. 180-182, March 2006.
[9] G. Battail and H. M. S. El-Sherbini, “Coding for radio channels”, Ann.
Télécommun., vol. 37, pp. 75-96, Jan./Feb. 1982.
[10] E. Eleftheriou, T. Mittelholzer and A. Dholakia, “Reduced-complexity
decoding algorithm for low-density parity-check codes”, Electronics Letters, vol. 37, no. 2, pp. 102-104, Jan. 2001.
[11] A. Anastasopoulos, “A comparison between the sum-product and the
min-sum iterative detection algorithms based on density evolution”,
Proc. IEEE GlobeCom 2001, vol. 2, pp. 1021-1025, Nov. 25-29, 2001.
[12] L. Yang, H. Liu and C.-J. R. Shi, “Code Construction and FPGA
Implementation of a Low-Error-Floor Multi-Rate Low-Density Parity-Check Code Decoder”, IEEE Trans. Circuits & Systems I, vol. 53, no.
4, pp. 892-904, April 2006.
[13] J. Hagenauer, E. Offer and L. Papke, “Iterative Decoding of Binary
Block and Convolutional Codes”, IEEE Trans. on Inform. Theory, vol.
42, pp. 429-445, March 1996.
[14] S. Howard, C. Schlegel, V. Gaudet, “A Degree-Matched Check Node
Approximation for LDPC Decoding”, Proc. IEEE Intl. Sym. Inf. Theory
(ISIT) 2005, Adelaide, AU, Sept. 4-9, 2005.
[15] T.J. Richardson and R.L. Urbanke, “The Capacity of Low-Density
Parity-Check Codes Under Message-Passing Decoding”, IEEE Trans.
Information Theory, vol. 47, no. 2, pp. 599-618, Feb. 2001.
[16] J. Chen, A. Dholakia, E. Eleftheriou, M.P.C. Fossorier and X.-Y.
Hu, “Reduced-Complexity Decoding of LDPC Codes”, IEEE Trans.
Communications, vol. 53, no. 8, pp. 1288-1299, Aug. 2005.