Introduction to Coding Theory - From the Decoding Aspect

Yunghsiang S. Han
Graduate Institute of Communication Engineering, National Taipei University, Taiwan
E-mail: [email protected]

Digital Communication System

Digitized information -> Encoder -> Modulator -> Channel (errors introduced) -> Demodulator -> Decoder -> Received information

Channel Model

1. The time-discrete memoryless channel (TDMC) is a channel specified by an arbitrary input space A, an arbitrary output space B, and, for each element a in A, a conditional probability measure on every element b in B that is independent of all other inputs and outputs.

2. An example of a TDMC is the additive white Gaussian noise channel (AWGN channel). Another commonly encountered channel is the binary symmetric channel (BSC).

AWGN Channel

1. Antipodal signaling is used in the transmission of binary signals over the channel.

2. A 0 is transmitted as $+\sqrt{E}$ and a 1 is transmitted as $-\sqrt{E}$, where $E$ is the signal energy per channel bit.

3. The input space is $A = \{0, 1\}$ and the output space is $B = \mathbb{R}$.

4. When a sequence of input elements $(c_0, c_1, \ldots, c_{n-1})$ is transmitted, the sequence of output elements $(r_0, r_1, \ldots, r_{n-1})$ is
$$r_j = (-1)^{c_j}\sqrt{E} + e_j, \quad j = 0, 1, \ldots, n-1,$$
where $e_j$ is a noise sample of a Gaussian process with single-sided noise power per hertz $N_0$.

5. The variance of $e_j$ is $N_0/2$ and the signal-to-noise ratio (SNR) for the channel is $\gamma = E/N_0$.

6. $$\Pr(r_j \mid c_j) = \frac{1}{\sqrt{\pi N_0}}\, e^{-\frac{\left(r_j - (-1)^{c_j}\sqrt{E}\right)^2}{N_0}}.$$

[Figure: the conditional probability density functions $\Pr(r_j \mid 1)$ and $\Pr(r_j \mid 0)$ plotted versus $r_j$, with the signal energy per channel bit $E$ normalized to 1.]

Binary Symmetric Channel

1. The BSC is characterized by a probability $p$ of bit error such that the probability of a transmitted 0 being received as a 1 is the same as that of a transmitted 1 being received as a 0.

2. The BSC may be treated as a simplified version of other symmetric channels. In the case of the AWGN channel, we may assign
$$p = \int_0^{\infty} \Pr(r_j \mid 1)\, dr_j = \int_{-\infty}^{0} \Pr(r_j \mid 0)\, dr_j = \int_0^{\infty} \frac{1}{\sqrt{\pi N_0}}\, e^{-\frac{(r_j+\sqrt{E})^2}{N_0}}\, dr_j = Q\!\left((2E/N_0)^{1/2}\right),$$
where
$$Q(x) = \int_x^{\infty} \frac{1}{\sqrt{2\pi}}\, e^{-\frac{y^2}{2}}\, dy.$$

[Figure: the binary symmetric channel; each input bit is received correctly with probability $1-p$ and flipped with probability $p$.]
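The relation $p = Q\!\left((2E/N_0)^{1/2}\right)$ can be checked numerically. The following Python sketch (standard library only; the helper name Q and the values chosen for E and N0 are illustrative assumptions, not taken from the slides) compares the closed form with a Monte-Carlo estimate of the hard-decision error rate for antipodal signaling over the AWGN channel.

```python
# A minimal numerical check of the BSC crossover probability derived above;
# E is normalized to 1 as in the figure, and N0 is an arbitrary illustrative value.
import math, random

def Q(x):
    # Q(x) = integral_x^inf (1/sqrt(2*pi)) e^{-y^2/2} dy = 0.5 * erfc(x / sqrt(2))
    return 0.5 * math.erfc(x / math.sqrt(2.0))

E, N0 = 1.0, 0.5                      # signal energy per channel bit and one-sided noise density
p_analytic = Q(math.sqrt(2.0 * E / N0))

# Monte-Carlo estimate: transmit 0 as +sqrt(E), add Gaussian noise of variance N0/2,
# and count how often the hard decision (r < 0 -> decide 1) is wrong.
random.seed(1)
trials = 200_000
errors = sum(1 for _ in range(trials)
             if math.sqrt(E) + random.gauss(0.0, math.sqrt(N0 / 2.0)) < 0.0)
print(f"p = Q(sqrt(2E/N0)) = {p_analytic:.4f}, simulated = {errors / trials:.4f}")
```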
Binary Linear Block Code (BLBC)

1. An (n, k) binary linear block code is a k-dimensional subspace of the n-dimensional vector space $V^n = \{(c_0, c_1, \ldots, c_{n-1}) \mid c_j \in GF(2) \text{ for all } j\}$; n is called the length of the code and k its dimension.

2. Example: a (6, 3) code
C = {000000, 100110, 010101, 001011, 110011, 101101, 011110, 111000}

Generator Matrix

1. An (n, k) BLBC can be specified by any set of k linearly independent codewords $c_0, c_1, \ldots, c_{k-1}$. If we arrange the k codewords into a $k \times n$ matrix G, then G is called a generator matrix for C.

2. Let $u = (u_0, u_1, \ldots, u_{k-1})$, where $u_j \in GF(2)$. Then $c = (c_0, c_1, \ldots, c_{n-1}) = uG$.

3. The generator matrix $G'$ of a systematic code has the form $[I_k\ A]$, where $I_k$ is the $k \times k$ identity matrix.

4. $G'$ can be obtained by permuting the columns of G and by performing row operations on G. We say that the code generated by $G'$ is an equivalent code of the one generated by G.

Example

Let
$$G = \begin{pmatrix} 1 & 0 & 0 & 1 & 1 & 0 \\ 0 & 1 & 0 & 1 & 0 & 1 \\ 0 & 0 & 1 & 0 & 1 & 1 \end{pmatrix}$$
be a generator matrix of C and
$$H = \begin{pmatrix} 1 & 1 & 0 & 1 & 0 & 0 \\ 1 & 0 & 1 & 0 & 1 & 0 \\ 0 & 1 & 1 & 0 & 0 & 1 \end{pmatrix}$$
be a parity-check matrix of C. Then $c = uG$ with
u ∈ {000, 100, 010, 001, 110, 101, 011, 111}
gives
C = {000000, 100110, 010101, 001011, 110011, 101101, 011110, 111000}.

Parity-Check Matrix

1. A parity check for C is an equation of the form $a_0 c_0 \oplus a_1 c_1 \oplus \cdots \oplus a_{n-1} c_{n-1} = 0$, which is satisfied for every $c = (c_0, c_1, \ldots, c_{n-1}) \in C$.

2. The collection of all such vectors $a = (a_0, a_1, \ldots, a_{n-1})$ forms a subspace of $V^n$. It is denoted by $C^\perp$ and is called the dual code of C.

3. The dimension of $C^\perp$ is $n-k$, and $C^\perp$ is an (n, n-k) BLBC. Any generator matrix of $C^\perp$ is a parity-check matrix for C and is denoted by H.

4. $cH^T = 0_{1\times(n-k)}$ for any $c \in C$.

5. Let $G = [I_k\ A]$. Since $cH^T = uGH^T = 0$, $GH^T$ must be 0. If $H = [-A^T\ I_{n-k}]$, then
$$GH^T = [I_k\ A]\,[-A^T\ I_{n-k}]^T = [I_k\ A]\begin{bmatrix} -A \\ I_{n-k} \end{bmatrix} = -A + A = 0_{k\times(n-k)}.$$
Thus the above H is a parity-check matrix.

6. Let c be the transmitted codeword and y the binary received vector after quantization. The vector $e = c \oplus y$ is called an error pattern.

7. Let $y = c \oplus e$. Then $s = yH^T = (c \oplus e)H^T = eH^T$, which is called the syndrome of y.

8. Let $S = \{s \mid s = yH^T \text{ for all } y \in V^n\}$ be the set of all syndromes. Then $|S| = 2^{n-k}$.
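As a quick illustration of $c = uG$ and $s = yH^T = eH^T$, the sketch below (Python with numpy assumed; variable names are illustrative) regenerates the eight codewords of the (6, 3) example code and shows that a corrupted word has the same syndrome as its error pattern.

```python
# Regenerate the (6,3) example code from G and check the syndrome relation s = yH^T = eH^T.
import numpy as np
from itertools import product

G = np.array([[1, 0, 0, 1, 1, 0],
              [0, 1, 0, 1, 0, 1],
              [0, 0, 1, 0, 1, 1]])
H = np.array([[1, 1, 0, 1, 0, 0],
              [1, 0, 1, 0, 1, 0],
              [0, 1, 1, 0, 0, 1]])

# c = uG over GF(2) for all 2^k information vectors u
codewords = sorted(''.join(map(str, np.dot(u, G) % 2))
                   for u in product([0, 1], repeat=3))
print(codewords)                      # the eight codewords of C

# every codeword satisfies cH^T = 0, and a corrupted word y = c + e has syndrome eH^T
c = np.array([1, 0, 0, 1, 1, 0])
e = np.array([0, 0, 1, 0, 0, 0])      # single error in position 2
y = (c + e) % 2
print((c @ H.T) % 2, (y @ H.T) % 2, (e @ H.T) % 2)   # [0 0 0], then the same syndrome twice
```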
Hamming Weight and Hamming Distance (1)

1. The Hamming weight (or simply weight) of a codeword c, $W_H(c)$, is the number of 1's (the nonzero components) of the codeword.

2. The Hamming distance between two codewords c and $c'$ is defined as $d_H(c, c') =$ the number of components in which c and $c'$ differ.

3. $d_H(c, 0) = W_H(c)$.

4. Let HW be the set of all distinct Hamming weights that codewords of C may have. Furthermore, let HD(c) be the set of all distinct Hamming distances between c and any codeword. Then HW = HD(c) for any $c \in C$.

5. $d_H(c, c') = d_H(c \oplus c', 0) = W_H(c \oplus c')$.

6. If C and $C'$ are equivalent to each other, then the HW for C is the same as that for $C'$.

7. The smallest nonzero element in HW is referred to as $d_{\min}$.

8. Let the column vectors of H be $\{h_0, h_1, \ldots, h_{n-1}\}$. Then
$$cH^T = (c_0, c_1, \ldots, c_{n-1})\,[h_0\ h_1\ \cdots\ h_{n-1}]^T = c_0 h_0 + c_1 h_1 + \cdots + c_{n-1} h_{n-1}.$$

9. If c is of weight w, then $cH^T$ is a linear combination of w columns of H.

10. $d_{\min}$ is the minimum number of columns of H whose nontrivial linear combination is zero.

Hamming Weight and Hamming Distance (2)

C = {000000, 100110, 010101, 001011, 110011, 101101, 011110, 111000}

HW = HD(c) = {0, 3, 4} for all $c \in C$

$d_H(001011, 110011) = d_H(111000, 000000) = W_H(111000) = 3$

Digital Communication System Revisited

[Figure: $u \in \{0,1\}^k$ -> Encoder -> $c \in \{0,1\}^n$ -> Modulator -> $m(c) \in \{1,-1\}^n$ -> AWGN channel -> $r \in \mathbb{R}^n$ -> Demodulator/Decoder -> $\hat{c} \in \{0,1\}^n$ -> $\hat{u} \in \{0,1\}^k$.]

Maximum-Likelihood Decoding Rule (MLD Rule) for Word-by-Word Decoding (1)

1. The goal of decoding: set $\hat{c} = c_\ell$, where $c_\ell \in C$ and $\Pr(c_\ell \mid r) \ge \Pr(c \mid r)$ for all $c \in C$.

2. If all codewords of C have equal probability of being transmitted, then maximizing $\Pr(c \mid r)$ is equivalent to maximizing $\Pr(r \mid c)$, the probability that r is received when c is transmitted, since
$$\Pr(c \mid r) = \frac{\Pr(r \mid c)\Pr(c)}{\Pr(r)}.$$

3. A maximum-likelihood decoding rule (MLD rule), which minimizes the error probability when each codeword is transmitted equiprobably, decodes a received vector r to a codeword $c_\ell \in C$ such that $\Pr(r \mid c_\ell) \ge \Pr(r \mid c)$ for all $c \in C$.

Maximum-Likelihood Decoding Rule (MLD Rule) for Word-by-Word Decoding (2)

For a time-discrete memoryless channel, the MLD rule can be formulated as: set $\hat{c} = c_\ell$, where $c_\ell = (c_{\ell 0}, c_{\ell 1}, \ldots, c_{\ell(n-1)}) \in C$ and
$$\prod_{j=0}^{n-1} \Pr(r_j \mid c_{\ell j}) \ge \prod_{j=0}^{n-1} \Pr(r_j \mid c_j) \quad \text{for all } c \in C.$$
Let $S(c, c_\ell) \subseteq \{0, 1, \ldots, n-1\}$ be defined by $j \in S(c, c_\ell)$ iff $c_{\ell j} \ne c_j$. Then the MLD rule can be written as: set $\hat{c} = c_\ell$, where $c_\ell \in C$ and
$$\sum_{j \in S(c, c_\ell)} \ln \frac{\Pr(r_j \mid c_{\ell j})}{\Pr(r_j \mid c_j)} \ge 0 \quad \text{for all } c \in C.$$

Maximum-Likelihood Decoding Rule (MLD Rule) for Word-by-Word Decoding (3)

1. The bit log-likelihood ratio of $r_j$ is
$$\phi_j = \ln \frac{\Pr(r_j \mid 0)}{\Pr(r_j \mid 1)}.$$

2. Let $\phi = (\phi_0, \phi_1, \ldots, \phi_{n-1})$. The absolute value of $\phi_j$ is called the reliability of position j of the received vector.

3. For the AWGN channel,
$$\Pr(r_j \mid 0) = \frac{1}{\sqrt{\pi N_0}}\, e^{-\frac{(r_j - \sqrt{E})^2}{N_0}} \quad \text{and} \quad \Pr(r_j \mid 1) = \frac{1}{\sqrt{\pi N_0}}\, e^{-\frac{(r_j + \sqrt{E})^2}{N_0}},$$
so
$$\phi_j = \ln \frac{\Pr(r_j \mid 0)}{\Pr(r_j \mid 1)} = \frac{4\sqrt{E}}{N_0}\, r_j \quad \text{and hence} \quad \phi = \frac{4\sqrt{E}}{N_0}\, r.$$

4. For the BSC,
$$\phi_j = \begin{cases} \ln \frac{1-p}{p} & \text{if } r_j = 0; \\ \ln \frac{p}{1-p} & \text{if } r_j = 1. \end{cases}$$
Maximum-Likelihood Decoding Rule (MLD Rule) for Word-by-Word Decoding (4)

$$\sum_{j \in S(c, c_\ell)} \ln \frac{\Pr(r_j \mid c_{\ell j})}{\Pr(r_j \mid c_j)} \ge 0$$
$$\iff 2 \sum_{j \in S(c, c_\ell)} \ln \frac{\Pr(r_j \mid c_{\ell j})}{\Pr(r_j \mid c_j)} \ge 0$$
$$\iff \sum_{j \in S(c, c_\ell)} \left((-1)^{c_{\ell j}} \phi_j - (-1)^{c_j} \phi_j\right) \ge 0$$
$$\iff \sum_{j=0}^{n-1} (-1)^{c_{\ell j}} \phi_j \ge \sum_{j=0}^{n-1} (-1)^{c_j} \phi_j$$
$$\iff \sum_{j=0}^{n-1} \left(\phi_j - (-1)^{c_{\ell j}}\right)^2 \le \sum_{j=0}^{n-1} \left(\phi_j - (-1)^{c_j}\right)^2$$

Maximum-Likelihood Decoding Rule (MLD Rule) for Word-by-Word Decoding (5)

1. The MLD rule can be written as: set $\hat{c} = c_\ell$, where $c_\ell \in C$ and
$$\sum_{j=0}^{n-1} \left(\phi_j - (-1)^{c_{\ell j}}\right)^2 \le \sum_{j=0}^{n-1} \left(\phi_j - (-1)^{c_j}\right)^2 \quad \text{for all } c \in C.$$

2. We will say that $c_\ell$ is the "closest" codeword to $\phi$.

Maximum-Likelihood Decoding Rule (MLD Rule) for Word-by-Word Decoding (6)

1. Let m be the function such that $m(c) = ((-1)^{c_0}, \ldots, (-1)^{c_{n-1}})$.

2. Let $\langle \phi, m(c) \rangle = \sum_{j=0}^{n-1} (-1)^{c_j} \phi_j$ be the inner product between $\phi$ and $m(c)$.

3. The MLD rule can be written as: set $\hat{c} = c_\ell$, where $c_\ell \in C$ and $\langle \phi, m(c_\ell) \rangle \ge \langle \phi, m(c) \rangle$ for all $c \in C$.

Maximum-Likelihood Decoding Rule (MLD Rule) for Word-by-Word Decoding (7)

Let $y = (y_0, y_1, \ldots, y_{n-1})$ be the hard decision of $\phi$. That is,
$$y_j = \begin{cases} 1 & \text{if } \phi_j < 0; \\ 0 & \text{otherwise.} \end{cases}$$
Then
$$\sum_{j=0}^{n-1} (-1)^{c_{\ell j}} \phi_j \ge \sum_{j=0}^{n-1} (-1)^{c_j} \phi_j$$
$$\iff \frac{1}{2} \sum_{j=0}^{n-1} \left[(-1)^{y_j} - (-1)^{c_{\ell j}}\right] \phi_j \le \frac{1}{2} \sum_{j=0}^{n-1} \left[(-1)^{y_j} - (-1)^{c_j}\right] \phi_j$$
$$\iff \sum_{j=0}^{n-1} (y_j \oplus c_{\ell j}) |\phi_j| \le \sum_{j=0}^{n-1} (y_j \oplus c_j) |\phi_j|$$
$$\iff \sum_{j=0}^{n-1} e_{\ell j} |\phi_j| \le \sum_{j=0}^{n-1} e_j |\phi_j|$$

Maximum-Likelihood Decoding Rule (MLD Rule) for Word-by-Word Decoding (8)

1. Let $s = yH^T$ be the syndrome of y.

2. Let E(s) be the collection of all error patterns whose syndrome is s. Clearly, $|E(s)| = |C| = 2^k$.

3. The MLD rule can be stated as: set $\hat{c} = y \oplus e_\ell$, where $e_\ell \in E(s)$ and
$$\sum_{j=0}^{n-1} e_{\ell j} |\phi_j| \le \sum_{j=0}^{n-1} e_j |\phi_j| \quad \text{for all } e \in E(s).$$

Distance Metrics of the MLD Rule for Word-by-Word Decoding

Let m be the function such that $m(c) = ((-1)^{c_0}, \ldots, (-1)^{c_{n-1}})$ and let $y = (y_0, y_1, \ldots, y_{n-1})$ be the hard decision of $\phi$, i.e., $y_j = 1$ if $\phi_j < 0$ and $y_j = 0$ otherwise.

1. $d_E^2(m(c), \phi) = \sum_{j=0}^{n-1} \left(\phi_j - (-1)^{c_j}\right)^2$

2. $\langle \phi, m(c) \rangle = \sum_{j=0}^{n-1} (-1)^{c_j} \phi_j$

3. $d_D(c, \phi) = \sum_{j=0}^{n-1} (y_j \oplus c_j)|\phi_j| = \sum_{j \in T} |\phi_j|$, where $T = \{j \mid y_j \ne c_j\}$.

4. Relations between the metrics:
$$d_E^2(m(c), \phi) = \sum_{j=0}^{n-1} \phi_j^2 - 2\langle \phi, m(c) \rangle + n = \|\phi\|^2 + n - 2\langle \phi, m(c) \rangle$$
$$\langle \phi, m(c) \rangle = \sum_{j \in T^c} |\phi_j| - \sum_{j \in T} |\phi_j| = \sum_{j \in (T \cup T^c)} |\phi_j| - 2\sum_{j \in T} |\phi_j| = \sum_{j=0}^{n-1} |\phi_j| - 2\, d_D(c, \phi),$$
where $T^c$ is the complement set of T.

5. When one considers only the BSC, $|\phi_j| = \left|\ln \frac{1-p}{p}\right|$ is the same for all positions. Thus,
$$d_D(c, \phi) = \sum_{j=0}^{n-1} (y_j \oplus c_j)|\phi_j| = \left|\ln \frac{1-p}{p}\right| \sum_{j=0}^{n-1} (y_j \oplus c_j).$$
In this case the distance metric reduces to the Hamming distance between c and y, i.e., $\sum_{j=0}^{n-1} (y_j \oplus c_j) = d_H(c, y)$. Under this condition a decoder is called a hard-decision decoder; otherwise, the decoder is called a soft-decision decoder.
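The equivalences above can be exercised by brute force on the (6, 3) example code. The sketch below (Python with numpy assumed; the vector phi is an arbitrary illustrative choice) checks that minimizing the squared Euclidean distance $d_E^2(m(c), \phi)$, maximizing the correlation $\langle \phi, m(c)\rangle$, and minimizing the discrepancy $d_D(c, \phi)$ all select the same codeword.

```python
# Brute-force ML decoding of the (6,3) example code under the three equivalent metrics.
import numpy as np

C = [np.array(list(map(int, w))) for w in
     ["000000", "100110", "010101", "001011",
      "110011", "101101", "011110", "111000"]]
phi = np.array([-1.2, 0.8, -0.3, 1.5, -0.1, 0.9])    # illustrative bit log-likelihood ratios
y = (phi < 0).astype(int)                             # hard decisions

def m(c):    return (-1.0) ** c                       # BPSK image of a codeword
def d_E2(c): return np.sum((phi - m(c)) ** 2)         # squared Euclidean distance
def corr(c): return np.dot(phi, m(c))                 # correlation <phi, m(c)>
def d_D(c):  return np.sum((y ^ c) * np.abs(phi))     # discrepancy over positions where y != c

best_E = min(C, key=d_E2)
best_c = max(C, key=corr)
best_D = min(C, key=d_D)
print(best_E, best_c, best_D)                         # all three rules pick the same codeword
```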
Coding Gain [2]

1. $R = k/n$.

2. $kE_b = nE_s$, so $E_b = E_s/R$, where $E_b$ is the energy per information bit and $E_s$ the energy per received symbol.

3. For a coding scheme, the coding gain at a given bit error probability is the difference between the energy per information bit required by uncoded transmission and that required by the coding scheme to achieve that bit error probability.

Error Probability for the AWGN Channel

1. The word error probability $\Pr(E_s)$ with an ML decoder is
$$\Pr(E_s) \approx \sum_{w=d_{\min}}^{n} A_w\, Q\!\left((2wE_s/N_0)^{1/2}\right) = \sum_{w=d_{\min}}^{n} A_w\, Q\!\left((2wRE_b/N_0)^{1/2}\right),$$
where $A_w$ is the number of codewords with Hamming weight w.

2. The bit error probability $\Pr(E_b)$ with an ML decoder is
$$\Pr(E_b) \approx \sum_{w=d_{\min}}^{n} \frac{w}{n}\, A_w\, Q\!\left((2wRE_b/N_0)^{1/2}\right).$$

3. At moderate to high SNRs, the first term of the above upper bound is the most significant one. Therefore, the minimum distance and the number of codewords of minimum weight are the two major factors that determine the bit error rate performance of a code.

4. Since $Q(x) \le \frac{1}{\sqrt{2\pi}\,x}\, e^{-x^2/2}$ for all $x > 0$,
$$\Pr(E_b) \approx A_{d_{\min}} \sqrt{\frac{d_{\min}}{4\pi n k \gamma_b}}\, e^{-R d_{\min} \gamma_b}, \tag{1}$$
where $\gamma_b = E_b/N_0$.

5. We may use Equation (1) to calculate the bit error probability of a code at moderate to high SNRs.

6. The bit error probabilities of the (48, 24) and (72, 36) binary extended quadratic residue (QR) codes are given in the next two figures [5]. Both codes have $d_{\min} = 12$. $A_{d_{\min}}$ is 17296 for the (48, 24) code and 2982 for the (72, 36) code.

[Figure: bit error probability of the extended QR codes for the AWGN channel, showing BER versus $\gamma_b$ (dB) for uncoded data, the (48, 24) code, and the (72, 36) code.]

[Figure: bit error probability of the (48, 24) extended QR code for the AWGN channel, comparing simulations with Equation (1).]
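The dominant-term estimates above are easy to evaluate. The sketch below (Python, standard library only; the chosen SNR points are illustrative) computes the first term of the union bound and approximation (1) for the (48, 24) extended QR code. Only $d_{\min} = 12$ and $A_{d_{\min}} = 17296$ are used, since the full weight spectrum is not listed here, so the numbers are first-term estimates rather than the complete bound.

```python
# First term of the union bound and approximation (1) for the (48,24) extended QR code.
import math

n, k, d_min, A_dmin = 48, 24, 12, 17296
R = k / n

def Q(x):
    return 0.5 * math.erfc(x / math.sqrt(2.0))

for gamma_b_dB in (3, 5, 7):
    gamma_b = 10 ** (gamma_b_dB / 10)                       # Eb/N0 as a ratio
    union_1st = (d_min / n) * A_dmin * Q(math.sqrt(2 * d_min * R * gamma_b))
    eq1 = A_dmin * math.sqrt(d_min / (4 * math.pi * n * k * gamma_b)) \
          * math.exp(-R * d_min * gamma_b)
    print(f"{gamma_b_dB} dB: first union-bound term = {union_1st:.3e}, Eq.(1) = {eq1:.3e}")
```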
Asymptotic Coding Gain for the AWGN Channel

1. At large x the Q function can be overbounded by $Q(x) \le e^{-x^2/2}$.

2. An uncoded transmission has a bit error probability of
$$\Pr(E_b) = Q\!\left((2E_b'/N_0)^{1/2}\right) \le \exp\!\left(-\frac{E_b'}{N_0}\right).$$

3. At high SNRs, for a linear block code with minimum distance $d_{\min}$, the bit error probability is approximated by the first term of the union bound. That is,
$$\Pr(E_b) \approx \frac{d_{\min}}{n} A_{d_{\min}}\, Q\!\left((2 d_{\min} R E_b/N_0)^{1/2}\right) \le \frac{d_{\min}}{n} A_{d_{\min}} \exp\!\left(-\frac{d_{\min} R E_b}{N_0}\right).$$

4. Let
$$\frac{d_{\min}}{n} A_{d_{\min}} \exp\!\left(-\frac{d_{\min} R E_b}{N_0}\right) = \exp\!\left(-\frac{E_b'}{N_0}\right).$$

5. Taking the logarithm of both sides and noting that $\log\!\left[\frac{d_{\min}}{n} A_{d_{\min}}\right]$ is negligible at large SNR, we have $E_b' = d_{\min} R\, E_b$.

6. The asymptotic coding gain is therefore $G_a = 10 \log[d_{\min} R]$.

Soft-Decision Decoding vs Hard-Decision Decoding

1. It can be shown that the asymptotic coding gain of hard-decision decoding is $G_a = 10 \log[R(d_{\min}+1)/2]$.

2. 
$$G_{\text{diff}} = 10\log[d_{\min} R] - 10\log[R(d_{\min}+1)/2] = 10\log\!\left[\frac{d_{\min} R}{R(d_{\min}+1)/2}\right] \approx 10\log[2] = 3\ \text{dB}$$

3. Soft-decision decoding is about 3 dB more efficient than hard-decision decoding at very high SNRs. At realistic SNRs, a gain of about 2 dB is more common.

4. There exist fast algebraic decoding algorithms for powerful linear block codes such as BCH codes and Reed-Solomon codes.

5. Soft-decision decoding algorithms are usually more complex than hard-decision decoding algorithms.

6. There are tradeoffs between bit error probability performance and decoding time complexity between these two types of decoding algorithms.

Trellis of Linear Block Codes [1]

1. Let H be a parity-check matrix of C, and let $h_j$, $0 \le j \le n-1$, be the column vectors of H.

2. Let $c = (c_0, c_1, \ldots, c_{n-1})$ be a codeword of C. With respect to this codeword, we recursively define the states $s_t$, $0 \le t \le n$, as $s_0 = 0$ and
$$s_t = s_{t-1} + c_{t-1}h_{t-1} = \sum_{j=0}^{t-1} c_j h_j, \quad 1 \le t \le n.$$

3. $s_n = 0$ for all codewords of C.

4. The above recursive equation can be used to draw a trellis diagram.

5. In this trellis, $s_0 = 0$ identifies the start node at level 0; $s_n = 0$ identifies the goal node at level n; and each state $s_t$, $1 \le t \le n-1$, identifies a node at level t.

6. Each transition (branch) is labelled with the appropriate codeword bit $c_t$.

7. There is a one-to-one correspondence between the codewords of C and the sequences of labels encountered when traversing a path in the trellis from the start node to the goal node.

Example of Trellis

Let
$$G = \begin{pmatrix} 1 & 0 & 0 & 1 & 1 & 0 \\ 0 & 1 & 0 & 1 & 0 & 1 \\ 0 & 0 & 1 & 0 & 1 & 1 \end{pmatrix}$$
be a generator matrix of C and let
$$H = \begin{pmatrix} 1 & 1 & 0 & 1 & 0 & 0 \\ 1 & 0 & 1 & 0 & 1 & 0 \\ 0 & 1 & 1 & 0 & 0 & 1 \end{pmatrix}$$
be a parity-check matrix of C.

[Figure: trellis of C over levels 0 to 6; the states that occur are $(000)^T$, $(001)^T$, $(010)^T$, $(011)^T$, $(101)^T$, and $(110)^T$, with branches labelled by the codeword bits 0 and 1.]
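The state recursion $s_t = s_{t-1} + c_{t-1}h_{t-1}$ can be used directly to list the trellis nodes of the example code. The sketch below (Python with numpy assumed) enumerates the codewords by brute force and collects the states that occur at each level; levels 0 and 6 contain only the all-zero state, in agreement with $s_0 = s_n = 0$.

```python
# List the states of the syndrome trellis of the (6,3) example code, level by level.
import numpy as np
from itertools import product

H = np.array([[1, 1, 0, 1, 0, 0],
              [1, 0, 1, 0, 1, 0],
              [0, 1, 1, 0, 0, 1]])
n = H.shape[1]

# all binary vectors c with cH^T = 0, i.e. the codewords of C
codewords = [np.array(c) for c in product([0, 1], repeat=n)
             if not np.any(np.dot(c, H.T) % 2)]

levels = [set() for _ in range(n + 1)]
for c in codewords:
    s = np.zeros(3, dtype=int)
    levels[0].add(tuple(s))
    for t in range(n):
        s = (s + c[t] * H[:, t]) % 2                  # s_t = s_{t-1} + c_{t-1} h_{t-1}
        levels[t + 1].add(tuple(s))

for t, states in enumerate(levels):
    print(t, sorted(states))                          # levels 0 and 6 contain only (0, 0, 0)
```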
Convert a Decoding Problem to a Graph-Search Problem

1. According to the MLD rule presented, we want to find a codeword $c_\ell$ such that $d_E^2(m(c_\ell), \phi) = \sum_{j=0}^{n-1} (\phi_j - (-1)^{c_{\ell j}})^2$ is minimum among all codewords.

2. If we properly specify the branch costs (branch metrics) on a trellis, the MLD rule can be written as follows: find a path from the start node to the goal node such that the cost of the path is minimum among all paths from the start node to the goal node, where the cost of a path is the sum of the costs of the branches in the path. Such a path is called an optimal path.

3. In the trellis of C, the cost of the branch from $s_{t-1}$ to $s_t = s_{t-1} + c_{t-1}h_{t-1}$ is assigned the value $(\phi_{t-1} - (-1)^{c_{t-1}})^2$.

4. The decoding problem is thus converted into finding a path from the start node to the goal node in the trellis, that is, a codeword $c = (c_0, c_1, \ldots, c_{n-1})$ such that $d_E^2(m(c), \phi)$ is minimum among all paths from the start node to the goal node.

5. The path metric for a codeword $c = (c_0, c_1, \ldots, c_{n-1})$ is defined as $\sum_{j=0}^{n-1} (\phi_j - (-1)^{c_j})^2$.

6. The $\ell$th partial path cost (metric) $M^\ell(P_m)$ of a path $P_m$ is obtained by summing the branch metrics of the first $\ell$ branches of the path, where node m is the ending node of $P_m$.

7. If $P_m$ has labels $v_0, v_1, \ldots, v_\ell$, then
$$M^\ell(P_m) = \sum_{i=0}^{\ell} \left(\phi_i - (-1)^{v_i}\right)^2.$$

Viterbi Decoding Algorithm (Wolf Algorithm) (1)

1. The Wolf algorithm [7] finds an optimal path by applying the Viterbi algorithm [6] to search through the trellis for C.

2. In the Wolf algorithm, each node in the trellis is assigned a cost. This cost is the partial path cost of the path starting at the start node and ending at that node.

3. When more than one path enters a node at level $\ell$ in the trellis, one chooses the path with the best (minimum) $(\ell-1)$th partial path cost as the cost of the node. The path with the best cost is the survivor.

4. The algorithm terminates when all nodes in the trellis have been labelled and their entering survivors determined.

5. The Viterbi algorithm and the Wolf algorithm are special types of dynamic programming; consequently, they are ML decoding algorithms.

6. The decoding algorithm uses a breadth-first search strategy to accomplish this search. That is, the nodes are labelled level by level until the last level, which contains the goal node of the trellis.

7. The time and space complexities of the Wolf algorithm are $O(n \times \min(2^k, 2^{n-k}))$ [3], since it traverses the entire trellis.

8. Forney has given a procedure to reduce the number of states in the trellis for C [4], and this has become an active research area.

Viterbi Decoding Algorithm (Wolf Algorithm) (2)

Initialization: $\ell = 0$. Let $g_\ell(m)$ be the partial path cost assigned to node m at level $\ell$; the start node is assigned $g_0 = 0$.

Step 1: Compute the partial metrics of all paths entering each node at level $\ell$.

Step 2: Set $g_\ell(m)$ equal to the best partial path cost entering node m. Delete all non-survivors from the trellis.

Step 3: If $\ell = n$, stop and output the only survivor ending at the goal node; otherwise set $\ell = \ell + 1$ and go to Step 1.
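A compact sketch of this search is given below (Python with numpy assumed; the function name wolf_decode is illustrative). It uses the branch cost $(\phi_{t-1} - (-1)^{c_{t-1}})^2$ and reads the decision from the survivor at the all-zero goal state; for simplicity it does not prune states from which the goal node cannot be reached, it simply ignores them at the end. For the received vector of the worked example that follows, $\phi = (-2, -2, -2, -1, -1, 0)$, the codewords 011110 and 111000 both attain the minimum cost 12, so which of the two is returned depends on how ties are broken.

```python
# Viterbi search over the syndrome trellis of the (6,3) example code.
import numpy as np

H = np.array([[1, 1, 0, 1, 0, 0],
              [1, 0, 1, 0, 1, 0],
              [0, 1, 1, 0, 0, 1]])

def wolf_decode(phi):
    """Return (cost, ML codeword) for the received log-likelihood-ratio vector phi."""
    n = H.shape[1]
    # survivor per state: (best path cost reaching the state, branch labels along that path)
    survivors = {(0, 0, 0): (0.0, [])}
    for t in range(n):
        new = {}
        for state, (cost, bits) in survivors.items():
            for c in (0, 1):                                   # branch label c_t
                nxt = tuple((np.array(state) + c * H[:, t]) % 2)
                total = cost + (phi[t] - (-1) ** c) ** 2       # add the branch metric
                if nxt not in new or total < new[nxt][0]:
                    new[nxt] = (total, bits + [c])             # keep the entering survivor
        survivors = new
    # s_n = 0 for every codeword, so the decision is the survivor at the all-zero goal state.
    return survivors[(0, 0, 0)]

cost, c_hat = wolf_decode([-2, -2, -2, -1, -1, 0])
print(cost, c_hat)     # 12.0 and one of the two codewords that attain it
```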
Viterbi Decoding Algorithm (Wolf Algorithm) (3)

Let G and H be the generator and parity-check matrices of the (6, 3) code used in the trellis example above, and assume the received vector is
$$\phi = (-2, -2, -2, -1, -1, 0).$$

[Figure: the trellis of C over levels 0 to 6 with the accumulated partial path costs written at the nodes; the surviving path ending at the goal node gives the decoded codeword.]

Maximum-Likelihood Decoding Rule (MLD Rule) for Symbol-by-Symbol Decoding (1)

1. The goal of decoding: for $0 \le i \le k-1$, set $\hat{u}_i = s$ where $\Pr(u_i = s \mid r) \ge \Pr(u_i = 1 \oplus s \mid r)$.

2. 
$$\Pr(u_i = s \mid r) = \sum_{c \in T_i^s} \Pr(c \mid r) = \sum_{c \in T_i^s} \Pr(r \mid c)\, \frac{\Pr(c)}{\Pr(r)},$$
where $T_i^s = \{c \in C \mid c = (u_0, \ldots, u_{i-1}, s, u_{i+1}, \ldots, u_{k-1})G\}$.

3. If all codewords of C have equal probability of being transmitted, then
$$\Pr(u_i = s \mid r) \ge \Pr(u_i = 1 \oplus s \mid r)
\iff \sum_{c \in T_i^s} \Pr(r \mid c)\, \frac{\Pr(c)}{\Pr(r)} \ge \sum_{c \in C \setminus T_i^s} \Pr(r \mid c)\, \frac{\Pr(c)}{\Pr(r)}
\iff \sum_{c \in T_i^s} \Pr(r \mid c) \ge \sum_{c \in C \setminus T_i^s} \Pr(r \mid c).$$

4. A maximum-likelihood decoding rule (MLD rule), which minimizes the error probability when each codeword is transmitted equiprobably, decodes a received vector r to an information sequence $\hat{u} = (\hat{u}_0, \ldots, \hat{u}_{k-1})$ such that, for each i, $\hat{u}_i = s$ with
$$\sum_{c \in T_i^s} \Pr(r \mid c) \ge \sum_{c \in C \setminus T_i^s} \Pr(r \mid c).$$

Maximum-Likelihood Decoding Rule (MLD Rule) for Symbol-by-Symbol Decoding (2)

1. For a time-discrete memoryless channel, the MLD rule can be formulated as: for $0 \le i \le k-1$ and $s \in \{0,1\}$, set $\hat{u}_i = s$ where
$$\sum_{c \in T_i^s} \prod_{j=0}^{n-1} \Pr(r_j \mid c_j) \ge \sum_{c \in C \setminus T_i^s} \prod_{j=0}^{n-1} \Pr(r_j \mid c_j).$$

2. Define the bit likelihood ratio of $r_j$ as
$$\phi_j = \frac{\Pr(r_j \mid 0)}{\Pr(r_j \mid 1)}.$$

3. Assume that $\prod_{j=0}^{n-1} \Pr(r_j \mid 1) \ne 0$. Then we can rewrite the above inequality as
$$\sum_{c \in T_i^s} \prod_{j=0}^{n-1} \Pr(r_j \mid c_j) \ge \sum_{c \in C \setminus T_i^s} \prod_{j=0}^{n-1} \Pr(r_j \mid c_j)
\iff \sum_{c \in T_i^s} \prod_{j=0}^{n-1} \phi_j^{c_j \oplus 1} \ge \sum_{c \in C \setminus T_i^s} \prod_{j=0}^{n-1} \phi_j^{c_j \oplus 1}.$$
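The symbol-by-symbol rule can also be carried out by brute force for the (6, 3) example block code. The sketch below (Python with numpy assumed; the received vector r and the value of N0 are illustrative) sums the likelihoods over $T_i^0$ and $T_i^1$ for each information position and takes the larger side.

```python
# Brute-force symbol-by-symbol decoding of the (6,3) example code over the AWGN channel (E = 1).
import numpy as np
from itertools import product

G = np.array([[1, 0, 0, 1, 1, 0],
              [0, 1, 0, 1, 0, 1],
              [0, 0, 1, 0, 1, 1]])
N0 = 1.0
r = np.array([0.8, -0.2, 1.1, -0.9, 0.3, -1.4])

def likelihood(c):
    # Pr(r|c) for the AWGN channel: product over j of exp(-(r_j - (-1)^{c_j})^2 / N0) / sqrt(pi*N0)
    x = (-1.0) ** c
    return np.prod(np.exp(-(r - x) ** 2 / N0) / np.sqrt(np.pi * N0))

u_hat = []
for i in range(3):
    # T_i^s: codewords whose i-th information bit equals s
    p = {s: sum(likelihood(np.dot(u, G) % 2)
                for u in product([0, 1], repeat=3) if u[i] == s)
         for s in (0, 1)}
    u_hat.append(0 if p[0] >= p[1] else 1)
print(u_hat)
```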
Binary Convolutional Codes

1. A binary convolutional code is denoted by a three-tuple (n, k, m).

2. n output bits are generated whenever k input bits are received.

3. The current n outputs are linear combinations of the present k input bits and the previous $m \times k$ input bits.

4. m designates the number of previous k-bit input blocks that must be memorized in the encoder.

5. m is called the memory order of the convolutional code.

Encoders for Convolutional Codes

1. A binary convolutional encoder is conveniently structured as a mechanism of shift registers and modulo-2 adders, where the output bits are modulo-2 additions of selected shift-register contents and present input bits.

2. n in the three-tuple notation is exactly the number of output sequences of the encoder.

3. k is the number of input sequences (and hence the encoder consists of k shift registers).

4. m is the maximum length of the k shift registers (i.e., if the number of stages of the jth shift register is $K_j$, then $m = \max_{1 \le j \le k} K_j$).

5. $K = \sum_{j=1}^{k} K_j$ is the total memory of the encoder (K is sometimes called the overall constraint length).

6. The constraint length of a convolutional code is defined in several ways; the most popular definition is $m + 1$.

Encoder for the Binary (2, 1, 2) Convolutional Code

Here u is the information sequence and v is the corresponding code sequence (codeword).

[Figure: a (2, 1, 2) encoder built from a two-stage shift register and two modulo-2 adders. For u = (11101) it produces $v_1$ = (1010011), $v_2$ = (1101001), and v = (11 01 10 01 00 10 11).]

Encoder for the Binary (3, 2, 2) Convolutional Code

[Figure: a (3, 2, 2) encoder with two shift registers. For $u_1$ = (10) and $u_2$ = (11), i.e., u = (11 01), it produces $v_1$ = (1000), $v_2$ = (1100), $v_3$ = (0001), and v = (110 010 000 001).]

Impulse Response and Convolution

1. The encoders of convolutional codes can be represented by linear time-invariant (LTI) systems with inputs $u_1, \ldots, u_k$ and outputs $v_1, \ldots, v_n$.

2. 
$$v_j = u_1 * g_j^{(1)} + u_2 * g_j^{(2)} + \cdots + u_k * g_j^{(k)} = \sum_{i=1}^{k} u_i * g_j^{(i)},$$
where $*$ is the convolution operation and $g_j^{(i)}$ is the impulse response of the ith input sequence with respect to the jth output.

3. $g_j^{(i)}$ can be found by stimulating the encoder with the discrete impulse $(1, 0, 0, \ldots)$ at the ith input and observing the jth output while all other inputs are fed the zero sequence $(0, 0, 0, \ldots)$.

4. The impulse responses are called the generator sequences of the encoder.

Impulse Responses for the Binary (2, 1, 2) Convolutional Code

$$g_1 = (1, 1, 1, 0, \ldots) = (1, 1, 1), \quad g_2 = (1, 0, 1, 0, \ldots) = (1, 0, 1)$$
$$v_1 = (1, 1, 1, 0, 1) * (1, 1, 1) = (1, 0, 1, 0, 0, 1, 1)$$
$$v_2 = (1, 1, 1, 0, 1) * (1, 0, 1) = (1, 1, 0, 1, 0, 0, 1)$$

Impulse Responses for the (3, 2, 2) Convolutional Code

$$g_1^{(1)} = 4 \text{ (octal)} = (1, 0, 0), \quad g_1^{(2)} = 0 \text{ (octal)}, \quad g_2^{(1)} = 0 \text{ (octal)}, \quad g_2^{(2)} = 4 \text{ (octal)}, \quad g_3^{(1)} = 2 \text{ (octal)}, \quad g_3^{(2)} = 3 \text{ (octal)}$$
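Encoding by convolution with the generator sequences is a one-line operation per output. The sketch below (Python with numpy assumed; conv_encode is an illustrative helper name) reproduces the (2, 1, 2) example above with u = (11101), $g_1$ = (111), and $g_2$ = (101).

```python
# Convolutional encoding as convolution with the generator sequences (mod 2).
import numpy as np

def conv_encode(u, gens):
    """Convolve the input with each generator sequence over GF(2) and interleave the outputs."""
    outs = [np.convolve(u, g) % 2 for g in gens]     # v_j = u * g_j (mod 2)
    return outs, np.array(outs).T.reshape(-1)        # per-output sequences and the interleaved codeword

u = [1, 1, 1, 0, 1]
(v1, v2), v = conv_encode(u, [[1, 1, 1], [1, 0, 1]])
print(v1)   # [1 0 1 0 0 1 1]
print(v2)   # [1 1 0 1 0 0 1]
print(v)    # interleaved as (v1,0 v2,0 v1,1 v2,1 ...) = 11 01 10 01 00 10 11
```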
Generator Matrix in the Time Domain

1. A convolutional code can be generated by a generator matrix multiplied by the information sequences.

2. Let $u_1, u_2, \ldots, u_k$ be the information sequences and $v_1, v_2, \ldots, v_n$ the output sequences.

3. Arrange the information sequences as
$$u = (u_{1,0}, u_{2,0}, \ldots, u_{k,0},\ u_{1,1}, u_{2,1}, \ldots, u_{k,1},\ \ldots,\ u_{1,\ell}, u_{2,\ell}, \ldots, u_{k,\ell}, \ldots) = (w_0, w_1, \ldots, w_\ell, \ldots),$$
and the output sequences as
$$v = (v_{1,0}, v_{2,0}, \ldots, v_{n,0},\ v_{1,1}, v_{2,1}, \ldots, v_{n,1},\ \ldots,\ v_{1,\ell}, v_{2,\ell}, \ldots, v_{n,\ell}, \ldots) = (z_0, z_1, \ldots, z_\ell, \ldots).$$

4. v is called a codeword or code sequence.

5. The relation between v and u can be characterized as $v = u \cdot G$, where G is the generator matrix of the code.

6. The generator matrix G is
$$G = \begin{pmatrix} G_0 & G_1 & G_2 & \cdots & G_m & & \\ & G_0 & G_1 & \cdots & G_{m-1} & G_m & \\ & & G_0 & \cdots & G_{m-2} & G_{m-1} & G_m \\ & & & \ddots & & & & \ddots \end{pmatrix}$$
with the $k \times n$ submatrices
$$G_\ell = \begin{pmatrix} g_{1,\ell}^{(1)} & g_{2,\ell}^{(1)} & \cdots & g_{n,\ell}^{(1)} \\ g_{1,\ell}^{(2)} & g_{2,\ell}^{(2)} & \cdots & g_{n,\ell}^{(2)} \\ \vdots & \vdots & & \vdots \\ g_{1,\ell}^{(k)} & g_{2,\ell}^{(k)} & \cdots & g_{n,\ell}^{(k)} \end{pmatrix}.$$

7. The elements $g_{j,\ell}^{(i)}$, for $i \in [1, k]$ and $j \in [1, n]$, are taken from the impulse response of the ith input with respect to the jth output:
$$g_j^{(i)} = (g_{j,0}^{(i)}, g_{j,1}^{(i)}, \ldots, g_{j,\ell}^{(i)}, \ldots, g_{j,m}^{(i)}).$$

Generator Matrix of the Binary (2, 1, 2) Convolutional Code

With $G_0 = (1\ 1)$, $G_1 = (1\ 0)$, and $G_2 = (1\ 1)$,
$$v = u \cdot G = (1, 1, 1, 0, 1) \cdot \begin{pmatrix} 11 & 10 & 11 & & & & \\ & 11 & 10 & 11 & & & \\ & & 11 & 10 & 11 & & \\ & & & 11 & 10 & 11 & \\ & & & & 11 & 10 & 11 \end{pmatrix} = (11, 01, 10, 01, 00, 10, 11).$$

Generator Matrix of the Binary (3, 2, 2) Convolutional Code

With $g_1^{(1)} = (1,0,0)$, $g_1^{(2)} = (0,0,0)$, $g_2^{(1)} = (0,0,0)$, $g_2^{(2)} = (1,0,0)$, $g_3^{(1)} = (0,1,0)$, and $g_3^{(2)} = (0,1,1)$, the submatrices are
$$G_0 = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \end{pmatrix}, \quad G_1 = \begin{pmatrix} 0 & 0 & 1 \\ 0 & 0 & 1 \end{pmatrix}, \quad G_2 = \begin{pmatrix} 0 & 0 & 0 \\ 0 & 0 & 1 \end{pmatrix},$$
and
$$v = u \cdot G = (11, 01) \cdot \begin{pmatrix} G_0 & G_1 & G_2 & \\ & G_0 & G_1 & G_2 \end{pmatrix} = (110, 010, 000, 001).$$

Generator Matrix in the Z Domain

1. According to the Z transform,
$$U_i(D) = \sum_{t=0}^{\infty} u_{i,t} D^t, \qquad V_j(D) = \sum_{t=0}^{\infty} v_{j,t} D^t, \qquad G_{i,j}(D) = \sum_{t=0}^{\infty} g_{j,t}^{(i)} D^t.$$

2. The convolution property of the Z transform, $Z\{u * g\} = U(D)G(D)$, is used to turn the convolution of input sequences and generator sequences into a multiplication in the Z domain.

3. $V_j(D) = \sum_{i=1}^{k} U_i(D) \cdot G_{i,j}(D)$.

4. We can write the above equations as a matrix multiplication:
$$V(D) = U(D) \cdot G(D),$$
where $U(D) = (U_1(D), U_2(D), \ldots, U_k(D))$, $V(D) = (V_1(D), V_2(D), \ldots, V_n(D))$, and $G(D) = [G_{i,j}(D)]$.
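The matrix relation $V(D) = U(D)G(D)$ can be evaluated with ordinary polynomial arithmetic reduced mod 2. The sketch below (Python with numpy assumed; polynomials are stored as coefficient lists in ascending powers of D) reproduces the (3, 2, 2) example that is worked out in the Z domain next.

```python
# V(D) = U(D) G(D) for the (3,2,2) example, with GF(2) polynomial arithmetic.
import numpy as np

def polyadd2(a, b):
    # add two GF(2) polynomials given as ascending coefficient lists
    L = max(len(a), len(b))
    return [(a[i] if i < len(a) else 0) ^ (b[i] if i < len(b) else 0) for i in range(L)]

# G(D): rows are inputs, columns are outputs
GD = [[[1], [0], [0, 1]],            # G11 = 1, G12 = 0, G13 = D
      [[0], [1], [0, 1, 1]]]         # G21 = 0, G22 = 1, G23 = D + D^2
UD = [[1], [1, 1]]                   # U1(D) = 1, U2(D) = 1 + D

VD = []
for j in range(3):                   # V_j(D) = sum_i U_i(D) G_ij(D)
    acc = [0]
    for i in range(2):
        acc = polyadd2(acc, [int(x) for x in np.convolve(UD[i], GD[i][j]) % 2])
    VD.append(acc)
print(VD)    # V1(D) = 1, V2(D) = 1 + D, V3(D) = D^3 (trailing zero coefficients may appear)
```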
Generator Matrix of the Binary (2, 1, 2) Convolutional Code in the Z Domain

$$g_1 = (1, 1, 1, 0, \ldots) = (1, 1, 1), \quad g_2 = (1, 0, 1, 0, \ldots) = (1, 0, 1)$$
$$G_{1,1}(D) = 1 + D + D^2, \qquad G_{1,2}(D) = 1 + D^2$$
$$U_1(D) = 1 + D + D^2 + D^4$$
$$V_1(D) = 1 + D^2 + D^5 + D^6, \qquad V_2(D) = 1 + D + D^3 + D^6$$

Generator Matrix of the Binary (3, 2, 2) Convolutional Code in the Z Domain

$$g_1^{(1)} = (1, 0, 0), \quad g_1^{(2)} = (0, 0, 0), \quad g_2^{(1)} = (0, 0, 0), \quad g_2^{(2)} = (1, 0, 0), \quad g_3^{(1)} = (0, 1, 0), \quad g_3^{(2)} = (0, 1, 1)$$
$$G_{1,1}(D) = 1, \quad G_{1,2}(D) = 0, \quad G_{1,3}(D) = D$$
$$G_{2,1}(D) = 0, \quad G_{2,2}(D) = 1, \quad G_{2,3}(D) = D + D^2$$
$$U_1(D) = 1, \qquad U_2(D) = 1 + D$$
$$V_1(D) = 1, \qquad V_2(D) = 1 + D, \qquad V_3(D) = D^3$$

Termination

1. The effective code rate, $R_{\text{effective}}$, is defined as the average number of input bits carried by an output bit.

2. In practice, the input sequences are of finite length.

3. In order to terminate a convolutional code, some bits are appended to the information sequence so that the shift registers return to the zero state.

4. Each of the k input sequences of length L bits is padded with m zeros, and these k input sequences jointly induce $n(L + m)$ output bits.

5. The effective rate of the terminated convolutional code is then
$$R_{\text{effective}} = \frac{kL}{n(L+m)} = R\,\frac{L}{L+m},$$
where $\frac{L}{L+m}$ is called the fractional rate loss.

6. When L is large, $R_{\text{effective}} \approx R$.

7. All examples presented so far are terminated convolutional codes.

Truncation

1. A second option for generating finite code sequences is simply to stop for $t > L$, no matter what the contents of the shift registers are.

2. The effective code rate is still R.

3. The generator matrix is clipped after the Lth block column:
$$G^c_{[L]} = \begin{pmatrix} G_0 & G_1 & \cdots & G_m & & \\ & G_0 & G_1 & \cdots & G_m & \\ & & \ddots & \ddots & & \ddots \\ & & & & G_0 & G_1 \\ & & & & & G_0 \end{pmatrix},$$
where $G^c_{[L]}$ is a $(k \cdot L) \times (n \cdot L)$ matrix.

4. The drawback of the truncation method is that the last few blocks of the information sequence are less protected.

Generator Matrix of the Truncated Binary (2, 1, 2) Convolutional Code

[Figure: the (2, 1, 2) encoder with input u = (11101) stopped after L = 5 blocks, giving $v_1$ = (10100), $v_2$ = (11010), and v = (11 01 10 01 00).]

$$v = u \cdot G^c_{[5]} = (1, 1, 1, 0, 1) \cdot \begin{pmatrix} 11 & 10 & 11 & & \\ & 11 & 10 & 11 & \\ & & 11 & 10 & 11 \\ & & & 11 & 10 \\ & & & & 11 \end{pmatrix} = (11, 01, 10, 01, 00).$$
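The clipped matrix $G^c_{[L]}$ is easy to assemble from the submatrices $G_0, \ldots, G_m$. The sketch below (Python with numpy assumed) builds $G^c_{[5]}$ for the (2, 1, 2) code and reproduces the truncated codeword above.

```python
# Build the truncated generator matrix G^c_[L] from G0..Gm and encode u = (11101).
import numpy as np

G_blocks = [np.array([[1, 1]]), np.array([[1, 0]]), np.array([[1, 1]])]   # G0, G1, G2
k, n, L = 1, 2, 5

Gc = np.zeros((k * L, n * L), dtype=int)
for row in range(L):
    for ell, Gl in enumerate(G_blocks):
        col = row + ell
        if col < L:                                   # blocks beyond the L-th column are clipped off
            Gc[row*k:(row+1)*k, col*n:(col+1)*n] = Gl

u = np.array([1, 1, 1, 0, 1])
v = np.dot(u, Gc) % 2
print(v.reshape(-1, n))       # [[1 1] [0 1] [1 0] [0 1] [0 0]]  ->  v = (11 01 10 01 00)
```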
Generator Matrix of the Truncated Binary (3, 2, 2) Convolutional Code

[Figure: the (3, 2, 2) encoder with input u = (11 01) stopped after L = 2 blocks, giving $v_1$ = (10), $v_2$ = (11), $v_3$ = (00), and v = (110 010).]

$$v = u \cdot G^c_{[2]} = (11, 01) \cdot \begin{pmatrix} G_0 & G_1 \\ & G_0 \end{pmatrix} = (11, 01) \cdot \begin{pmatrix} 100 & 001 \\ 010 & 001 \\ & 100 \\ & 010 \end{pmatrix} = (110, 010).$$

Tail Biting

1. A third possible method to generate finite code sequences is called tail biting.

2. In tail biting, the convolutional encoder is started in the same state (the same contents of all shift registers) in which it will stop after the input of L information blocks.

3. Equal protection of all information bits of the entire information sequence is thereby possible.

4. The effective rate of the code is still R.

5. The generator matrix is clipped after the Lth block column and the clipped blocks are wrapped around cyclically:
$$\tilde{G}^c_{[L]} = \begin{pmatrix} G_0 & G_1 & \cdots & G_m & & \\ & G_0 & G_1 & \cdots & G_m & \\ & & \ddots & \ddots & & \ddots \\ G_m & & & & G_0 & G_1 \\ G_1 & \cdots & G_m & & & G_0 \end{pmatrix},$$
where $\tilde{G}^c_{[L]}$ is a $(k \cdot L) \times (n \cdot L)$ matrix.

Generator Matrix of the Tail-Biting Binary (2, 1, 2) Convolutional Code

$$v = u \cdot \tilde{G}^c_{[5]} = (1, 1, 1, 0, 1) \cdot \begin{pmatrix} 11 & 10 & 11 & & \\ & 11 & 10 & 11 & \\ & & 11 & 10 & 11 \\ 11 & & & 11 & 10 \\ 10 & 11 & & & 11 \end{pmatrix} = (01, 10, 10, 01, 00).$$

Generator Matrix of the Tail-Biting Binary (3, 2, 2) Convolutional Code

$$v = u \cdot \tilde{G}^c_{[2]} = (11, 01) \cdot \begin{pmatrix} 100 & 001 \\ 010 & 001 \\ 001 & 100 \\ 001 & 010 \end{pmatrix} = (111, 010).$$
FIR and IIR Systems

[Figure: a rate-1/2 systematic encoder with feedback. For u = (111) it produces $v_1$ = (111), $v_2$ = (101), and v = (11 10 11).]

1. All examples in the previous slides are finite impulse response (FIR) systems, i.e., systems with finite impulse responses.

2. The above example is an infinite impulse response (IIR) system, i.e., a system with an infinite impulse response.

3. The generator sequences of the above example are $g_1^{(1)} = (1)$ and $g_2^{(1)} = (1, 1, 1, 0, 1, 1, 0, 1, 1, 0, \ldots)$.

4. The infinite sequence $g_2^{(1)}$ is caused by the recursive structure of the encoder.

5. By introducing the state variable x, we have
$$x_t = u_t + x_{t-1} + x_{t-2}, \qquad v_{2,t} = x_t + x_{t-2}.$$
Accordingly, we obtain the following difference equations:
$$v_{1,t} = u_t, \qquad v_{2,t} + v_{2,t-1} + v_{2,t-2} = u_t + u_{t-2}.$$

6. We then apply the Z transform to the second equation:
$$\sum_{t=0}^{\infty} v_{2,t} D^t + \sum_{t=0}^{\infty} v_{2,t-1} D^t + \sum_{t=0}^{\infty} v_{2,t-2} D^t = \sum_{t=0}^{\infty} u_t D^t + \sum_{t=0}^{\infty} u_{t-2} D^t,$$
that is,
$$V_2(D) + D V_2(D) + D^2 V_2(D) = U(D) + D^2 U(D).$$

7. The system function is then
$$V_2(D) = \frac{1 + D^2}{1 + D + D^2}\, U(D) = G_{12}(D)\, U(D).$$

8. The generator matrix is obtained:
$$G(D) = \left(1 \quad \frac{1 + D^2}{1 + D + D^2}\right).$$

9. 
$$V(D) = U(D) \cdot G(D) = \left(1 + D + D^2\right)\left(1 \quad \frac{1+D^2}{1+D+D^2}\right) = (1 + D + D^2, \; 1 + D^2).$$

10. The time diagram:

t | σ_t | σ_{t+1} | v1 v2
0 | 00  | 10      | 1 1
1 | 10  | 01      | 1 0
2 | 01  | 00      | 1 1

Code Trees of Convolutional Codes

1. A code tree of a binary (n, k, m) convolutional code presents every codeword as a path on a tree.

2. For input sequences of length L bits, the code tree consists of (L + m + 1) levels. The single leftmost node at level 0 is called the origin node.

3. At the first L levels, there are exactly $2^k$ branches leaving each node. For the nodes located at levels L through (L + m), only one branch remains. The $2^{kL}$ rightmost nodes at level (L + m) are called the terminal nodes.

4. As expected, a path from the single origin node to a terminal node represents a codeword; therefore it is named the code path corresponding to the codeword.

[Figure: code tree of the (2, 1, 2) convolutional code, levels 0 through 7, with each branch labelled "input bit / output bits".]

Trellises of Convolutional Codes

1. A code trellis, as termed by Forney, is a structure obtained from a code tree by merging those nodes that are in the same state.

2. Recall that the state associated with a node is determined by the associated shift-register contents.

3. For a binary (n, k, m) convolutional code, the number of states at levels m through L is $2^K$, where $K = \sum_{j=1}^{k} K_j$ and $K_j$ is the length of the jth shift register in the encoder; hence there are $2^K$ nodes at these levels.

4. Due to node merging, only one terminal node remains in a trellis.

5. Analogous to a code tree, a path from the single origin node to the single terminal node in a trellis also mirrors a codeword.
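Returning to the recursive encoder above, its difference equations can be simulated directly. The sketch below (plain Python; rsc_encode is an illustrative name) reproduces $v_2$ = (101) for u = (111) and the first terms of the infinite impulse response $g_2^{(1)}$.

```python
# Direct simulation of the recursive (IIR) encoder with state x_t = u_t + x_{t-1} + x_{t-2}.
def rsc_encode(u):
    x1 = x2 = 0                      # shift-register contents x_{t-1}, x_{t-2}
    v1, v2 = [], []
    for ut in u:
        xt = ut ^ x1 ^ x2            # feedback: x_t = u_t + x_{t-1} + x_{t-2}
        v1.append(ut)                # systematic output v_{1,t} = u_t
        v2.append(xt ^ x2)           # v_{2,t} = x_t + x_{t-2}
        x1, x2 = xt, x1
    return v1, v2

print(rsc_encode([1, 1, 1]))                 # ([1, 1, 1], [1, 0, 1])
print(rsc_encode([1] + [0] * 9)[1])          # impulse response: [1, 1, 1, 0, 1, 1, 0, 1, 1, 0]
```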
[Figure: trellis of the (2, 1, 2) convolutional code, levels 0 through 7, with states $s_0, s_1, s_2, s_3$; the origin node and the terminal node are the all-zero states at the first and last levels, and each branch is labelled with its output bits.]

System Model

[Figure: u -> Encoder -> v -> BPSK modulator -> x -> adder with AWGN e -> r -> Decoder -> $\hat{u}$ or $\hat{v}$ or $\hat{x}$.]

All of the material on the MAP algorithm is adapted from the lecture notes of Prof. Po-Ning Chen. The indices of all sequences start from 1. For an (n, 1, m) convolutional code:

• $u = (u_1, u_2, \ldots, u_L) \in \{0,1\}^L$ is the information bit sequence.
• $v = (v_1, v_2, \ldots, v_N) \in \{0,1\}^N$ is the code bit sequence.
• $x = (x_1, x_2, \ldots, x_N) \in \{-1,1\}^N$ is the BPSK-modulated code bit sequence, where $x_j = (-1)^{v_j}$.
• $e = (e_1, e_2, \ldots, e_N) \in \mathbb{R}^N$ is a zero-mean i.i.d. Gaussian vector (AWGN).
• $r = (r_1, r_2, \ldots, r_N) \in \mathbb{R}^N$ is the received vector, where $r_j = x_j + e_j$.
• $\hat{u} = (\hat{u}_1, \hat{u}_2, \ldots, \hat{u}_L) \in \{0,1\}^L$ is the reconstructed information bit sequence.

BCJR (MAP) Algorithm (1)

1. 
$$\Pr\{u_i = 0 \mid r\} = \sum_{(S_s^{i-1}, S_{\bar{s}}^{i}) \in B_i^{(0)}} \Pr\{S_s^{i-1}, S_{\bar{s}}^{i} \mid r\} = \sum_{(S_s^{i-1}, S_{\bar{s}}^{i}) \in B_i^{(0)}} \frac{\Pr\{S_s^{i-1}, S_{\bar{s}}^{i}, r\}}{\Pr\{r\}},$$
where $B_i^{(0)}$ is the set of transitions from nodes at level $i-1$ to nodes at level i that correspond to the input $u_i = 0$. Similarly,
$$\Pr\{u_i = 1 \mid r\} = \sum_{(S_s^{i-1}, S_{\bar{s}}^{i}) \in B_i^{(1)}} \frac{\Pr\{S_s^{i-1}, S_{\bar{s}}^{i}, r\}}{\Pr\{r\}},$$
where $B_i^{(1)}$ is the set of transitions that correspond to the input $u_i = 1$. Therefore,
$$\Lambda(i) = \log \frac{\displaystyle\sum_{(S_s^{i-1}, S_{\bar{s}}^{i}) \in B_i^{(1)}} \Pr\{S_s^{i-1}, S_{\bar{s}}^{i}, r\}}{\displaystyle\sum_{(S_s^{i-1}, S_{\bar{s}}^{i}) \in B_i^{(0)}} \Pr\{S_s^{i-1}, S_{\bar{s}}^{i}, r\}}.$$

2. Define
$$\alpha(S_{\bar{s}}^{i}) \triangleq \Pr\{S_{\bar{s}}^{i}, r_1^{in}\}, \qquad
\beta(S_{\bar{s}}^{i}) \triangleq \Pr\{r_{in+1}^{N} \mid S_{\bar{s}}^{i}\}, \qquad
\gamma(u, S_s^{i-1}, S_{\bar{s}}^{i}) \triangleq \Pr\{u_i = u,\ S_{\bar{s}}^{i},\ r_{(i-1)n+1}^{in} \mid S_s^{i-1}\}, \quad u = 0, 1,$$
where $r_a^b = (r_a, r_{a+1}, \ldots, r_b)$. Then
$$\begin{aligned}
\Pr\{S_s^{i-1}, S_{\bar{s}}^{i}, r\}
&= \Pr\{S_s^{i-1}, S_{\bar{s}}^{i}, r_1^{(i-1)n}, r_{(i-1)n+1}^{in}, r_{in+1}^{N}\} \\
&= \Pr\{r_{in+1}^{N} \mid S_s^{i-1}, S_{\bar{s}}^{i}, r_1^{(i-1)n}, r_{(i-1)n+1}^{in}\} \cdot \Pr\{S_s^{i-1}, S_{\bar{s}}^{i}, r_1^{(i-1)n}, r_{(i-1)n+1}^{in}\} \\
&= \Pr\{r_{in+1}^{N} \mid S_{\bar{s}}^{i}\} \cdot \Pr\{S_s^{i-1}, S_{\bar{s}}^{i}, r_1^{(i-1)n}, r_{(i-1)n+1}^{in}\} \\
&= \beta(S_{\bar{s}}^{i}) \cdot \Pr\{S_{\bar{s}}^{i}, r_{(i-1)n+1}^{in} \mid S_s^{i-1}, r_1^{(i-1)n}\} \cdot \Pr\{S_s^{i-1}, r_1^{(i-1)n}\} \\
&= \beta(S_{\bar{s}}^{i}) \cdot \Pr\{S_{\bar{s}}^{i}, r_{(i-1)n+1}^{in} \mid S_s^{i-1}\} \cdot \Pr\{S_s^{i-1}, r_1^{(i-1)n}\} \\
&= \alpha(S_s^{i-1})\, \beta(S_{\bar{s}}^{i}) \sum_{u=0}^{1} \gamma(u, S_s^{i-1}, S_{\bar{s}}^{i}).
\end{aligned}$$
3. Derivation of $\alpha$: $\alpha(S_0^0) = 1$; $\alpha(S_1^0) = \alpha(S_2^0) = \alpha(S_3^0) = 0$. For $i = 1, \ldots, L$,
$$\begin{aligned}
\alpha(S_{\bar{s}}^{i}) &\triangleq \Pr\{S_{\bar{s}}^{i}, r_1^{in}\}
= \sum_{s=0}^{3} \Pr\{S_s^{i-1}, S_{\bar{s}}^{i}, r_1^{in}\}
= \sum_{s=0}^{3} \Pr\{S_s^{i-1}, S_{\bar{s}}^{i}, r_1^{(i-1)n}, r_{(i-1)n+1}^{in}\} \\
&= \sum_{s=0}^{3} \Pr\{S_s^{i-1}, r_1^{(i-1)n}\}\, \Pr\{S_{\bar{s}}^{i}, r_{(i-1)n+1}^{in} \mid S_s^{i-1}, r_1^{(i-1)n}\}
= \sum_{s=0}^{3} \alpha(S_s^{i-1})\, \Pr\{S_{\bar{s}}^{i}, r_{(i-1)n+1}^{in} \mid S_s^{i-1}\} \\
&= \sum_{s=0}^{3} \alpha(S_s^{i-1}) \sum_{u=0}^{1} \gamma(u, S_s^{i-1}, S_{\bar{s}}^{i}).
\end{aligned}$$

4. Derivation of $\beta$: $\beta(S_0^L) = 1$; $\beta(S_1^L) = \beta(S_2^L) = \beta(S_3^L) = 0$. For $i = L-1, \ldots, 0$,
$$\begin{aligned}
\beta(S_{\bar{s}}^{i}) &\triangleq \Pr\{r_{in+1}^{N} \mid S_{\bar{s}}^{i}\}
= \sum_{s=0}^{3} \Pr\{S_s^{i+1}, r_{in+1}^{N} \mid S_{\bar{s}}^{i}\}
= \sum_{s=0}^{3} \frac{\Pr\{S_{\bar{s}}^{i}, S_s^{i+1}, r_{in+1}^{N}\}}{\Pr\{S_{\bar{s}}^{i}\}} \\
&= \sum_{s=0}^{3} \frac{\Pr\{r_{(i+1)n+1}^{N} \mid S_{\bar{s}}^{i}, S_s^{i+1}, r_{in+1}^{(i+1)n}\}\, \Pr\{S_{\bar{s}}^{i}, S_s^{i+1}, r_{in+1}^{(i+1)n}\}}{\Pr\{S_{\bar{s}}^{i}\}}
= \sum_{s=0}^{3} \frac{\Pr\{r_{(i+1)n+1}^{N} \mid S_s^{i+1}\}\, \Pr\{S_{\bar{s}}^{i}, S_s^{i+1}, r_{in+1}^{(i+1)n}\}}{\Pr\{S_{\bar{s}}^{i}\}} \\
&= \sum_{s=0}^{3} \beta(S_s^{i+1})\, \Pr\{S_s^{i+1}, r_{in+1}^{(i+1)n} \mid S_{\bar{s}}^{i}\}
= \sum_{s=0}^{3} \beta(S_s^{i+1}) \sum_{u=0}^{1} \gamma(u, S_{\bar{s}}^{i}, S_s^{i+1}).
\end{aligned}$$

5. Derivation of $\gamma$:
$$\begin{aligned}
\gamma(u, S_s^{i-1}, S_{\bar{s}}^{i}) &\triangleq \Pr\{u,\ S_{\bar{s}}^{i},\ r_{(i-1)n+1}^{in} \mid S_s^{i-1}\}
= \frac{\Pr\{u, S_s^{i-1}, S_{\bar{s}}^{i}, r_{(i-1)n+1}^{in}\}}{\Pr\{S_s^{i-1}\}} \\
&= \frac{\Pr\{r_{(i-1)n+1}^{in} \mid u, S_s^{i-1}, S_{\bar{s}}^{i}\}\, \Pr\{u, S_s^{i-1}, S_{\bar{s}}^{i}\}}{\Pr\{S_s^{i-1}\}}
= \frac{\Pr\{r_{(i-1)n+1}^{in} \mid x_{(i-1)n+1}^{in}\}\, \Pr\{u, S_s^{i-1}, S_{\bar{s}}^{i}\}}{\Pr\{S_s^{i-1}\}} \\
&= \Pr\{r_{(i-1)n+1}^{in} \mid x_{(i-1)n+1}^{in}\}\, \Pr\{u \mid S_s^{i-1}, S_{\bar{s}}^{i}\}\, \Pr\{S_{\bar{s}}^{i} \mid S_s^{i-1}\} \\
&= \begin{cases} \Pr\{r_{(i-1)n+1}^{in} \mid x_{(i-1)n+1}^{in}(u)\}\, \Pr\{S_{\bar{s}}^{i} \mid S_s^{i-1}\}, & (S_s^{i-1}, S_{\bar{s}}^{i}) \in B_i^{(u)}; \\ 0, & \text{otherwise}; \end{cases} \\
&= \begin{cases} \dfrac{1}{(\sqrt{2\pi}\,\sigma)^n} \exp\!\left(-\dfrac{\sum_{j=(i-1)n+1}^{in} (r_j - x_j(u))^2}{2\sigma^2}\right) \Pr\{u_i = u\}, & (S_s^{i-1}, S_{\bar{s}}^{i}) \in B_i^{(u)}; \\ 0, & \text{otherwise}. \end{cases}
\end{aligned}$$

Note:
• $x_j$ is indeed a function of u (or $u_i$); hence we denote it by $x_j(u)$.
• $\Pr\{u \mid S_s^{i-1}, S_{\bar{s}}^{i}\}$ is either 1 or 0.
• The value of $\Pr\{S_{\bar{s}}^{i} \mid S_s^{i-1}\}$ can be chosen as 1/K, where K is the number of trellis edges that start from node $S_s^{i-1}$ and end at a node at level i; for the transition driven by input u it equals $\Pr\{u_i = u\}$.
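A runnable sketch of these recursions for the terminated (2, 1, 2) code of the earlier examples is given below (Python with numpy assumed; all function names are illustrative, and the prior $\Pr\{u_i = u\}$ is taken as 1/2). Constant factors of the Gaussian density are dropped since they cancel in $\Lambda(i)$. The step-by-step summary of the algorithm follows after the sketch.

```python
# BCJR forward/backward recursions for the nonrecursive (2,1,2) code, state = (u_{t-1}, u_{t-2}).
import numpy as np

def encode_step(state, u):
    """One trellis transition: returns (next state, BPSK pair ((-1)^v1, (-1)^v2))."""
    s1, s2 = state
    v1, v2 = u ^ s1 ^ s2, u ^ s2          # v1 = u + u_{t-1} + u_{t-2}, v2 = u + u_{t-2}
    return (u, s1), ((-1.0) ** v1, (-1.0) ** v2)

def bcjr_llr(r, sigma2):
    L, n = len(r) // 2, 2
    states = [(a, b) for a in (0, 1) for b in (0, 1)]

    def gamma(i, s, u):                   # branch metric for the edge leaving state s with input u
        s_next, x = encode_step(s, u)
        rr = r[(i - 1) * n:i * n]
        metric = np.exp(-sum((rj - xj) ** 2 for rj, xj in zip(rr, x)) / (2 * sigma2))
        return s_next, 0.5 * metric       # prior Pr{u_i = u} = 1/2; constants cancel in the LLR

    alpha = [{s: 0.0 for s in states} for _ in range(L + 1)]
    alpha[0][(0, 0)] = 1.0                # forward recursion
    for i in range(1, L + 1):
        for s in states:
            for u in (0, 1):
                s_next, g = gamma(i, s, u)
                alpha[i][s_next] += alpha[i - 1][s] * g

    beta = [{s: 0.0 for s in states} for _ in range(L + 1)]
    beta[L][(0, 0)] = 1.0                 # backward recursion; trellis terminated in the zero state
    for i in range(L - 1, -1, -1):
        for s in states:
            for u in (0, 1):
                s_next, g = gamma(i + 1, s, u)
                beta[i][s] += beta[i + 1][s_next] * g

    llr = []                              # Lambda(i) = log( sum over u=1 edges / sum over u=0 edges )
    for i in range(1, L + 1):
        num, den = 0.0, 0.0
        for s in states:
            for u in (0, 1):
                s_next, g = gamma(i, s, u)
                term = alpha[i - 1][s] * g * beta[i][s_next]
                if u == 1: num += term
                else:      den += term
        llr.append(np.log(num / den))
    return llr

# encode u = (1,0,1) followed by the m = 2 terminating zeros, transmit over AWGN and decode
u = [1, 0, 1, 0, 0]
state, x = (0, 0), []
for ut in u:
    state, pair = encode_step(state, ut)
    x.extend(pair)
sigma2 = 0.5
rng = np.random.default_rng(0)
r = np.array(x) + rng.normal(0.0, np.sqrt(sigma2), len(x))
print(np.round(bcjr_llr(r, sigma2), 2))   # positive LLR -> decide u_i = 1
```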
BCJR (MAP) Algorithm (2)

Step 1: Forward recursion.

Step 1-1: $\alpha(S_0^0) = 1$; $\alpha(S_1^0) = \alpha(S_2^0) = \alpha(S_3^0) = 0$.

Step 1-2: For $i = 1, \ldots, L$, $s = 0, 1, 2, 3$, $\bar{s} = 0, 1, 2, 3$, and $u = 0, 1$, compute
$$\gamma(u, S_s^{i-1}, S_{\bar{s}}^{i}) =
\begin{cases} \dfrac{\Pr\{u_i = u\}}{(\sqrt{2\pi}\,\sigma)^n}\, e^{-\frac{\sum_{j=(i-1)n+1}^{in} (r_j - x_j(u))^2}{2\sigma^2}}, & (S_s^{i-1}, S_{\bar{s}}^{i}) \in B_i^{(u)}; \\ 0, & \text{otherwise}. \end{cases}$$

Step 1-3: For $i = 1, \ldots, L$ and $\bar{s} = 0, 1, 2, 3$,
$$\alpha(S_{\bar{s}}^{i}) = \sum_{s=0}^{3} \alpha(S_s^{i-1}) \sum_{u=0}^{1} \gamma(u, S_s^{i-1}, S_{\bar{s}}^{i}).$$

Step 2: Backward recursion.

Step 2-1: $\beta(S_0^L) = 1$; $\beta(S_1^L) = \beta(S_2^L) = \beta(S_3^L) = 0$.

Step 2-2: For $i = L-1, \ldots, 0$ and $\bar{s} = 0, 1, 2, 3$, compute
$$\beta(S_{\bar{s}}^{i}) = \sum_{s=0}^{3} \beta(S_s^{i+1}) \sum_{u=0}^{1} \gamma(u, S_{\bar{s}}^{i}, S_s^{i+1}).$$

Step 3: Soft decision. For $i = 1, \ldots, L$,
$$\Lambda(i) = \log \frac{\displaystyle\sum_{(S_s^{i-1}, S_{\bar{s}}^{i}) \in B_i^{(1)}} \alpha(S_s^{i-1})\, \beta(S_{\bar{s}}^{i}) \sum_{u=0}^{1} \gamma(u, S_s^{i-1}, S_{\bar{s}}^{i})}{\displaystyle\sum_{(S_s^{i-1}, S_{\bar{s}}^{i}) \in B_i^{(0)}} \alpha(S_s^{i-1})\, \beta(S_{\bar{s}}^{i}) \sum_{u=0}^{1} \gamma(u, S_s^{i-1}, S_{\bar{s}}^{i})}.$$

Some Important Codes

1. Algebraic codes: cyclic codes, Bose-Chaudhuri-Hocquenghem (BCH) codes, Reed-Solomon (RS) codes, and algebraic-geometric (AG) codes.

2. Codes for bandwidth-limited channels: Ungerboeck codes.

3. Codes approaching the Shannon bound: turbo codes and low-density parity-check (LDPC) codes.

4. Codes for multi-input (multiple transmit antennas), multi-output (multiple receive antennas) channels: space-time codes.

References

[1] L. R. Bahl, J. Cocke, F. Jelinek, and J. Raviv, "Optimal decoding of linear codes for minimizing symbol error rate," IEEE Trans. Inform. Theory, pp. 284-287, March 1974.

[2] M. Bossert, Channel Coding for Telecommunications. New York, NY: John Wiley and Sons, 1999.

[3] J. H. Conway and N. J. A. Sloane, "Soft decoding techniques for codes and lattices, including the Golay code and the Leech lattice," IEEE Trans. Inform. Theory, pp. 41-50, January 1986.

[4] G. D. Forney, Jr., "Coset codes - Part II: Binary lattices and related codes," IEEE Trans. Inform. Theory, pp. 1152-1187, September 1988.

[5] Y. S. Han, Efficient Soft-Decision Decoding Algorithms for Linear Block Codes Using Algorithm A*, PhD thesis, School of Computer and Information Science, Syracuse University, Syracuse, NY, 1993.

[6] A. J. Viterbi, "Error bounds for convolutional codes and an asymptotically optimum decoding algorithm," IEEE Trans. Inform. Theory, vol. IT-13, no. 2, pp. 260-269, April 1967.

[7] J. K. Wolf, "Efficient maximum likelihood decoding of linear block codes using a trellis," IEEE Trans. Inform. Theory, pp. 76-80, January 1978.