Example 2-input 4-output DMC


Presentation Transcript


  1. Example 2-input 4-output DMC
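
The channel figure itself does not survive in the transcript, but the decoding idea can be illustrated in code. Below is a minimal sketch of maximum-likelihood metric computation for a binary-input, 4-output DMC, assuming hypothetical transition probabilities P(r|v) (the slide's actual values are not recoverable):

```python
import numpy as np

# Hypothetical transition probabilities P(r|v) for a binary-input,
# 4-output DMC -- illustrative values, not the slide's figure.
# Rows: input v in {0, 1}; columns: the four quantized outputs.
P = np.array([[0.50, 0.30, 0.15, 0.05],
              [0.05, 0.15, 0.30, 0.50]])

log_metric = np.log(P)  # ML decoding maximizes sums of log P(r|v)

def path_metric(v, r):
    """Sum of per-symbol log-likelihoods log P(r_l | v_l)."""
    return sum(log_metric[vl, rl] for vl, rl in zip(v, r))

r = [0, 1, 3, 2, 0]  # received quantized symbols
for v in ([0, 0, 0, 0, 0], [0, 1, 1, 1, 0]):
    print(v, path_metric(v, r))  # the ML decoder picks the larger metric
```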

  2. Example

  3. Example BSC • Transition probability p = 0.3 • Recall: MLD decodes to the closest codeword in Hamming distance • Note: extremely poor channel! • Channel capacity = 1 − H(0.3) ≈ 0.12
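
As a quick check of the capacity figure, here is a small computation of the BSC capacity C = 1 − H(p):

```python
from math import log2

def bsc_capacity(p):
    """Capacity of a binary symmetric channel: C = 1 - H(p)."""
    if p in (0.0, 1.0):
        return 1.0
    h = -p * log2(p) - (1 - p) * log2(1 - p)  # binary entropy H(p)
    return 1 - h

print(bsc_capacity(0.3))  # ~0.1187 bits/channel use, i.e. about 0.12
```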

  4. Example [figure: Viterbi decoding trellis annotated with accumulated path metrics; the diagram is not recoverable from the transcript]

  5. The Viterbi algorithm for the binary-input AWGN channel • Recall from Chapter 10: • Metric: correlation • Decode to the codeword v whose correlation r·v with the received sequence r is maximum • Symbol metrics: r_l·v_l
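
A minimal sketch of such a decoder, assuming the (2,1,2) convolutional code with generators (7,5) in octal (an illustrative choice, not necessarily the slides' code) and the BPSK mapping 0 → −1, 1 → +1; the path metric is the correlation r·v, maximized over the trellis:

```python
import numpy as np

G = [(1, 1, 1), (1, 0, 1)]  # generator taps g1=111, g2=101 (assumed code)

def encode_bit(state, bit):
    """One trellis step: next state and two BPSK output symbols."""
    bits = (bit,) + state
    out = tuple(2 * (sum(b * g for b, g in zip(bits, taps)) % 2) - 1
                for taps in G)  # map {0,1} -> {-1,+1}
    return (bit, state[0]), out

def viterbi(r):
    """Max-correlation Viterbi decoding of soft received values r."""
    states = [(0, 0), (0, 1), (1, 0), (1, 1)]
    metric = {s: -np.inf for s in states}
    metric[(0, 0)] = 0.0          # start in the all-zero state
    paths = {s: [] for s in states}
    for t in range(0, len(r), 2):
        new_metric = {s: -np.inf for s in states}
        new_paths = {}
        for s in states:
            if metric[s] == -np.inf:
                continue
            for bit in (0, 1):
                ns, out = encode_bit(s, bit)
                m = metric[s] + out[0] * r[t] + out[1] * r[t + 1]
                if m > new_metric[ns]:      # keep the survivor path
                    new_metric[ns] = m
                    new_paths[ns] = paths[s] + [bit]
        metric, paths = new_metric, new_paths
    best = max(states, key=lambda s: metric[s])
    return paths[best]

# Noisy BPSK observation of the all-zero codeword (all -1 sent):
rng = np.random.default_rng(1)
r = -1 + 0.5 * rng.standard_normal(12)
print(viterbi(r))  # should usually decode to all zeros
```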

  6. Performance bounds • Linear code and symmetric DMC: • We can w.l.o.g. assume that the all-zero codeword is transmitted • ...and that, if we make a mistake, the decoder chooses some other codeword • Thus the decoding mistake takes place at the time instant in the trellis where the wrong codeword re-merges with the all-zero path • Therefore we are interested in the elementary codewords (first-event errors)

  7. More about first event errors

  8. Performance bounds: BSC • The first-event error probability at time t is bounded by the union bound P_f(E) ≤ Σ_{d≥d_free} A_d P_d, where A_d is the number of weight-d codewords • Assume that the wrong codeword has weight d; P_d is channel specific • For the BSC with crossover probability p and d odd: P_d = Σ_{k=(d+1)/2}^{d} C(d,k) p^k (1−p)^{d−k}; for d even, add half of the tie term k = d/2
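
A small sketch of this bound, using the weight spectrum of the (7,5) code (A_d = 2^(d−5), d_free = 5) as an illustration and truncating the summation at d = 8:

```python
from math import comb

def pd_bsc(d, p):
    """Pairwise error probability P_d on a BSC with crossover p."""
    if d % 2:  # d odd: error iff more than d/2 of the d positions flip
        return sum(comb(d, k) * p**k * (1 - p)**(d - k)
                   for k in range((d + 1) // 2, d + 1))
    # d even: ties (exactly d/2 flips) are broken at random
    tie = 0.5 * comb(d, d // 2) * p**(d // 2) * (1 - p)**(d // 2)
    return tie + sum(comb(d, k) * p**k * (1 - p)**(d - k)
                     for k in range(d // 2 + 1, d + 1))

# Weight spectrum A_d of the (7,5) code, truncated at d = 8:
A = {5: 1, 6: 2, 7: 4, 8: 8}
p = 0.01
print(sum(Ad * pd_bsc(d, p) for d, Ad in A.items()))  # union bound on P(E)
```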

  9. Simplifications of P_d • Bhattacharyya-type bound: P_d ≤ [2√(p(1−p))]^d • With the bit weight enumerator B(X) = Σ_d B_d X^d this gives the bit-error bound P_b(E) < B(X) evaluated at X = 2√(p(1−p))
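
A quick numerical comparison of the exact P_d (odd d, as above) against the simpler Z^d term, with Z = 2√(p(1−p)):

```python
from math import comb, sqrt

def pd_bsc_odd(d, p):
    """Exact pairwise error probability on a BSC, d odd."""
    return sum(comb(d, k) * p**k * (1 - p)**(d - k)
               for k in range((d + 1) // 2, d + 1))

p = 0.01
Z = 2 * sqrt(p * (1 - p))  # Bhattacharyya parameter of the BSC
for d in (5, 7, 9):
    print(d, pd_bsc_odd(d, p), Z**d)  # exact P_d vs. the looser Z^d bound
```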

  10. Performance bounds: Binary-input AWGN • Asymptotic coding gain γ = 10 log10(R·d_free/2) dB • BSC derived from AWGN (hard decisions): crossover probability p = Q(√(2R·E_b/N_0))

  11. P_d for binary-input AWGN • v correct: (−1,...,−1); v' incorrect • Count only the d positions where v' and v differ • In those positions the received values are d independent Gaussian random variables, each with mean −1 and variance N_0/(2E_s) • Hence P_d = Q(√(2d·E_s/N_0)) = Q(√(2d·R·E_b/N_0))
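
A Monte Carlo check of this closed form under the stated model (d Gaussians with mean −1; an error occurs when their sum exceeds 0); d, R, and E_b/N_0 below are illustrative values:

```python
import numpy as np
from math import sqrt, erfc

def Q(x):
    """Gaussian tail function."""
    return 0.5 * erfc(x / sqrt(2))

d, R, EbN0_dB = 5, 0.5, 3.0          # illustrative parameters
EsN0 = R * 10 ** (EbN0_dB / 10)
sigma = sqrt(1 / (2 * EsN0))         # noise std dev with Es normalized to 1

# d positions, correct symbol -1; error if the sum of the received
# values in those positions exceeds 0 (favoring the wrong codeword).
rng = np.random.default_rng(0)
r = -1 + sigma * rng.standard_normal((200_000, d))
print((r.sum(axis=1) > 0).mean())    # simulated P_d
print(Q(sqrt(2 * d * EsN0)))         # closed form Q(sqrt(2 d R Eb/N0))
```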

  12. P_d for binary-input AWGN • Asymptotic coding gain (soft decisions) γ = 10 log10(R·d_free) dB

  13. Rate 1/3 convolutional code • Asymptotic coding gain γ = 10 log10((1/3)·7) = 3.68 dB • The real coding gain in the performance plot is about 3.3 dB
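
The two gain formulas side by side, applied to this R = 1/3, d_free = 7 code:

```python
from math import log10

def gain_soft_dB(R, dfree):
    """Asymptotic coding gain, soft decisions: 10 log10(R * dfree)."""
    return 10 * log10(R * dfree)

def gain_hard_dB(R, dfree):
    """Asymptotic coding gain, hard decisions: 10 log10(R * dfree / 2)."""
    return 10 * log10(R * dfree / 2)

print(gain_soft_dB(1/3, 7))  # 3.68 dB, matching the slide
print(gain_hard_dB(1/3, 7))  # 0.67 dB: the ~3 dB hard-decision penalty
```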

  14. Some further comments • Union bound poor for small SNR: contributions from higher-order terms become significant • When to stop the summation? • Other bounds: divide the summation in two parts (near/far) and replace the far part by a bound

  15. Performance: Optimum R=1/2 codes on AWGN

  16. Performance: Optimum R=1/2 codes on hard-quantized AWGN

  17. R=1/2, ν=4 code, varying quantization

  18. Rate 1/3 codes

  19. Rate 2/3 codes

  20. Code design: Computer search • Criteria: • High d_free • Low A_dfree (low B_dfree)? • High slope / low truncation length* • Use the Viterbi algorithm to determine the weight properties • Example:

  21. Example: Viterbi algorithm for distance-properties calculation [figure: trellis labeled with accumulated Hamming weights; a code sketch of the same computation follows below]
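
A minimal sketch of the idea: a Viterbi-style search over the trellis for the minimum weight of a path that diverges from and re-merges with the all-zero state, i.e. d_free. The (7,5) rate-1/2 code is assumed here for illustration, not necessarily the code in the slide's figure:

```python
G = [(1, 1, 1), (1, 0, 1)]  # generator taps of the assumed (7,5) code

def step(state, bit):
    """One trellis transition: next state and output Hamming weight."""
    bits = (bit,) + state
    w = sum(sum(b * g for b, g in zip(bits, taps)) % 2 for taps in G)
    return (bit, state[0]), w

def dfree(max_steps=20):
    """Minimum weight over paths that leave and re-merge with state 00."""
    INF = float("inf")
    best = INF
    ns, w = step((0, 0), 1)   # the diverging branch (input 1)
    dist = {ns: w}            # best accumulated weight per state
    for _ in range(max_steps):
        new = {}
        for s, d in dist.items():
            for bit in (0, 1):
                ns, w = step(s, bit)
                if ns == (0, 0):
                    best = min(best, d + w)  # re-merged: candidate d_free
                elif d + w < new.get(ns, INF):
                    new[ns] = d + w
        dist = new
    return best

print(dfree())  # 5 for the (7,5) code
```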

  22. Suggested exercises • 12.6-12.21
