
exercise in the previous class (Apr. 26)


Presentation Transcript


  1. exercise in the previous class (Apr. 26) • For s = 110010111110001000110100101011101100 (|s| = 36), compute the χ²-values of s for block lengths 1, 2, 3 and 4. • n = 1: 0 and 1 are expected to appear 18 times each. s contains |0| = 17, |1| = 19, so χ² = 1²/18 + 1²/18 = 1/9 • n = 2: four patterns × 4.5 times each. s contains |00| = 5, |01| = 1, |10| = 6, |11| = 6, so χ² = 0.5²/4.5 + 3.5²/4.5 + 1.5²/4.5 + 1.5²/4.5 ≈ 3.78 • n = 3: χ² ≈ 2.67; n = 4: χ² ≈ 14.11 • Implement the linear congruential method as a computer program. → see the Excel file on http://apal.naist.jp/~kaji/lecture/
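Both tasks above can be sketched in Python. This is a minimal illustration, not the Excel sheet from the course page, and the LCG parameters a, c, m are arbitrary example values.

```python
# Sketch: chi-squared block test for the exercise string, plus a
# linear congruential generator (a, c, m are illustrative choices).

def chi_squared(s, n):
    """Chi-squared statistic of bit string s for block length n."""
    blocks = [s[i:i + n] for i in range(0, len(s) - len(s) % n, n)]
    expected = len(blocks) / 2**n          # each pattern equally likely
    total = 0.0
    for k in range(2**n):
        pattern = format(k, f'0{n}b')
        observed = blocks.count(pattern)
        total += (observed - expected)**2 / expected
    return total

def lcg(seed, a=5, c=3, m=16):
    """Linear congruential generator: x_{i+1} = (a*x_i + c) mod m."""
    x = seed
    while True:
        x = (a * x + c) % m
        yield x

s = "110010111110001000110100101011101100"
print(round(chi_squared(s, 1), 4))   # 1/9 ≈ 0.1111
```

Running `chi_squared` for n = 2, 3, 4 reproduces the values 3.78, 2.67 and 14.11 quoted above.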

  2. chapter 3: coding for noisy communication

  3. about this chapter • coding techniques for finding and correcting errors • coding in Chapter 2 ... source coding (情報源符号化) • works next to the information source • gives a compact representation • coding in this chapter ... channel coding (通信路符号化) • works next to the communication channel • gives a representation which is robust against errors

  4. two codings [diagram] sender → encode (source coding) → encrypt (protection, optional) → encode (channel coding) → channel → decode (channel coding) → decrypt → decode (source coding) → receiver

  5. today’s class • motivation and overview • rephrase of the first day introduction • the models of communication channels • binary symmetric channel (BSC) • elementary components for linear codes • (even) parity check code • horizontal and vertical parity check code

  6. real communication is erroneous • no guarantee that “sent information = received information” • radio (wireless) communication: noise alters the signal waveform • optical disk media (CD, DVD...): dirt and scratches obstruct correct reading • the difference between the sent and received information = error Errors may be reduced, but cannot be eliminated.

  7. error correction in daily life Three types of errors in our daily life: • correctable: “take a train for Ikomo” → “... Ikoma” • detectable: “He is from Naga city” → “Nara” or “Naha”? • undetectable: “the zip code is 6300102” → ??? What makes the difference? names of places (Nara, Ikoma, Naga, Naha, ...) are sparse (疎) in the set; zip codes (6300101, 6300102, 6300103, ...) are densely (密) packed.

  8. trick of error correction and detection For error correction, we need to create sparseness artificially (人工的に). • phonetic code: Alpha, Bravo, Charlie..., あさひの「あ」 (“a” as in asahi), いろはの「い」 (“i” as in iroha)... • characters are densely packed; phonetic codes are sparse To create the sparseness, we add redundant sequences, enlarge the space, and keep representations apart.

  9. in a binary world Assume that you want to send two binary bits. • without redundancy... even if you receive “01”, you cannot say whether • “01” is the original data, or • the result of modification by errors • with redundancy... encode 00 → 00000, 01 → 01011, 10 → 10101, 11 → 11110 • encode: to add the redundancy • codewords: the results of encoding • code: the set of codewords

  10. the principle of error correction The sender and the receiver agree on the code. • sender: choose one codeword c and send it • receiver: try to estimate the sent codeword c’ from a received data r which may contain errors assumption: errors do not occur too frequently. With this assumption, error correction ≈ search for the codeword c’ nearest to r.
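Nearest-codeword decoding can be sketched directly for the example code {00000, 01011, 10101, 11110} from the slides:

```python
# A minimal sketch of nearest-codeword (minimum Hamming distance) decoding
# for the example code from the slides.

CODE = ["00000", "01011", "10101", "11110"]

def hamming(u, v):
    """Number of positions where u and v differ."""
    return sum(a != b for a, b in zip(u, v))

def decode(r):
    """Return the codeword nearest to the received vector r."""
    return min(CODE, key=lambda c: hamming(c, r))

print(decode("00010"))   # one error away from 00000 -> "00000"
```

Note that ties (a received vector equally close to two codewords) are broken arbitrarily here; the slides' assumption is that errors are rare enough for the nearest codeword to be the sent one.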

  11. is it really good? Assume that a symbol is delivered correctly with p = 0.9. • without coding: correct = 0.9² = 0.81, incorrect = 1 − 0.81 = 0.19 • with coding (00 → 00000, 01 → 01011, 10 → 10101, 11 → 11110): correct = 0 or 1 error in the five bits = 0.9⁵ + ₅C₁ · 0.9⁴ · 0.1 = 0.91854, incorrect = 1 − 0.91854 = 0.08146 Good for this case, but not always... → construction of good codes is the main subject of this chapter
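Evaluating the formulas above directly, under the stated assumption that decoding succeeds iff at most one of the five bits is flipped:

```python
# Checking the slide's arithmetic: per-bit success probability p = 0.9,
# decoding succeeds with 0 or 1 error among the five transmitted bits.
from math import comb

p = 0.9
without_coding = p**2                              # both bits must survive
with_coding = p**5 + comb(5, 1) * p**4 * (1 - p)   # 0 or 1 error in 5 bits

print(round(without_coding, 5))   # 0.81
print(round(with_coding, 5))      # 0.91854
```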

  12. the model of channels communication channel: input (sender) → channel → output (receiver) • probabilistic model with one input and one output • the inputs and outputs are symbols (discrete channel) • an output symbol is generated for each input symbol • no loss, no delay • the output symbol is chosen probabilistically

  13. example of channels: 1 • binary symmetric channel (BSC, 二元対称通信路) • inputs = outputs = {0, 1} • input = 0 → output = 0 with prob. 1 − p, 1 with prob. p • input = 1 → output = 0 with prob. p, 1 with prob. 1 − p • p is said to be the bit error probability of the channel A BSC is... • memoryless (記憶がない): errors occur independently • stable (定常): the probability p does not change [transition diagram: 0 → 0 and 1 → 1 with prob. 1 − p; 0 → 1 and 1 → 0 with prob. p]
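A BSC is easy to simulate; the sketch below flips each bit independently with the same probability, matching the memoryless and stable properties. The value p = 0.1 and the fixed seed are arbitrary illustration choices.

```python
# A small simulation sketch of a binary symmetric channel (BSC).
import random

def bsc(bits, p, rng=random.Random(0)):
    """Flip each bit independently with probability p (memoryless, stable)."""
    return [b ^ (rng.random() < p) for b in bits]

sent = [0, 1, 0, 0, 1, 1, 0, 1] * 1000
received = bsc(sent, 0.1)
errors = sum(s != r for s, r in zip(sent, received))
print(errors / len(sent))   # empirical error rate, close to 0.1
```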

  14. example of channels: 2 • binary erasure channel (BEC, 二元消失通信路) • inputs = {0, 1}, outputs = {0, 1, X} • input = 0 → output = 0 with prob. 1 − p, X with prob. p • input = 1 → output = 1 with prob. 1 − p, X with prob. p • X is a “place holder” for the erased symbol [diagram; another variation also allows bit flips: input 0 → output 0 with prob. 1 − p − q, X with prob. p, 1 with prob. q, and symmetrically for input 1]
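The BEC differs from the BSC in that a symbol is never flipped, only lost; a minimal simulation sketch (p = 0.2 and the seed are arbitrary example values):

```python
# A sketch of the binary erasure channel (BEC): each symbol is replaced by
# the placeholder 'X' with probability p, otherwise delivered intact.
import random

def bec(bits, p, rng=random.Random(0)):
    return ['X' if rng.random() < p else b for b in bits]

sent = [0, 1, 1, 0, 1, 0, 0, 1] * 500
received = bec(sent, 0.2)
print(sum(r == 'X' for r in received) / len(sent))   # close to 0.2
```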

  15. example of channels: 3 • channels with memory • there is correlation between occurrences of errors • a channel with “burst errors”, e.g. sent 011010101010010101011010110101010, received 011010101011011010101010110101010 • unstable channels • the probabilistic behavior changes according to time • long-range radio communication, etc.

  16. channel coding: preliminary We will consider channel coding such that... • a binary sequence (vector) of length k is encoded into • a binary sequence (vector, codeword) of length n • k < n, code C ⊆ Vn • a sequence is sometimes written as a tuple: b1b2...bm = (b1, b2, ..., bm) • computation is binary and component-wise: 001 + 101 = 100
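The component-wise binary addition above (001 + 101 = 100) is addition over GF(2), i.e. XOR in each coordinate; a minimal sketch:

```python
# Component-wise binary (mod-2) addition of vectors, as in 001 + 101 = 100.

def vec_add(u, v):
    """Add two binary vectors component-wise over GF(2)."""
    return [(a + b) % 2 for a, b in zip(u, v)]

print(vec_add([0, 0, 1], [1, 0, 1]))   # [1, 0, 0]
```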

  17. good code? • A code C is a subset of Vn • Vn contains 2ⁿ vectors, C contains 2ᵏ vectors → given k and n, there are (2ⁿ choose 2ᵏ) possible codes • Which choice of C is good? (example: k = 2, n = 3, C = {000, 011, 101, 110}) • powerful enough for error correction • easy encoding and easy decoding → the class of linear codes (線形符号)

  18. linear codes • easy encoding • (relatively) easy decoding (error detection/correction) • mathematics helps in constructing good codes • the performance evaluation is not too difficult Most codes used today are linear codes. We study some examples of linear codes (today), and learn the general definition of linear codes (next).

  19. (even) parity code The encoding of an (even) parity code (偶パリティ符号): given a vector (a1, a2, ..., ak) ∈ Vk, • compute p = a1 + a2 + ... + ak, and • let (a1, a2, ..., ak, p) be the codeword of (a1, a2, ..., ak). When k = 3, each of 000, 001, ..., 111 is encoded; e.g. 011 → p = 0 + 1 + 1 = 0, giving the codeword 0110. code C = {0000, 0011, 0101, 0110, 1001, 1010, 1100, 1111}
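The encoding rule above can be sketched in a few lines; listing all k = 3 information vectors reproduces the code C shown on the slide.

```python
# A minimal sketch of the (even) parity encoder: append the mod-2 sum of
# the information bits, so every codeword contains an even number of 1s.

def parity_encode(info):
    """Encode (a1, ..., ak) into (a1, ..., ak, p), p = a1 + ... + ak mod 2."""
    return info + [sum(info) % 2]

# all k = 3 information vectors and their codewords
for k in range(8):
    info = [(k >> i) & 1 for i in (2, 1, 0)]
    print(info, '->', parity_encode(info))
```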

  20. basic property of the parity code • code length: n = k + 1 • a codeword consists of... • the original data itself (information symbols) • an added redundant symbol (parity symbol) → systematic code (組織符号) e.g. in the codeword 1010, the information symbols (bits) are 101 and the parity symbol (bit) is 0 • a vector v of length n is a codeword ⇔ v contains an even number of 1s

  21. parity codes and errors • Even parity codes cannot correct errors. • Even parity codes can detect any odd number of errors. #errors = #differences between the sent and received vectors e.g. sent vector (codeword) 0000, received vector 0101: we have two (2) errors. #errors = even → the received vector contains an even number of 1s #errors = odd → the received vector contains an odd number of 1s

  22. horizontal and vertical parity check code • horizontal and vertical parity check code (2D parity code) • place the information symbols in a rectangular form (長方形) • add parity symbols in the horizontal and vertical directions • reorder all symbols into a vector (codeword) k = 9, encode (a1, a2, ..., a9) in the grid [a1 a2 a3 | p1 / a4 a5 a6 | p2 / a7 a8 a9 | p3 / q1 q2 q3 | r] with p1 = a1 + a2 + a3, p2 = a4 + a5 + a6, p3 = a7 + a8 + a9, q1 = a1 + a4 + a7, q2 = a2 + a5 + a8, q3 = a3 + a6 + a9, r = a1 + a2 + ... + a9 codeword: (a1, a2, ..., a9, p1, p2, p3, q1, q2, q3, r)
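The 2D encoding rule can be sketched as follows, for a 3×3 information block with the slide's symbol ordering (a1..a9, p1, p2, p3, q1, q2, q3, r):

```python
# Encoding sketch for the horizontal/vertical (2D) parity check code.

def encode_2d(a):
    """Encode 9 information bits into a 16-bit 2D parity codeword."""
    assert len(a) == 9
    rows = [a[0:3], a[3:6], a[6:9]]
    p = [sum(row) % 2 for row in rows]          # horizontal parities
    q = [sum(col) % 2 for col in zip(*rows)]    # vertical parities
    r = sum(a) % 2                              # parity of everything
    return a + p + q + [r]

bits = [int(c) for c in "011100101"]
print("".join(map(str, encode_2d(bits))))   # 0111001010100101
```

This reproduces the worked example on the next slide (011100101 → 0111001010100101).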

  23. example of encoding • encode 011100101: grid [0 1 1 | 0 / 1 0 0 | 1 / 1 0 1 | 0 / 0 1 0 | 1], and the codeword is 0111001010100101 (information symbols 011100101, parity symbols 0100101) • 2D codes are systematic codes • if k = ab, then the code length is n = ab + a + b + 1

  24. 2D codes and errors A 2D code can correct any one-bit error in a codeword. • place the symbols of a received vector in the grid • count the number of 1s in each row/column • if there is no error... all rows and columns contain an even number of 1s • if there is one error... • there is exactly one row and one column with an odd number of 1s • the bit at their intersection is the one affected by the error example: received information part 010110101 → one row and one column have odd parity; their intersection is the third bit → correct it to obtain 011110101
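The correction procedure above can be sketched as code: arrange the 16 received bits in the 4×4 grid from the previous slides, find the single row and column with odd parity, and flip the bit at their intersection.

```python
# Decoding sketch for the 2D parity code (single-bit error correction).

def correct_2d(received):
    """Correct at most one flipped bit in a 16-bit 2D parity codeword."""
    assert len(received) == 16
    # grid rows: (a1 a2 a3 p1), (a4 a5 a6 p2), (a7 a8 a9 p3), (q1 q2 q3 r)
    grid = [list(received[0:3]) + [received[9]],
            list(received[3:6]) + [received[10]],
            list(received[6:9]) + [received[11]],
            list(received[12:15]) + [received[15]]]
    bad_rows = [i for i in range(4) if sum(grid[i]) % 2 == 1]
    bad_cols = [j for j in range(4) if sum(row[j] for row in grid) % 2 == 1]
    if len(bad_rows) == 1 and len(bad_cols) == 1:
        grid[bad_rows[0]][bad_cols[0]] ^= 1      # flip the intersection
    # reassemble in codeword order (a1..a9, p1, p2, p3, q1, q2, q3, r)
    a = grid[0][:3] + grid[1][:3] + grid[2][:3]
    p = [grid[0][3], grid[1][3], grid[2][3]]
    return a + p + grid[3][:3] + [grid[3][3]]

codeword = [int(c) for c in "0111001010100101"]
damaged = codeword.copy()
damaged[2] ^= 1                                  # flip the third bit
print(correct_2d(damaged) == codeword)   # True
```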

  25. two-bit errors • What happens if two bits are affected by errors? • e.g. two strange rows and two strange columns: several different two-bit error patterns (the real errors at either pair of intersections) produce the same parity violations, so we cannot decide which has happened. We know something is wrong, but cannot spot the errors.

  26. two-bit errors, another case • real errors in the same row: no strange row (the two errors cancel in the row parity), two strange columns. We know something is wrong, but cannot spot the errors.

  27. additional remark • Do we need the parity of parity, r? grid: [a1 a2 a3 | p1 / a4 a5 a6 | p2 / a7 a8 a9 | p3 / q1 q2 q3 | r]; codeword: (a1, a2, ..., a9, p1, p2, p3, q1, q2, q3, r) Even if we don’t have r, • we can correct any one-bit error, but... • some two-bit errors cause a problem.

  28. additional remark (cont’d) • We expect that 2D codes detect all two-bit errors. • If we don’t use the parity of parity, then... [figure: a two-bit error can move one codeword to within distance one of another codeword, so nearest-codeword decoding “corrects” it to that codeword] some two-bit errors are not detected; instead, they are decoded to a wrong codeword.

  29. summary of today’s class • motivation and overview • rephrase of the first day introduction • the models of communication channels • binary symmetric channel (BSC) • elementary components for linear codes • (even) parity check code • horizontal and vertical parity check code

  30. exercise In the example of slide 11, determine the range of the probability p under which the encoding results in better performance.
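For experimenting with this exercise, a sketch that compares the two success probabilities from slide 11 as the per-symbol probability p varies (the sample values of p are arbitrary):

```python
# Compare success probabilities with and without the 5-bit encoding,
# as functions of the per-bit success probability p.

def success_without(p):
    return p**2                       # both information bits survive

def success_with(p):
    return p**5 + 5 * p**4 * (1 - p)  # 0 or 1 error among the 5 bits

for p in [0.5, 0.7, 0.9, 0.99]:
    print(p, success_with(p) > success_without(p))
```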
