
Basic Concepts of Encoding



  1. Basic Concepts of Encoding Codes and Error Correction

  2. Encoding • Encoding is a transformation procedure applied to the input signal before it enters the communication channel. This procedure adapts the input signal to the communication system and improves its efficiency.

  3. Encoding • In other words, encoding is a procedure for associating words constructed from a finite alphabet of one language (e.g. a natural language) with words of another language (the encoding language) in a one-to-one manner. • Decoding is the inverse operation: restoration of words of the initial language.

  4. Codes • Let A be the alphabet and let its cardinality be m. • Any finite sequence of letters from this alphabet forms a word over it. Let S be the set of all possible words over A. • Some of these words may be meaningful and some may not, but in any case we will use only some of them to encode the information.

  5. Codes • A subset V ⊆ S, which is used for representation of the information in the communication system, is commonly referred to as a code. • If all words from V have the same length n, then the code V is called a uniform code. • If words from V may have different lengths, then the code V is called a non-uniform code.

  6. Digital communications • Let us consider the digital communication channel. • Hence the alphabet is A = Z2 = {0, 1}. • We will consider uniform codes of length n. Thus, the words over Z2 are n-dimensional binary vectors X = (x1, …, xn), xi ∈ {0, 1}, and the vectors of V form the set of “encoding” words.

  7. Distance between binary vectors • The distance ρ (the Hamming distance) between two n-dimensional binary vectors X and Y is the number of components that differ under component-wise comparison. • To find the distance between two binary vectors, add them component-wise mod 2 and then count the number of “1s” in the vector-sum.

  8. Distance between binary vectors • For example, for X = (10110) and Y = (11011) the component-wise sum is X ⊕ Y = (01101), which contains three “1s”, so ρ(X, Y) = 3.
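The rule on the previous slide (add component-wise mod 2, then count the “1s”) can be sketched in Python. This is an illustrative helper, not part of the original slides; the vectors shown are the example values from above.

```python
def hamming_distance(x, y):
    """Number of positions where two equal-length binary tuples differ:
    XOR each pair of components, then count the resulting 1s."""
    assert len(x) == len(y), "vectors must have the same length"
    return sum(xi ^ yi for xi, yi in zip(x, y))

x = (1, 0, 1, 1, 0)
y = (1, 1, 0, 1, 1)
print(hamming_distance(x, y))  # -> 3
```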

  9. Distance between binary vectors • The distance ρ satisfies all the metric axioms: ρ(X, Y) ≥ 0, with ρ(X, Y) = 0 if and only if X = Y; ρ(X, Y) = ρ(Y, X); and ρ(X, Y) ≤ ρ(X, Z) + ρ(Z, Y) (the triangle inequality). • The Hamming norm of a binary vector is the number of “1s” in this vector, so ρ(X, Y) equals the Hamming norm of X ⊕ Y.

  10. Errors • Replacement of one letter in a word by another is commonly referred to as an error. • Let the recipient (the receiver) of the information know the code. • “Detection of the error” means detecting the fact that an error has occurred, without determining exactly where. • “Correction of the error” means the complete restoration of the word that was originally sent and then distorted.

  11. Errors • Suppose the word X ∈ V was transmitted and some bits of X were inverted. As a result, the receiver receives Y ≠ X. • If Y ∈ V, then the error cannot be detected or corrected without analysis of the sense of the whole message. • If Y ∉ V, then the error can be detected and, under certain conditions, corrected.

  12. Maximum likelihood decoding • Let X be transmitted and Y be received, with Y ∉ V. • To correct the error (errors) and decode the corresponding word, we have to find the encoding vector X* ∈ V nearest to Y, i.e. such that ρ(X*, Y) = min ρ(Z, Y) over all Z ∈ V. • This method is called maximum likelihood decoding.
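Maximum likelihood decoding, as defined above, is simply a nearest-codeword search. A minimal sketch, assuming codewords are represented as bit tuples; the function names are illustrative:

```python
def hamming_distance(x, y):
    """Number of differing components of two equal-length bit tuples."""
    return sum(xi ^ yi for xi, yi in zip(x, y))

def ml_decode(y, code):
    """Maximum likelihood decoding: return the codeword nearest to y."""
    return min(code, key=lambda z: hamming_distance(z, y))

V = [(0, 0, 0), (1, 1, 1)]
print(ml_decode((1, 0, 0), V))  # -> (0, 0, 0)
```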

  13. Minimum encoding distance • The quantity d = min ρ(X, Y), taken over all pairs of distinct encoding vectors X, Y ∈ V, is called the minimum encoding distance of the code V. • In other words, the minimum encoding distance equals the minimum distance between the encoding vectors. • Consequently, if the distance between an encoding vector X ∈ V and another vector Y satisfies 0 < ρ(X, Y) < d, then Y ∉ V.

  14. Minimum encoding distance • For example, let n = 3. Then S = {000, 001, 010, 011, 100, 101, 110, 111}. Let V = {000, 111}. Then d = 3. Indeed, ρ(000, 111) = 3, and this is the only pair of distinct encoding vectors.
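For a small code, the minimum encoding distance can be computed by brute force over all pairs of distinct codewords. A sketch under the representation used above (bit tuples); the function name is illustrative:

```python
from itertools import combinations

def hamming_distance(x, y):
    """Number of differing components of two equal-length bit tuples."""
    return sum(xi ^ yi for xi, yi in zip(x, y))

def minimum_distance(code):
    """Minimum Hamming distance over all pairs of distinct codewords."""
    return min(hamming_distance(a, b) for a, b in combinations(code, 2))

V = [(0, 0, 0), (1, 1, 1)]
print(minimum_distance(V))  # -> 3
```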

  15. Criterion of Error Detection • Theorem. The uniform code V detects up to t errors if, and only if, d = t + 1. • Proof. Let d = t + 1. Let X ∈ V be transmitted and Y be received with 1 ≤ ρ(X, Y) ≤ t. Then ρ(X, Y) < d, hence Y ∉ V, and the t errors can be detected. Conversely, let the code detect t errors. Then d ≥ t + 1: otherwise there would exist distinct X, Y ∈ V with ρ(X, Y) ≤ t, and this contradicts the ability to detect t errors.

  16. Example of Error Detection • For example, let n = 3. Then S = {000, 001, 010, 011, 100, 101, 110, 111}. Let V = {000, 111}. Then d = 3 and we can detect (not correct, just detect!!!) 2 errors. • Indeed, if any 1 or 2 of the 3 bits in any encoding vector are inverted, we obtain a vector that does not belong to V. If 3 bits are inverted, we obtain the other encoding vector and cannot detect the errors.
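The detection claim on this slide can be checked exhaustively in a few lines of Python; this is an illustrative verification of the example code V = {000, 111}, not part of the original slides:

```python
from itertools import combinations

V = {(0, 0, 0), (1, 1, 1)}

def flip(x, positions):
    """Invert the bits of x at the given index positions."""
    return tuple(b ^ 1 if i in positions else b for i, b in enumerate(x))

# Inverting any 1 or 2 bits takes a codeword outside V: errors detected.
for x in V:
    for k in (1, 2):
        for pos in combinations(range(3), k):
            assert flip(x, pos) not in V
# Inverting all 3 bits yields the other codeword: errors undetectable.
assert flip((0, 0, 0), (0, 1, 2)) == (1, 1, 1)
print("detection of up to 2 errors verified for V = {000, 111}")
```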

  17. Criterion of Error Correction • Theorem. The uniform code V can correct up to t errors if, and only if, d = 2t + 1. • Proof. Necessity. Let the code correct up to t errors. We have to prove that ρ(X, Y) ≥ 2t + 1 for any two distinct X, Y ∈ V. Suppose this is not true, i.e. ρ(X, Y) ≤ 2t for some distinct X, Y ∈ V. If t < ρ(X, Y) ≤ 2t, then there exists a vector Z with ρ(X, Z) ≤ t and ρ(Y, Z) ≤ t. If X was transmitted and Z was received, we are unable to decode, because Y is within distance t of Z as well as X (when ρ(X, Y) = 2t, X and Y are equidistant from Z). This contradicts the initial condition that the code corrects up to t errors.

  18. Criterion of Error Correction • If ρ(X, Y) ≤ t, X was transmitted and Y was received, then Y ∈ V, which means the receiver decides that Y itself was transmitted. This also contradicts the initial condition that the code corrects up to t errors. Therefore it cannot be that ρ(X, Y) ≤ 2t; hence ρ(X, Y) ≥ 2t + 1 for all distinct X, Y ∈ V, and therefore d = 2t + 1.

  19. Criterion of Error Correction • Proof. Sufficiency. Let the minimum encoding distance be d = 2t + 1. We have to prove that the code can correct up to t errors. Let X ∈ V be transmitted and Y be received with ρ(X, Y) ≤ t. For any other codeword Z ∈ V, the metric axioms (the triangle inequality) give ρ(X, Z) ≤ ρ(X, Y) + ρ(Y, Z), hence ρ(Y, Z) ≥ d − ρ(X, Y) ≥ 2t + 1 − t = t + 1 > ρ(X, Y). Thus X is the unique nearest codeword to Y: if exactly t errors occurred, then X will be decoded; if fewer than t errors occurred, then, a fortiori, X will be decoded.

  20. Example of Error Correction • For example, let n = 3. Then S = {000, 001, 010, 011, 100, 101, 110, 111}. Let V = {000, 111}. Then d = 3 = 2·1 + 1, and we can correct 1 error. • Indeed, if 1 of the 3 bits in any encoding vector is inverted, we obtain a vector that does not belong to V, and we can always determine the unique vector from V whose distance to the distorted vector is exactly 1.

  21. Example of Error Correction • S = {000, 001, 010, 011, 100, 101, 110, 111}. V = {000, 111}. X1 = (000), X2 = (111). • Let X1 = (000) be transmitted and Y = (100) be received. Then ρ(X1, Y) = 1 and ρ(X2, Y) = 2, and we definitely decode X1.

  22. Example of Error Correction • S = {000, 001, 010, 011, 100, 101, 110, 111}. V = {000, 111}. X1 = (000), X2 = (111). • If 2 of the 3 bits in any encoding vector are inverted, we also obtain a vector that does not belong to V. We can detect that errors occurred, but we cannot correct them, because the distorted vector is now closer to the other encoding vector. • Let X1 = (000) be transmitted and Y = (101) be received. Then ρ(X1, Y) = 2 and ρ(X2, Y) = 1, so maximum likelihood decoding yields X2 ≠ X1: there is no way to correct the errors, because the decoding procedure cannot be ambiguous.
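The two examples above (one error corrected, two errors miscorrected) can be replayed with the maximum likelihood decoder sketched earlier; this is an illustrative check, not part of the original slides:

```python
def hamming_distance(x, y):
    """Number of differing components of two equal-length bit tuples."""
    return sum(xi ^ yi for xi, yi in zip(x, y))

def ml_decode(y, code):
    """Maximum likelihood decoding: return the codeword nearest to y."""
    return min(code, key=lambda z: hamming_distance(z, y))

V = [(0, 0, 0), (1, 1, 1)]
x1 = (0, 0, 0)

# One error: Y = (100) is decoded back to X1 = (000).
assert ml_decode((1, 0, 0), V) == x1
# Two errors: Y = (101) is nearer to (111), so decoding yields X2, not X1.
assert ml_decode((1, 0, 1), V) == (1, 1, 1)
print("1 error corrected; 2 errors decoded to the wrong codeword")
```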
