
  1. In the Name of God. Computer Networks, Chapter 3: The Data Link Layer (part 2). Dr. Shahriar Bijani, Shahed University, March 2014

  2. References: • A. S. Tanenbaum and D. J. Wetherall, Computer Networks, 5th Edition, Pearson Education, and the book slides, 2011. • Chapter 6 of Data Communications and Computer Networks: A Business User's Approach, 6th Edition. • B. A. Forouzan, Data Communications and Networking, 5th Edition, McGraw-Hill, and the lecture slides, 2012.

  3. Error Detection and Correction • Noise is always present • White Noise (thermal or Gaussian noise) • Impulse Noise

  4. Error Detection and Correction • Two basic strategies to deal with errors: • Include enough redundant information to enable the receiver to deduce the original data: Error correcting codes. • Include only enough redundancy to allow the receiver to deduce that an error has occurred (but not which error): Error detecting codes.

  5. Error Detection & Correction Code • Hamming codes. • Binary convolutional codes. • Reed-Solomon codes. • Low-Density Parity Check codes.

  6. Error Detection & Correction Code • All the codes presented on the previous slide add redundancy to the information that is sent. • A frame consists of • m data bits (the message) and • r redundant bits (the check bits). • Block code: the r check bits are computed solely as a function of the m data bits with which they are associated, as if the m bits were looked up in a large table to find their corresponding r check bits. • Systematic code: the m data bits are sent directly, along with the check bits (rather than being encoded). • Linear code: the r check bits are computed as a linear function of the m data bits. XOR (modulo-2 addition) is a popular choice.

  7. Error Detection & Correction Code • n – total length of the block (i.e., n = m + r), called an n-bit codeword. • Such a code is described as an (n, m) code. • m/n – code rate (roughly 1/2 for a noisy channel, close to 1 for a high-quality channel).

  8. Error Detection & Correction Code Example • Transmitted: 10001001 • Received: 10110001 • XOR: 00111000 – the XOR operation marks the bits that differ. • Hamming distance: the number of bit positions in which two codewords differ. • If two codewords are a Hamming distance d apart, it takes d single-bit errors to convert one into the other. • Minimum Hamming distance: the smallest Hamming distance between all possible pairs of codewords in the code.
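As an illustration (not part of the original slides), a short Python sketch of the Hamming-distance computation above; the function name is my own:

```python
def hamming_distance(a: int, b: int) -> int:
    """Number of bit positions in which two codewords differ."""
    return bin(a ^ b).count("1")   # XOR marks the differing bits

# The slide's example: 10001001 vs. 10110001 differ in 3 positions.
print(hamming_distance(0b10001001, 0b10110001))   # -> 3
```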

  9. Error Detection & Correction Code • All 2^m possible data messages are legal, but due to the way the check bits are computed, not all 2^n possible codewords are used. • Only a small fraction, 2^m / 2^n = 1/2^r, of the possible codewords are legal. • The error-detecting and error-correcting properties of a block code depend on its Hamming distance. • To reliably detect d errors, we need a code with distance d + 1. • To correct d' errors, we need a code with distance 2d' + 1.

  10. Error Detection & Correction Code Example: • 4 valid codewords: • 0000000000 • 0000011111 • 1111100000 • 1111111111 • The minimum distance is 5 => the code can correct 2 errors and detect 4 errors. • If 0000000111 arrives, it is within a single- or double-bit error of 0000011111, so the receiver must assume the original transmission was 0000011111. • But if 0000000000 suffered a triple error and 0000000111 was received, this decoding is wrong; such an error can only be detected, not corrected.
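A quick numerical check of this example (an illustrative Python sketch, not from the slides): the minimum pairwise Hamming distance of the four codewords is indeed 5.

```python
codewords = ["0000000000", "0000011111", "1111100000", "1111111111"]

def distance(a: str, b: str) -> int:
    """Hamming distance between two equal-length bit strings."""
    return sum(x != y for x, y in zip(a, b))

d_min = min(distance(a, b)
            for i, a in enumerate(codewords)
            for b in codewords[i + 1:])
print(d_min)   # -> 5, so up to 4 errors can be detected and 2 corrected
```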

  11. Error Detection & Correction Code • Error correction requires evaluating each candidate codeword, which may be a time-consuming search. • Careful code design can minimize this search time. • In theory, if n = m + r, there is a lower limit on the number of check bits needed to correct single errors: • (m + r + 1) ≤ 2^r
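The bound can be checked numerically. The sketch below (illustrative Python, not from the slides) finds the smallest r satisfying (m + r + 1) ≤ 2^r for a given m:

```python
def min_check_bits(m: int) -> int:
    """Smallest r with m + r + 1 <= 2**r (single-error-correcting bound)."""
    r = 1
    while m + r + 1 > 2 ** r:
        r += 1
    return r

print(min_check_bits(4))    # -> 3, matching the (7, 4) Hamming code
print(min_check_bits(11))   # -> 4, matching the (15, 11) Hamming code
```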

  12. 1. The Hamming Code • Creating the codeword: • Check bits (parity bits) occupy all bit positions that are powers of 2: (p1, p2, p4, p8, p16, …). • The remaining bit positions are filled with the m data bits: (m3, m5, m6, m7, m9, m10, m11, m12, m13, …). • Each parity bit covers some of the bits in the codeword; the position of the parity bit determines the sequence of bits that it alternately checks and skips: • Position 1: check 1 bit, skip 1 bit, check 1 bit, skip 1 bit, … (1, 3, 5, 7, 9, 11, 13, 15, …) • Position 2: check 2 bits, skip 2 bits, … (2, 3, 6, 7, 10, 11, 14, 15, …) • Position 4: check 4 bits, skip 4 bits, … (4, 5, 6, 7, 12, 13, 14, 15, 20, 21, 22, 23, …) • Position 8: check 8 bits, skip 8 bits, … (8–15, 24–31, 40–47, …), and so on. • Set a parity bit to 1 if the total number of 1s in the positions it checks is odd; set it to 0 if that number is even.
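A minimal Python sketch of this construction (illustrative; the function name is my own). Data bits are placed into the non-power-of-two positions in increasing position order, and each power-of-two position gets an even-parity bit over the positions it covers:

```python
def hamming_encode(data):
    """Place data bits in the non-power-of-two positions (3, 5, 6, 7, ...)
    and give each power-of-two position p an even-parity bit covering every
    position whose number has bit p set."""
    code, pos, i = {}, 1, 0
    while i < len(data):
        if pos & (pos - 1):                    # pos is not a power of two
            code[pos] = data[i]
            i += 1
        pos += 1
    n = pos - 1                                # codeword length
    p = 1
    while p <= n:                              # compute the parity bits
        code[p] = sum(v for q, v in code.items() if q != p and q & p) % 2
        p <<= 1
    return [code[q] for q in range(1, n + 1)]  # codeword, position 1 first

# Data bits (m3, m5, m6, m7) = (1, 0, 1, 1) -- i.e. the 4-bit message 1101
# read with its leading bit in position 7 -- give the codeword 0110011
# (position 1 first), which is 1100110 when written from bit 7 down to
# bit 1, matching the example on the next slides.
print(hamming_encode([1, 0, 1, 1]))            # -> [0, 1, 1, 0, 0, 1, 1]
```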

  13. Hamming Code: Example • m = 4 data bits (D) and r = 3 check bits give an n = 7-bit codeword: • This would be called a (7, 4) code. • The 3 bits to be added are EVEN parity bits (P), where the parity of each is computed over a different subset of the message bits, as shown below:

  14. Hamming Code: Example • For example, the message 1101 would be sent as 1100110, since:

  15. Hamming Code: Parity Circles • When these 7 bits are entered into the parity circles, it can be confirmed that the choice of these 3 parity bits ensures that the parity within each circle is EVEN:

  16. Hamming Code: Example • If an error occurs in any of the seven bits, it will affect a different combination of the three parity bits, depending on the bit position. • E.g. a single-bit error occurs: • transmitted message: 1 1 0 0 1 1 0 (bit no. 7 6 5 4 3 2 1) • received message: 1 1 1 0 1 1 0 (bit no. 7 6 5 4 3 2 1) • The above error (in bit 5) can be corrected by examining which of the three parity bits was affected by the bad bit:

  17. Hamming Code: Error Detection • Recomputing the three parity checks on the received message gives the pattern 1 0 1. • The bad parity bits, read as the binary number 101, point directly to the bad bit, since 101 binary equals 5. • Examination of the 'parity circles' confirms that any single-bit error can be corrected in this way.
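A small Python sketch of this correction step for the (7, 4) layout used above (illustrative; the list stores bit 7 first, as written on the slide, and the function name is my own):

```python
def hamming74_correct(code):
    """code = [b7, b6, b5, b4, b3, b2, b1].  Recompute the three even-parity
    checks; read them as a binary number c4 c2 c1 giving the bad position."""
    b7, b6, b5, b4, b3, b2, b1 = code
    c4 = b4 ^ b5 ^ b6 ^ b7        # parity check over positions 4, 5, 6, 7
    c2 = b2 ^ b3 ^ b6 ^ b7        # parity check over positions 2, 3, 6, 7
    c1 = b1 ^ b3 ^ b5 ^ b7        # parity check over positions 1, 3, 5, 7
    bad = 4 * c4 + 2 * c2 + c1    # 0 means no single-bit error detected
    if bad:
        code[7 - bad] ^= 1        # flip the bad bit (list is bit 7 first)
    return code

received = [1, 1, 1, 0, 1, 1, 0]        # 1110110, bit 5 flipped in transit
print(hamming74_correct(received))      # -> [1, 1, 0, 0, 1, 1, 0]
```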

  18. Hamming Code: Error Detection Example of an (11, 7) Hamming code correcting a single-bit error.

  19. Hamming Code: Summary • The value of the Hamming code: • Detection of 2 bit errors (assuming no correction is attempted); • Correction of single bit errors; • Cost of 3 bits added to a 4-bit message. • The ability to correct single bit errors comes at a cost which is less than sending the entire message twice. (Recall that simply sending a message twice accomplishes no error correction.)

  20. 2. Error Detection & Correction:Convolutional Codes • Not a block code • There is no natural message size or encoding boundary as in a block code. • The output depends on the current and previous input bits. Encoder has memory. • Constraint length of the code: the number of previous bits on which the output depends. • They are deployed as part of the • GSM mobile phone system • Satellite Communications, and • 802.11 (see example in the previous slide).

  21. Convolutional Encoders • A convolutional encoder is a linear system. • A binary convolutional encoder can be represented as a shift register. • The outputs of the encoder are modulo-2 sums of the values in certain cells of the register. • The input to the encoder is either the unencoded sequence (for non-recursive codes) or the unencoded sequence added, modulo 2, to the values of some of the register's cells (for recursive codes). • Convolutional codes can be systematic or non-systematic. • Systematic codes: the unencoded sequence is part of the output sequence; they are almost always recursive. • Non-recursive codes are almost always non-systematic.

  22. Convolutional Encoders • The combination of register cells that forms one of the output streams (or that is added to the input stream for recursive codes) is defined by a generator polynomial. • If m is the maximum degree of the polynomials forming a code, then K = m + 1 is the constraint length of the code. • E.g. the polynomials of Figure 1: g1(z) = 1 + z + z^2 + z^3 + z^6, g2(z) = 1 + z^2 + z^3 + z^5 + z^6. Figure 1: A standard NASA convolutional encoder with polynomials (171, 133).

  23. Convolutional Encoders: Example 1 • Example: g1(z) = 1 + z + z^2 + z^3 + z^6, g2(z) = 1 + z^2 + z^3 + z^5 + z^6. • The code rate is the reciprocal of the number of output polynomials. • For the sake of clarity, we restrict ourselves here to codes with rate R = 1/2; the decoding procedure for other codes is similar. • Encoder polynomials are usually written in octal notation; for the example above, binary 1111001 = octal 171 and binary 1011011 = octal 133. • The constraint length of this code is 7.
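A sketch of such a rate-1/2 encoder in Python (illustrative, not from the slides; the function name and the bit-ordering convention are assumptions). The register keeps the newest bit in the most significant position so the octal generators 171 and 133 can be used as tap masks directly:

```python
def conv_encode(bits, g1=0o171, g2=0o133, K=7):
    """Non-recursive rate-1/2 convolutional encoder, constraint length K."""
    reg = 0
    out = []
    for b in bits:
        reg = (reg >> 1) | (b << (K - 1))         # shift the new bit in
        out.append(bin(reg & g1).count("1") % 2)  # mod-2 sum of the g1 taps
        out.append(bin(reg & g2).count("1") % 2)  # mod-2 sum of the g2 taps
    return out

print(conv_encode([1, 0, 1, 1, 0]))   # two output bits per input bit
```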

  24. Convolutional Encoder: Example 2 • An example of a recursive convolutional encoder is shown in Figure 2. Figure 2: A recursive convolutional encoder.

  25. Trellis Diagram • A convolutional encoder is often viewed as a finite state machine: each state corresponds to some value of the encoder's register. • Given the input bit value, from a certain state the encoder can move to one of two other states. • A solid line = input 0, a dotted line = input 1 (the rightmost bit is the newest one). • Any valid sequence from the encoder's output can be represented as a path on the trellis diagram; one possible path is shown in red as an example. Figure 3: A trellis diagram corresponding to the encoder of Figure 2.

  26. Trellis Diagram • Each state transition on the diagram corresponds to a pair of output bits. • There are only 2 allowed transitions for every state (2 allowed pairs of output bits, and the 2 other pairs are forbidden) • If an error occurs, it is very likely that the receiver will get a set of forbidden pairs, which don't create a path on the trellis diagram. • So, the task of the decoder is to find a path on the trellis diagram which is the closest match to the received sequence.

  27. Viterbi Algorithm • A convolutional code is decoded by finding the sequence of input bits that is most likely to have produced the observed sequence of output bits (which includes any errors). • The Viterbi algorithm reconstructs this maximum-likelihood path through the trellis for a given received sequence. • The input sequence requiring the fewest errors to explain the received sequence is the most likely message.
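A compact hard-decision Viterbi sketch in Python (illustrative, not from the slides; to keep the trellis small it uses the well-known K = 3 code with generators 7 and 5 octal rather than the K = 7 code above, and follows the same register convention as the encoder sketched earlier):

```python
def viterbi_decode(received, g1=0o7, g2=0o5, K=3):
    """received: flattened list of output bit pairs from a rate-1/2 encoder.
    Returns the input bit sequence whose encoding disagrees least with it."""
    n_states = 1 << (K - 1)
    INF = float("inf")
    metric = [0] + [INF] * (n_states - 1)      # encoder starts in state 0
    paths = [[] for _ in range(n_states)]
    for i in range(0, len(received), 2):
        r = received[i:i + 2]
        new_metric = [INF] * n_states
        new_paths = [[] for _ in range(n_states)]
        for s in range(n_states):
            if metric[s] == INF:
                continue
            for bit in (0, 1):
                reg = (bit << (K - 1)) | s     # new bit enters at the MSB
                out = [bin(reg & g1).count("1") % 2,
                       bin(reg & g2).count("1") % 2]
                cost = metric[s] + sum(a != b for a, b in zip(out, r))
                nxt = reg >> 1                 # next encoder state
                if cost < new_metric[nxt]:     # keep the survivor path
                    new_metric[nxt] = cost
                    new_paths[nxt] = paths[s] + [bit]
        metric, paths = new_metric, new_paths
    return paths[min(range(n_states), key=lambda s: metric[s])]

# Output of the (7, 5), K = 3 encoder for the message 1 0 1 1 0 0,
# with one bit error injected into the third output bit:
coded = [1, 1, 1, 0, 0, 0, 0, 1, 0, 1, 1, 1]
coded[2] ^= 1
print(viterbi_decode(coded))   # -> [1, 0, 1, 1, 0, 0]
```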

  28. 3. Error Detection & Correction: Reed-Solomon • Like Hamming codes, Reed-Solomon codes are linear block codes, and they are often systematic too. • Unlike Hamming codes, which operate on individual bits, Reed-Solomon codes operate on m-bit symbols. • They are based on the fact that every polynomial of degree n is uniquely determined by n + 1 points. • Example: • ax + b is determined by two points; extra points on the same line are redundant, which is helpful for error correction. • Two data points define a line; we send those two data points plus two check points on the same line. • If one of the points is received in error, we can still recover the data points by fitting a line to the received points: three points will lie on the line, and the one erroneous point will not. • By finding the line we have corrected the error.
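A toy Python sketch of the line example (illustrative only; the function name is my own, and real Reed-Solomon codes work with higher-degree polynomials over finite fields rather than rational arithmetic):

```python
from fractions import Fraction
from itertools import combinations

def recover_line(points):
    """points: four (x, y) pairs sent on one line (distinct x values), with
    at most one point corrupted.  Return (slope, intercept) of the line that
    at least three of the four points still agree on."""
    for (x1, y1), (x2, y2) in combinations(points, 2):
        a = Fraction(y2 - y1, x2 - x1)          # candidate slope
        b = y1 - a * x1                         # candidate intercept
        if sum(y == a * x + b for x, y in points) >= 3:
            return a, b
    return None

# Line y = 2x + 1 sampled at x = 0..3, with the point at x = 2 corrupted:
received = [(0, 1), (1, 3), (2, 9), (3, 7)]
print(recover_line(received))                   # -> slope 2, intercept 1
```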

  29. Error-Detecting Codes Linear, systematic block codes • Parity. • Checksums. • Cyclic Redundancy Checks (CRCs).

  30. 1. Parity Bits • Idea: add an extra bit to keep the number of 1s even. • Example: 7-bit ASCII characters + 1 parity bit. • Detects any single-bit error (in fact any odd number of bit errors), but no 2-bit errors. • Not reliable against burst errors. • Examples (even parity bit appended): 0101001 1, 1101001 0, 1011110 1, 0001110 1, 0110100 1
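A one-line Python sketch (illustrative) of appending an even-parity bit:

```python
def add_even_parity(bits):
    """Append a bit so that the total number of 1s is even."""
    return bits + [sum(bits) % 2]

print(add_even_parity([0, 1, 0, 1, 0, 0, 1]))   # -> [0, 1, 0, 1, 0, 0, 1, 1]
```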

  31. Two Dimensional Parity • Can detect all 1-, 2-, and 3-bit errors, and some 4-bit errors. • 14% overhead. • Example (the last bit of each row is the parity bit for that row; the bottom row is the parity byte, holding a parity bit for each column, and its final bit is the parity bit for the parity byte):
  0101001 1
  1101001 0
  1011110 1
  0001110 1
  0110100 1
  1011111 0
  1111011 0
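A short sketch of the two-dimensional scheme in Python (illustrative; the function name is my own):

```python
def two_d_parity(rows):
    """rows: equal-length lists of bits.  Append an even-parity bit to each
    row, then add a final parity byte computed over every column."""
    with_row_parity = [r + [sum(r) % 2] for r in rows]
    parity_byte = [sum(col) % 2 for col in zip(*with_row_parity)]
    return with_row_parity + [parity_byte]

data = [[0,1,0,1,0,0,1], [1,1,0,1,0,0,1], [1,0,1,1,1,1,0],
        [0,0,0,1,1,1,0], [0,1,1,0,1,0,0], [1,0,1,1,1,1,1]]
for row in two_d_parity(data):
    print(row)   # last column = row parities, last row = column parities
```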

  32. 2. Checksums • Idea: • Add up the bytes in the data • Include the sum in the frame (frame layout: START | Data | Checksum | END) • Use ones-complement arithmetic • Lower overhead than parity: 16 bits per frame • But not very resilient to errors. Why? (e.g. two errors that offset each other, or two reordered words, leave the sum unchanged) • Used in UDP, TCP, and IP • Example: 0101001 + 1101001 = 10010010
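A sketch of the 16-bit ones-complement (Internet) checksum in Python (illustrative, in the style of RFC 1071; the function name is my own):

```python
def internet_checksum(data: bytes) -> int:
    """16-bit ones-complement sum of 16-bit words, carries folded back in."""
    if len(data) % 2:
        data += b"\x00"                           # pad to whole 16-bit words
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]
        total = (total & 0xFFFF) + (total >> 16)  # end-around carry
    return ~total & 0xFFFF                        # complement of the sum

print(hex(internet_checksum(b"hello world")))
```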

  33. 3. Cyclic Redundancy Check (CRC) • Uses polynomial arithmetic over a finite field to compute a semi-unique value for a given message. • In a cyclic code, rotating a codeword always results in another codeword. • Much better error-detection performance than the previous approaches. • Fixed-size overhead per frame (usually 32 bits). • Quick to implement in hardware. • Only a 1 in 2^32 chance of missing an error with a 32-bit CRC.
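A bit-by-bit CRC sketch in Python (illustrative; it shows the modulo-2 long division with the IEEE CRC-32 generator, but it does not use the reflected bit ordering of the common Ethernet/zlib CRC-32, so its output will not match zlib.crc32):

```python
def crc32_long_division(data: bytes, poly: int = 0x04C11DB7) -> int:
    """Modulo-2 long division of the message by the generator polynomial."""
    crc = 0xFFFFFFFF                       # common choice of initial value
    for byte in data:
        crc ^= byte << 24                  # bring the next byte into the top
        for _ in range(8):
            if crc & 0x80000000:           # top bit set: subtract (XOR) poly
                crc = ((crc << 1) ^ poly) & 0xFFFFFFFF
            else:
                crc = (crc << 1) & 0xFFFFFFFF
    return crc ^ 0xFFFFFFFF                # final inversion

print(hex(crc32_long_division(b"hello world")))
```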

  34. CRC Encoder/Decoder

  35. Cyclic Redundancy Check (CRC) Example calculation of the CRC
