
Data Link Layer, CSE 434/598, Spring 2001 (presentation transcript)





Data Link Layer
CSE 434/598
Spring 2001

Data Link Layer

  • Bridges between the Network Layer and the Physical Layer

    • Refer to Figure 3-1 and the OSI stack. Virtual connection between the two sides’ data link layers...

    • Physical layer provides the capability of bit-transmission/reception

      • Data Link layer will simply use them to transmit data

      • Why is this a big deal? The source node simply puts bits out on the channel, and the destination receives them...

      • Issues: objectives such as reliability, packetizing, and enhanced efficiency; physical media features such as noise, bit errors, finite data rate, non-zero propagation delay, ...

  • Functions of the Data Link layer

    • Well-defined service interface to the Network layer

    • Grouping of Physical layer’s bits into frames, i.e., packetizing

    • Accommodate transmission errors, and flow control (between Source and Destination)

Connection Services

  • Provides a logical connection over the physical media

  • Three types of connections:

    • Unacknowledged connectionless

    • Acknowledged connectionless

    • Acknowledged connection-oriented

      • (Question: why isn’t the fourth combination, “Unacknowledged connection-oriented”, considered?)

  • Unacknowledged connectionless

    • Independent frames sent to the destination

    • No acknowledgment (neither per frame, nor per the entire message)

      • Cannot recover from lost frames, will not even know about frame loss !!

    • No connection setup (prior to message communication), no connection release (beyond the message delivery)

Connection Services (contd...)

  • Unacknowledged connectionless

    • Source node simply emits the series of frames... (without any care whether they arrive or not)

    • Acceptable for media with very low error rates (e.g., optical fiber)

      • Also for applications where conducting Ackn is infeasible (due to delay)

    • Reliability can be augmented at higher OSI layers, e.g., Transport layer

  • Where must we use acknowledgement services?

    • Acknowledgement-mechanisms are needed for reliable data transfer

    • Ackn is typically provided at a layer where the probability of failure is moderate

      • Neither too high, else the service will simply keep on Ackn-ing and re-transmitting

      • Nor too low, else the service will incur the (unnecessary) overhead of Ackn

    • For noisy media: Ackn at Data Link layer (as well as in the higher layers)

    • For more reliable media: Start ackn-mechanisms at Transport layer...

Acknowledged Services

  • Acknowledged Connectionless

    • No logical connection established, prior to data transmission

    • However, each frame is acknowledged individually

      • If a frame does not arrive within a time limit ==> re-transmit (Question: what if a frame is received multiple times?)

    • Good for wireless media, i.e., noisy channels

      • The burden of Acknowledgement is acceptable, as without a minimum level of reliability from this layer, the upper layers will simply keep re-transmitting the same message for a very long (if not, infinite) time

      • Work out a typical example...

    • Not good for more reliable media

      • Too much overhead, since a minimum level of reliability is already built-in

  • Acknowledgment: per how many bits of information?

    • Too few ==> do at Physical media

    • Modest ==> do at Data Link Layer; Large ==> at the Transport Layer...

Acknowledged Connection-Oriented

  • Connection establishment

    • Source and destination nodes setup a logical channel, i.e., a form of virtual path between them

    • Before any data of the message is transmitted

    • Acquire the buffers, path variables and other resources necessary for the path

  • Data transmission

    • Frames are acknowledged

    • Frames are numbered, and guaranteed to be received once only

    • Provides a reliable bit-stream to the Network Layer

  • Connection release

    • Free up the variables, buffer space etc. for the connection

  • Complex message delivery services (e.g., multihop routing) ==>Connection-oriented services with Acknowledgments are preferred


  • Basic element for provision of Acknowledgement

    • Entire message - too long to wait for an Acknowledgement

    • Requires the message to be fragmented ==> packet or frame concept

    • There are several other reasons for creating frames... (e.g., self-routing, connectionless providers, alternate path finders, ...)

  • How to create frames ?

    • Break the message bit stream into fragments

      • Separated by time pauses ?

        • Too unreliable, or even demanding on the source node

      • Length count

      • Start/End character sequence

      • Start/End bit sequence

      • Physical layer coding

Framing (contd...)

  • Framing by character count

    • The character count, an integer giving the length of the frame, is placed at the head of the frame (refer Figure 3-3a)

    • Highly scalable in frame sizes

    • However, vulnerable to errors

      • If the length field, i.e., the integer denoting the #characters in the frame, is corrupted

      • Not only is the current frame incorrectly delimited

      • But the subsequent frames will also be confused (refer Figure 3-3b)

  • Framing by special character sequence (Start, as well as Stop)

    • Specially coded Start character sequence, and Stop character sequence

    • Not as vulnerable to error as above (error cannot affect non-local frames)

    • Difficulty: if parts of the bit-stream coincide with the special Start/Stop codes

      • Solution: if so, repeat the special code once more (called Character Stuffing)

      • Question: give an analogy where else you might have seen similar stuffing!
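The character-stuffing rule above (repeat the special code once more whenever it appears inside the data) can be sketched as follows; the choice of 0x7E as the special character is an illustrative assumption, not something the slides fix:

```python
FLAG = 0x7E  # hypothetical special Start/Stop character


def char_stuff(payload: bytes) -> bytes:
    """Double every occurrence of FLAG inside the payload (character stuffing)."""
    out = bytearray()
    for b in payload:
        out.append(b)
        if b == FLAG:
            out.append(FLAG)  # repeat the special code once more
    return bytes(out)


def char_unstuff(stuffed: bytes) -> bytes:
    """Collapse each doubled FLAG back to a single character."""
    out = bytearray()
    i = 0
    while i < len(stuffed):
        out.append(stuffed[i])
        if stuffed[i] == FLAG:
            i += 1  # skip the duplicate
        i += 1
    return bytes(out)
```

One familiar analogy for the question above: the same doubling trick is used to escape a quote character inside a quoted field (e.g., `""` inside a CSV field).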

Framing (contd...)

  • Character Framing: difficulty with binary data, e.g., any data other than 8-bit ASCII format

  • Solution: Bit-level Framing

    • Start/Stop bit pattern: A special 8-bit flag pattern (e.g., “01111110”)

    • If the data bit-stream happens to include the same bit-sequence:

      • Bit-stuffing: insert a “0” after the 5-th consecutive “1”

      • Analogous to character stuffing, except at the bit-level (Figure 3-5)
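A minimal sketch of the bit-stuffing rule (insert a “0” after every run of five consecutive “1”s), operating on '0'/'1' strings for readability:

```python
FLAG = "01111110"  # the special Start/Stop bit pattern from the slide


def bit_stuff(bits: str) -> str:
    """Insert a '0' after every run of five consecutive '1's in the payload."""
    out, run = [], 0
    for b in bits:
        out.append(b)
        if b == "1":
            run += 1
            if run == 5:
                out.append("0")  # stuffed bit
                run = 0
        else:
            run = 0
    return "".join(out)


def bit_unstuff(bits: str) -> str:
    """Remove the '0' that follows every run of five '1's."""
    out, run, i = [], 0, 0
    while i < len(bits):
        b = bits[i]
        out.append(b)
        if b == "1":
            run += 1
            if run == 5:
                i += 1  # skip the stuffed '0'
                run = 0
        else:
            run = 0
        i += 1
    return "".join(out)
```

After stuffing, the payload can never contain six consecutive '1's, so the flag pattern is unambiguous on the wire.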

  • Media coding based frames

    • Available for non-binary media, where codes other than 1/0 are available(eqv: use 2 bits for transmission of a 1-bit data)

    • Example: 11 for bit = 1, and 00 for bit = 0

      • Then, use 10 for Start bit, and 01 for Stop bit

      • Useful for added reliability in transmission...

Error Control

  • Goal: All the frames must

    • Reach the destination

    • Reach once and only once

    • Reach in order (otherwise, may need a high-speed sorting)

  • Solution approaches:

    • Feedback, usually Acknowledgement (both positive and negative)

      • If a frame is not received, sender will re-transmit

    • Time-out, to indicate a completely vanished frame and/or Ackn

      • Sender waits a max-time, and if neither +ve nor -ve Ackn arrives ==> re-transmit the frame

      • Accommodates total bit-drop situations...

    • Frame sequence numbers

      • Due to re-transmission, and/or time-out ==> a frame may arrive at the destination more than once. Frame numbers can allow discarding duplicate/stale frames...

Flow Control

  • Issue: speed mismatch between sender and receiver

  • Transient mismatch

    • Can accommodate using buffers

    • However, if continued for longer time, buffer will overflow

  • Consistent mismatch

    • Flow Control: Receiver must send rate-control signals to the sender

    • Protocol contains well-defined rules regarding when a sender may transmit the next frame

  • Question: discuss flow control in

    • Connectionless services

    • With or without Ackn transmissions

    • Connection-Oriented services

Coupling Model for the OSI Layers

  • Desired features:

    • Security

    • Real-time

    • Fault-tolerance, reliability

  • Question:

    • Which layers of the OSI stack should accommodate them

      • Explicitly, i.e., as a ‘must’

      • Optional, i.e., as a “may be”, or “desired with some parameters”...

      • Not necessary, i.e., doing so could even be harmful

Error Detection and Correction

  • Goal: from a continued bit-stream of transmission, find out (on-the-fly) whether some of the bits have errors (e.g., a 0 got changed to a 1, or vice versa)

    • Size of the bit-stream

    • Potential error bit position (i.e., which one among all the bits is wrong)

    • How many erroneous bits?

    • For each error, how much is the error? (difference between current and actual)

      • Being binary, we are fortunate here. An erroneous 0 becomes a 1, always

      • For r-ary systems, this is another uncertainty to deal with

  • Assumption

    • Bit errors occur independently, and randomly across the bit-stream

Single Error vs. Bursty Errors

  • Burst error: The erroneous bits are correlated in bit-positions

    • Reflects the physical reasons behind failures (e.g., a noise burst occurs, which affects a group of consecutive bits)

    • Is it any easier to detect (than single bit errors) ? Likely, yes.

    • Is it any easier to correct (than single bit errors) ? Certainly, not.

  • Burst error correction

    • Better to trash the entire bit stream, request the source to re-send

    • Too expensive to correct

  • Individual bit-error correction

    • Instead of re-send request to the source, correct upon the bit-stream

    • Approach: provide a few extra error-correcting bits

Error Detection vs. Correction?

  • Detection is preferred when

    • Error probability is too low (making steady overhead of an error correcting code unacceptable)

    • Bursty error takes place; error bits are localized (grouped)

  • Correction is preferred when

    • Re-transmission requests are not acceptable, due to reasons such as

      • Speed of information delivery

      • Simplex nature of the link, i.e., cannot send back a re-transmit request

    • Error bits are not localized, or clustered

      • A small, and evenly spread out number of errors for every block of data

      • Basis of error-correcting codes is retained (refer following discussion)

Idea of Error-Detecting and Correcting Code

  • Message of m bits, add r redundant or check bits

    • Total message is of n bits, with n = m + r

    • Information in the message: 1 among 2^m possibilities

    • Information available in the transmitted bit-stream: 1 in 2^n

    • Goal: Map this (1 in 2^m) information onto the (1 in 2^n) transmitted bit-stream, with large error-locator distances

  • Error locator distance

    • Minimum Hamming distance between the transmitted bit-streams (codewords) of any two distinct messages

    • If the distance is d+1, then we can detect up to d-bit failures

    • If the distance is 2d+1, then we can correct up to d-bit failures

Error Locator Distance

[Figure: a valid m-bit message codeword, and a particular transmission deviating from it due to error]

  • Not all the transmitted bit-streams are valid message streams

  • Error makes the transmitted bit-stream deviate from the valid positions

  • Correction: simply choose the nearest valid bit stream

  • Assumption

    • Error < threshold; else jumping to other slots likely

[Figure labels: correction distance; correction uses n-bit adjacency, not linear adjacency]


  • Single bit parity

    • Every alternate bit-pattern is a valid data

      • “Alternate” according to the Hamming distance, not integer count sequence

    • Example with even parity

      • 2-bit message data: (00, 01, 11, 10) become (000, 011, 110, 101) with parity

      • 3-bit transmitted data sequence

        • 000, 001, 011, 010, 110, 111, 101, 100

        • An error locator distance = 2

      • Try odd parity: (00, 01, 11, 10) become (001, 010, 111, 100)

        • 3-bit transmitted data sequence: 000, 001, 011, 010, 110, 111, 101, 100

    • Hence, single bit parity can do up to 1-bit error detection, and no correction

  • Background (assumed): Hamming distance and Hamming codes
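The even-parity example above can be reproduced in a few lines; a sketch, not the slide’s notation:

```python
def add_even_parity(word: str) -> str:
    """Append one parity bit so the total number of '1's is even."""
    return word + str(word.count("1") % 2)


def parity_ok(codeword: str) -> bool:
    """Detect (but not locate) any odd number of bit errors."""
    return codeword.count("1") % 2 == 0
```

This reproduces the slide’s even-parity table: (00, 01, 11, 10) become (000, 011, 110, 101), and flipping any single bit of a valid codeword makes the parity check fail.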

1-bit Correcting Codes

  • Message = m bits, total transmission = m+r = n bits

    • Total 2^m valid message bits

    • Desired error locator distance - at least 3

      • So that, when a 1-bit error occurs, the corrupted code remains uniquely closest to its own valid message stream’s code, and cannot be confused with (or equated to) another

    • Thus, every valid message bit pattern must leave out all its Hamming distance-1 codes unused

      • Each of the 2^m codes will require n Hamming distance-1 codes to be free

      • Each one of the 2^m codes thus occupies (n+1) words of the code space, and these sets must be disjoint

    • Hence, (n+1). 2^m <= 2^n, Or, (m+r+1) <= 2^r

    • For a given m, we can find a lower bound on r

      • m = 8, r >= 4

  • Question: extend this logic for k-bit correcting codes...
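The bound (m + r + 1) <= 2^r can be turned into a small search for the minimum number of check bits; a sketch:

```python
def min_check_bits(m: int) -> int:
    """Smallest r satisfying (m + r + 1) <= 2**r, the 1-bit-correcting bound."""
    r = 1
    while m + r + 1 > 2 ** r:
        r += 1
    return r
```

For m = 8 this gives r = 4, matching the slide; m = 4 gives r = 3, the familiar Hamming(7,4) code.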

Hamming 1-bit Correcting Code

  • n bits are positioned as 1, 2, ..., n

  • Bits that are at powers of 2 positions, i.e., 1, 2, 4, 8, .. are check bits (i.e., r) --- the other bit positions are data bits (i.e., m)

  • Bit-value at check bit position p (p = 2^i)

    • Will consider the bit-values (NB: data bits, eventually) from all bit positions j, where the binary expression of j includes a 2^i

    • Implement an odd (or, even) parity over all such positions j

    • Example: Check bit at position 2 will address all data bits which have a ‘1’ at the 2^1 bit position of their bit-id

    • Example: Data bit at position 11 (= 1+2+8) will contribute to the check bits at positions 1, 2 and 8

  • Continue...

Example (1-bit Correcting Code)

  • 7-bit data, i.e., m = 7

    • Hence, r = 4

    • NB: (m+r+1) <= 2^r

  • Total 11 bit code

    • 1, 2, 4 and 8-th positions are Check Bits

    • 3, 5, 6, 7, 9, 10, 11-th bit positions are data bits

    • Check bit at position 2^i implements an even parity over all the bit-positions that require i-th bit = 1 in their respective ids


Data: 1101101
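The worked table for this example did not survive in the transcript; the following sketch reconstructs it, placing the data bits of 1101101 at the non-power-of-2 positions and computing the even-parity check bits at positions 1, 2, 4, 8 as described above. The decoding side recomputes the parities; their pattern spells out the position of a single erroneous bit:

```python
def hamming_encode(data: str) -> str:
    """Data bits at non-power-of-2 positions 1..n; even-parity check
    bits at positions 1, 2, 4, 8, ..."""
    m = len(data)
    r = 1
    while m + r + 1 > 2 ** r:  # the (m + r + 1) <= 2^r bound
        r += 1
    n = m + r
    code = [0] * (n + 1)                  # 1-indexed positions
    it = iter(data)
    for pos in range(1, n + 1):
        if pos & (pos - 1):               # not a power of two -> data bit
            code[pos] = int(next(it))
    for i in range(r):
        p = 2 ** i
        # even parity over every position whose binary id includes 2^i
        code[p] = sum(code[j] for j in range(1, n + 1) if j & p) % 2
    return "".join(map(str, code[1:]))


def hamming_correct(code: str) -> str:
    """Recompute the parities; their sum (as position weights) locates
    the single erroneous bit, which is then flipped."""
    bits = [0] + [int(b) for b in code]
    n = len(code)
    syndrome, p = 0, 1
    while p <= n:
        if sum(bits[j] for j in range(1, n + 1) if j & p) % 2:
            syndrome += p
        p *= 2
    if syndrome:
        bits[syndrome] ^= 1               # flip the erroneous bit
    return "".join(map(str, bits[1:]))
```

For the data 1101101 this yields the 11-bit codeword 11101010101, and flipping any single bit of it is corrected back.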

  • Question: explain why this approach works

    • Check bit at the 2^i position cares about data bits from positions that require bit-i = 1 in their id --- what’s the catch? Why does it work?

    • Construct the mathematics, i.e., the bit-arithmetic behind this

  • Special Project 2

    • Due in 2 weeks, turn in an explanation of the above

    • Assumption: no consulting the original paper (the answer/explanation should be your own!)

    • Credit: 2 absolute bonus points

      • Thus, if you do this project (correctly!), and do everything else, you may score up to 102 (out of 100)

    • Can be useful for borderline grades...

Bursty Error

  • Significantly long bit-stream gets corrupted

  • Naive approach

    • Devise a correcting code to correct up to 100s or 1000s of erroneous bits

    • Too much bit overhead. It will be cheaper to perhaps re-transmit

  • A better approach

    • 2D structuring of the bit-stream

    • Bits that are temporal-adjacents get distributed to become far apart in transmission, i.e., not transmission-adjacents

    • Error-correcting codes are designed per temporal adjacency

    • A bursty error in transmission will (most likely) affect a small number of bits within each temporal code-block

    • Concern: delay of the individual block of bits
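A sketch of the 2D idea: bits are written row by row (temporal order, which is what the per-row correcting code protects) but transmitted column by column, so a contiguous burst on the channel is spread across many rows:

```python
def interleave(bits: str, rows: int, cols: int) -> str:
    """Write the stream row by row (temporal order), transmit column by
    column, so transmission-adjacent bits come from different rows."""
    assert len(bits) == rows * cols
    grid = [bits[r * cols:(r + 1) * cols] for r in range(rows)]
    return "".join(grid[r][c] for c in range(cols) for r in range(rows))


def deinterleave(bits: str, rows: int, cols: int) -> str:
    """Inverse mapping: interleaving is a transpose, so swap the dimensions."""
    return interleave(bits, cols, rows)
```

With a rows x cols block, a burst of up to `rows` consecutive transmitted bits lands in `rows` different rows, i.e., at most one error per per-row code-block; the price is that a whole block must be buffered before transmission (the delay concern above).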

Bursty Error Handling using 2D Data Ordering

  • Design question: how to associate temporal adjacencies with transmission adjacencies?

  • Objective: Given a timing deadline, and a maximum bursty error length, how to arrive at the 2D data structure?

[Figure: 2D grid; one axis = adjacency per correcting code, the other axis = adjacency per transmission sequence]

Correction vs. re-Transmission

  • Timeliness

    • Re-transmission likely to be slower

    • Correcting codes may become slower too, if too many overhead bits are involved

  • Traffic overhead

    • Re-transmission imposes a conditional overhead

    • Correcting codes impose a constant overhead

  • Types of errors handled

    • Re-transmission can handle bursty errors

    • Correcting codes may be able to handle bursty errors, but typically designed for small number of isolated errors

Error Detecting Code

  • Correcting code is a must in some designs

    • e.g., simplex channel (thus, no room for Ackn & re-transmission)

  • Otherwise, typical design choice: error detection and re-transmission

    • More effective for fewer bit errors, i.e., low probability of failure

    • Traditional error detecting approaches

      • Parity - single bit (even, or odd)

        • Essentially, implements a (#1’s in the bit stream) mod 2 = 0 policy

      • Multiple (r) bit parity

        • Implements (#1’s in the bit stream) mod (2^r) = 0

      • Checksum

        • Row-column organization of the data bits

        • Checksum implements parity within each row

Other Error Detection Approaches

  • Polynomial codes (CRC, Cyclic Redundancy Check)

    • Sender and receiver must agree upon a degree-r generator polynomial, G(x)

    • Typical G(x): x^16 + x^12 + x^5 + 1; or, x^16 + x^15 + x^2 + 1; ...

      • Corresponding bit patterns: co-efficient bit-string for the Polynomial

    • Checksum computation (message = M(x))

      • Attach r bits, all equal to ‘0’, at the LSB end of M(x)

      • Divide [ M (x) 000...0 ] by G(x); use modulo-2 division

      • Subtract the remainder from [ M (x) 000...0 ] ===> this is the Check-summed frame to send

    • Receiver end

      • Divide (modulo-2) the received frame by G(x). If the remainder != 0, it’s an error.
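The three steps above can be sketched directly with modulo-2 (XOR) long division over '0'/'1' strings; the generator 10011 (x^4 + x + 1) and the message below are illustrative choices, not taken from the slides:

```python
def crc_remainder(bits: str, gen: str) -> str:
    """Append r zero bits, then do modulo-2 (XOR) long division by gen;
    return the r-bit remainder."""
    r = len(gen) - 1
    work = [int(b) for b in bits + "0" * r]
    g = [int(b) for b in gen]
    for i in range(len(bits)):
        if work[i]:                       # leading 1 -> subtract (XOR) G
            for j in range(len(gen)):
                work[i + j] ^= g[j]
    return "".join(map(str, work[-r:]))


def crc_frame(message: str, gen: str) -> str:
    """'Subtracting' the remainder modulo-2 amounts to appending it
    as the check bits of the frame."""
    return message + crc_remainder(message, gen)


def crc_check(frame: str, gen: str) -> bool:
    """Receiver divides the whole frame; remainder 0 => no detected error."""
    r = len(gen) - 1
    work = [int(b) for b in frame]
    g = [int(b) for b in gen]
    for i in range(len(frame) - r):
        if work[i]:
            for j in range(len(gen)):
                work[i + j] ^= g[j]
    return not any(work[-r:])
```

Any single-bit corruption of the transmitted frame leaves a non-zero remainder at the receiver, since a generator with two or more terms never divides x^i.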

Types of Errors Detected

  • Single bit errors

    • Remainder (after dividing by G(x)) = x^i, with i-th bit as error

    • If G(x) contains two or more terms, it will never divide x^i

    • Hence, all single bit errors will be detected

  • Two, independent, bit errors

    • Here, E(x) = x^i + x^j = x^j ( x^{i-j} + 1 )

      • G(x) will not divide (x^i + x^j), if a sufficient condition holds that

        • G(x) does not divide (x^k +1), for all k values from 1 to max. frame length

        • Example: G(x) = x^15 + x^14 + 1 does not divide any (x^k + 1) for k <= 32,768

  • Odd number of bit errors (e.g., 3 errors, 5 errors, 7 errors, ...)

    • Known result: A polynomial with an odd #terms is not divisible by (x+1), modulo 2

    • Thus, select G(x) such that (x+1) is a factor of it

    • Then, G(x) will never divide a polynomial with odd #terms

Features and Why Does It Work?

  • Features

    • Detects all single and double errors, all errors with an odd #bits, most burst errors, ...

  • Principle of division and checking remainder = 0 or not

  • Analogy: the multi-dimensional circular dial approach

    • Consecutive “correct” positions in the dial refers to the codes which will yield a remainder = 0, when divided by G(x)

    • NB: a special case is with x=2, i.e., G(x) = numeric value for the bit-string of G(x)

    • Division is really one (clean) way to select 1-correct, and (q-1)-incorrect positions; if we are dividing by q

Unrestricted Simplex Protocol

  • Data transmitted in one direction only

  • Both the transmitting and receiving Network Layers are always ready

    • Assume large enough buffer space

  • Sender’s mode

    • Infinite loop, pumping out data onto the link as fast as possible

      • Fetch a packet from the upper layer (i.e., sender user)

      • Construct a frame, putting the data and a header

      • Transmit the frame onto the link

    • No ackn involved

    • Frames are not assigned to any sequence numbers

Receiver of Unrestricted Simplex Channel

  • Receiver’s mode

    • Wait for the arrival of an (undamaged) frame

    • Receive the frame, remove header and extract the data bits

    • Propagate the data bits to the receiver user

  • Problems...

    • Sender’s rate can become much faster than what the receiver can receive in time

      • Eqv: assumes the line is as fast as the sender node can pour data into it

    • No provision for acknowledgement and error-handling

[State diagram: sender signals “Data ready”, receiver performs “Receive Data”]




Acknowledgement per Frame

  • Synchronous transmission

    • If the transmitter, receiver and the channel could agree on a common speed

    • Not applicable with widely varying delays @ the channel, ...

    • Tuning to the worst case speeds

      • Utilization falls off, and is a conservative estimate only

    • Other solution approach: provide “ready-for-next” type of feedback to sender

  • Control the sender, not to send the next frame if

    • the receiver is not ready yet

    • the transmission link is not free yet, ...

  • Receiver’s feedback to the Sender

    • After having received the current frame, send a reverse frame (dummy, eqv. Ack)

Stop and Go Protocol

  • Sender: must wait for an Ack before transmitting the next frame

  • Receiver: must send back a dummy-frame, as an Ackn, to the sender after receiving a frame

  • Features:

    • Rate control achieved

    • Half-duplex or better channel required

    • Overhead of the reverse frame’s transmission added to each forward packet’s journey


[State diagram: sender “Data ready” → frame; receiver “Receive Data” → Ack frame]


Protocol for Noisy Channels

  • Transmission line can be noisy with partial (or, total) failure

    • Frames can get damaged, or lost completely

    • Employ error detection approaches ==> find out if an error took place

  • What if ?

    • A negative Ackn. is sent when a failure occurs

      • The sender could re-transmit

      • Keep repeating until the packet is successfully transmitted

    • Ackn is sent only for successful (i.e., error free) transmissions

      • Faulty, or erroneous transmissions would not receive an Ackn-back

      • Implement a time-out, and if no Ackn is received ==> re-transmit

  • Problem: does not prepare for missing Ackn signals

Sequencing through Frames

  • Error in Acknowledgement signal

    • Data (i.e., forward) transmission is error free

    • Receiver (after computing Checksum etc. and finding no error) sends Ackn

    • Reverse channel is noisy

      • the Ackn signal is corrupted (but reaches the sender)

      • Or, the Ackn signal is completely lost, i.e., dropped

    • Sender would re-transmit the frame (anticipating forward transmission error)

    • Receiver has multiple copies of the same frame ==> consistency problem

  • Frame sequence number

    • Assign a serial id# to each frame

    • Receiver can check, if multiple copies of the same frame arrived

      • In fact, a time-stamp along with frame id# can help in resolving staleness also

Range of Frame id#

  • How many bits for the Frame id#

    • As few as possible

    • What is the minimum?

      • Stop and Go model, implemented for each frame

        • Distinguish between a frame, and its immediate successor

        • If a frame #m is damaged, then Ackn-for-m would be missing

          • Sender will re-transmit frame #m

          • Distinction required between “m+1” and ‘m’

        • Thus, a 1-bit sequence number could be adequate

      • Provide a window of tolerance between transmission and reception

        • Not “stop and go” per frame level

        • “Go” as long as no more than the last W frames remain un-Ackn-ed

        • Frame id# should have sufficient number of bits to reflect the number W


  • Full Duplex Communication environment

    • Forward

      • Step 1-F: Node A (sender) sends data to node B (receiver)

      • Step 2-F: Node B sends Ack back to Node A

      • Node A implements a time-out on Ack signals, and re-transmits (using frame id#’s)

    • Reverse

      • Step 1-R: Node B (sender) sends data to node A (receiver)

      • Step 2-R: Node A sends Ack back to Node B

      • Node B implements a time-out on Ack signals, and re-transmits (using frame id#’s)

  • Piggybacking: Combine steps (2-F, and 1-R), likewise combine steps (1-F, and 2-R)

Features of Piggybacking

  • Advantages

    • Reduction in bandwidth usage

    • Ack signals get (almost always) a free ride on the reverse transmitting frames

  • Disadvantages

    • Delay in the Ack signals

      • has to necessarily wait for the reverse data transmission

    • How long should the sender wait for an Ack to arrive ?

      • When would the sender decide to re-transmit

      • How to implement the Time-Out policy

Sliding Window Protocol

  • Provides a mechanism for transmitter and receiver to remain synchronized, even when

    • frames get lost, mixed and/or corrupted

    • premature time-outs take place

  • Sending Window

    • sender node maintains a set of (consecutive ?) sequence numbers, corresponding to the frames that it can transmit at a given time

    • other frames cannot be transmitted, at that particular time

  • Receiver Window

    • maintains a set of sequence numbers of the frames that it can accept (other frame id#’s cannot be received at that time)

    • need not be the same size, or hold identical bounds, as the sender window

Sliding Window Protocol (contd...)

  • Sender’s Window

    • Represents frames sent out, but whose Ack has not yet been received

    • Current window: (low, ..., high)

      • A new packet is sent out

        • Updated window: (low, ..., high+1)

        • Maximum window size based upon the buffer space, and tolerable max-delay

      • An Ack is received

        • Updated window: (low+1, ..., high)

      • Circular window management

  • Receiver’s Window

    • Represents frames the receiver may accept

    • Any frame outside this range is discarded without any Ack
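A minimal sketch of the sender-side bookkeeping described above, with circular sequence numbers; the 3-bit id space (MAX_SEQ = 7) and the window size are illustrative assumptions:

```python
MAX_SEQ = 7  # hypothetical 3-bit sequence numbers: ids 0..7, managed circularly


class SenderWindow:
    """Tracks frames sent out but not yet acknowledged: (low, ..., high)."""

    def __init__(self, max_size: int):
        assert max_size <= MAX_SEQ     # keeps 'empty' vs 'full' unambiguous
        self.low = 0                   # oldest unacknowledged frame id
        self.next = 0                  # id the next outgoing frame will get
        self.max_size = max_size

    def size(self) -> int:
        return (self.next - self.low) % (MAX_SEQ + 1)

    def send(self) -> int:
        """A new packet is sent out: window becomes (low, ..., high+1)."""
        assert self.size() < self.max_size, "window full; must wait for an Ack"
        seq = self.next
        self.next = (self.next + 1) % (MAX_SEQ + 1)
        return seq

    def ack(self):
        """An Ack for the oldest frame arrives: window becomes (low+1, ..., high)."""
        assert self.size() > 0
        self.low = (self.low + 1) % (MAX_SEQ + 1)
```

Keeping the maximum window size at or below MAX_SEQ is what makes the circular size() computation unambiguous (a full window would otherwise be indistinguishable from an empty one).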

Window Attributes

  • Receiver’s Window

    • Current window: (low, ..., high)

    • A frame (id# = low) is received

      • Updated window: (low+1, ..., high+1)

      • Request generation of an Ack signal for the arrived frame

      • Circular window management

    • No other frame may be received (analyze this !!)

    • Receiver’s window always stays at the same size (unlike Sender)

    • Example: figure 3-12

  • A 1-bit Sliding Window Protocol

    • Sender transmits a frame, waits for Ack, and sends the next frame...

1-Bit Sliding Window Protocol

  • Sender’s window: maximum size = 1

  • Essentially, a Stop-and-Go protocol

    • Added features: frame id#, window synchronization

    • Refer Figure 3-13

    • Ignore the transmission time, i.e., basically assumes a small transmission delay

  • If the sender (node A) and receiver (node B) are in perfect synchronization

    • Case 1: Node B starts its first frame only after receiving node A’s first transmission

    • Case 2: Nodes A and B simultaneously initiate their transmissions

Synchronization in 1-bit Sliding Window Protocol

  • Case 1: every frame is accepted

    • Reason: B’s first transmission had waited until A’s first transmission reached B ==> B’s first transmission could bring with it the Ack for A’s first transmission

    • Otherwise, B’s first transmission would not bring with it A’s ACK

      • A will re-transmit, and keep re-transmitting until some packet from B brings with it an Ack for A’s first transmission

      • Leading to Case 2

  • Case 2: a large fraction of the transmissions are replicates

    • Refer Figure 3-14 (b)

      • Message format: (frame sequence #, ack for the frame with sequence #, frame or the actual data packet)

Added Features of Sliding Window

  • Staying synchronized even when frames get mixed

    • The windows of sender (as well as receiver) have strict sequential patterns...

  • Pre-mature time-outs cannot create infinite transmission loop

    • Without the window markers:

      • A pre-mature time-out can cause a re-transmission

      • Before the Ack of the (2nd) re-transmission arrives, another pre-mature time-out can happen ==> 3rd re-transmission

      • Infinitely repeating...

    • With the window markers:

      • Receiver node will update its window at the first reception

      • Subsequent receptions will simply be rejected

  • Sliding window protocols: much more resilient to pathological cases

Sliding Window Protocols with Non-Negligible Transmission Delay

  • Example: a 50 Kbps satellite channel with 500 msec round-trip propagation delay - transmit a 1000-bit packet

    • time 0 to 20 msec: the packet is being poured onto the channel

    • time 20 msec to 270 msec: forward journey to the satellite

    • time 270 msec to 520 msec: return journey to the ground station

    • Bandwidth utilization = 20 / 520 ==> about 4% !!

  • Rationale

    • Sender node had to wait for receiving an Ack of the current frame, before sending out the next frame...

    • Window size = 1

    • Approach for improving utilization: increase the Window size

Sliding with Larger Windows

  • Example: a 50 Kbps satellite channel with 500 msec round-trip propagation delay - transmit a 1000-bit packet

    • time 0 to 20 msec: the packet is being poured onto the channel

    • time 20 msec to 270 msec: forward journey to the satellite

    • time 270 msec to 520 msec: return journey to the ground station

    • But, now the sender does not wait until 520-th msec before transmitting the second frame, third frame and so on

      • At time units: 20, 40, 60, ..., 500, 520, 540, ... msecs the sender transmits successive packets

      • Ack1 arrives about when Packet26 has just been sent out

      • “26” is arrived at as 520/20, i.e., round-trip delay / single-packet transmission time

    • Bandwidth utilization reaching near 100%

Pipelining

  • Channel capacity = b bits/sec

  • Frame size = L bits

  • Round-trip propagation delay = R sec

  • Efficiency with Stop-and-Go protocol

    • (L/b) / [ (L/b) + R ] = L / ( L + bR )

  • If L < bR, efficiency will be less than 50%

  • In principle, Pipelining can always help

  • In practice, unless R is beyond a threshold, Pipelining isn’t worth the trouble
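The efficiency formula above, applied to the earlier satellite example (L = 1000 bits, b = 50 Kbps, R = 500 msec), reproduces both the roughly 4% stop-and-go utilization and the 26-frame window needed to keep the pipe full:

```python
def stop_and_go_efficiency(L: float, b: float, R: float) -> float:
    """Fraction of time spent transmitting new bits: (L/b) / (L/b + R)."""
    return L / (L + b * R)


# The satellite example from the earlier slide
L, b, R = 1000, 50_000, 0.5
u = stop_and_go_efficiency(L, b, R)      # 1/26, i.e., about 4%
frames_to_fill = (L / b + R) / (L / b)   # ~26 frames keep the pipe full
```

Since L (1000) is far below bR (25,000 bits, the "bandwidth-delay product"), stop-and-go efficiency is far below 50%, which is exactly the case where pipelining pays off.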

Pipelining with Unreliable Communication

  • What happens if an intermediate Packet is lost or corrupted ?

    • Sender               Receiver
      packet 1
      packet 2
      packet 3             recd Packet 1
      packet 4             recd Packet 2
      packet 5             recd Packet 3
      .....

  • Difficulties

    • Large number of succeeding frames, sent in pipelined format, will now arrive at the receiver with one (or, few) missing front packets

    • This is a problem for the Network Layer @ receiver, since it wants the packets/frames in order. Also, the receiver application will pause.

    • Problem for the sender, since it has to slide back...

Two Solutions

  • Problem Instance

    • A series of packets, 1, 2, 3, ..., N is received @ the Receiver node

    • Receiver finds that packet 1 is faulty

    • After a while, sender receives the negative Ack (or has a time-out, if no Ack is sent, or Ack is lost in reverse journey)

  • Solution 1: “go back N”

    • Receiver discards packets 2 thru N (although they were correct)

    • Sender will, sooner or later, re-transmit Packet 1

      • Receiver waits for the re-transmitted (hopefully, correct) Packet 1

    • Sender will follow up with Packets 2 through N again

    • Refer Figure 3-15(a)

    • Corresponds to a Receiver Window Size = 1
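The discard behaviour of "go back N" can be sketched from the receiver's side (a toy model, not Figure 3-15(a) itself; the sender's timeout and retransmission are simulated by simply replaying frames in the input list):

```python
def go_back_n_receiver(frames, expected_first=1):
    """Go-back-N receiver with window size 1: accept only the next in-sequence
    frame; silently discard everything else (even if it arrived intact)."""
    accepted, expected = [], expected_first
    for seq, ok in frames:            # (sequence number, arrived-uncorrupted flag)
        if ok and seq == expected:
            accepted.append(seq)
            expected += 1             # deliver to the Network Layer and advance
        # else: discard; the sender will time out and resend from `expected`
    return accepted

# Frame 1 corrupted; 2..4 arrive intact but are discarded; retransmission succeeds
stream = [(1, False), (2, True), (3, True), (4, True),
          (1, True), (2, True), (3, True), (4, True)]
print(go_back_n_receiver(stream))     # [1, 2, 3, 4]
```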

Two Solutions (contd...)

  • Performance concern in “go back N”

    • Why discard Packets 2 thru N, even if they were correctly transmitted ?

    • For simplicity, easier network management, ...

      • But, certainly not being performance conscious

      • A performance aware system might

        • Hold packets 2 thru N in some temporary buffer

        • Request the sender to re-transmit (a correct version of) Packet 1

        • Upon receiving Packet 1 (new copy), re-construct the packet sequence

          • Packet 1 (new), Packet 2 (old), Packet 3 (old), ..., Packet N(old), Packet N+1 (new), Packet N+2 (new), ...

        • This is basically a Selective Repeat, where only Packet 1 was repeated above

  • Solution 2: Selective Repeat

    • Store all the correct Packets, following the bad one

Selective Repeat

  • Selective Repeat

    • Refer Figure 3-15(b)

    • Store all the packets following the bad one

    • Wait until sender re-transmits a corrected copy of the bad one

    • Re-shuffle the packets and construct the original packet sequence

    • Corresponds to a window size > 1

  • Performance Concerns

    • Delay in receiving the packets, avg packet delay comparisons

    • What if multiple failures occur within the window

      • In Figure 3-15(b), what if Packet 5 is also a failure ?

    • Go through the pseudo-code in Figures 3-16 and 3-18
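A minimal sketch of the buffering logic described above (ours, not the book's pseudo-code; sequence numbers are unbounded and payloads are omitted):

```python
def selective_repeat_receiver(frames, expected_first=1):
    """Selective-repeat receiver: buffer correct out-of-order frames and
    deliver in sequence once the missing frame is retransmitted."""
    buffer, delivered, expected = {}, [], expected_first
    for seq, ok in frames:
        if not ok or seq < expected:
            continue                  # corrupted frame, or a stale duplicate
        buffer[seq] = True            # hold the frame (payload omitted in sketch)
        while expected in buffer:     # re-shuffle: deliver any in-order run
            delivered.append(expected)
            del buffer[expected]
            expected += 1
    return delivered

# Frame 1 corrupted; 2..4 are buffered; retransmitted 1 releases the whole run
stream = [(1, False), (2, True), (3, True), (4, True), (1, True)]
print(selective_repeat_receiver(stream))   # [1, 2, 3, 4]
```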

Protocol Verification

  • Objective

    • Given a protocol description, find out (with certainty) whether it works (correctness, performance, deadlock etc)

    • The protocol description may be given as algorithms, or programs

    • Needed due to the critical role the networking subsystem may play in your mission-critical system

      • Deterministic guarantee may be required for applications

      • Find out if the network does what it is supposed to, and nothing else

  • Approach

    • Input a protocol specification, where the specification can be in various formats

    • Apply an analysis technique over the specification to evolve the assertions to be verified

Protocol Specification

  • Specification using a program segment

    • pseudo code

    • structured diagram, flowchart

    • actual program, e.g., C++ code

  • Issues and tradeoff

    • how abstract, or compact the specification is ?

      • easy enough to understand, good enough for all the necessary details

    • what is the semantics to describe inter-node communication ?

      • using Send/Receive within the code

      • using CSP notation

    • how does one describe critical, but apparently hidden details

      • for example, buffer space, clock skew, likelihood of failure, ...

Verification: using an Abstraction

  • Protocol Verification

    • Similar to software testing, particularly if the protocol is described in software (i.e., actual code)

    • Expensive, needs to be extensive, and can seldom be perfectly comprehensive

      • Software engineering aspects in Testing, where you can test “most” things but not always 100% (unless the overhead of testing is excessive)

    • Solution: use an abstracted form of representation of the protocol

      • Test key features, or characteristics of the protocol using the abstracted version

        • Testing process will be easier, cheaper, and more conceptual

        • However, likely to be “less convincing”


[Slide diagram: Abstracted Protocol Representation ==> "Does it work correctly ?"]



Abstracted Protocol Specification

  • Abstraction Approaches

    • Use state machine, corresponding to the pseudo-code, flowchart, actual program

    • Use Petri-Nets, corresponding to the run-time snapshots of the protocol (complementary concept to FSM)

    • others...

  • Types of Assertions

    • Reachability

    • Deadlock

    • Buffer Overflow

    • Timing assertions, causality assertions, security assertions, ...

  • Mode of checking the assertions: Static, or Dynamic

    • Use Dynamic assertions when static check is too expensive (state-space graph)

Finite State Machine Verification

  • States

    • Each node (sender, receiver, intermediate repeater) has a state

    • Each channel has a state, defined by its content

    • The set of all states in the FSM

      • All combinations (cross product) of {node states}, {channel states}

      • 1-to-1 single hop communication:

        • (sender states) X (receiver states) X (link states)

      • 1-to-1 multi-hop communication:

        • (sender states) X (receiver states) X (link1 states) X (link2 states) X ... X (linkN states)

        • Question: what if intermediate nodes get involved ?

      • 1-to-many multihop communication:

        • Cross product of the states in Sender, each Destination, each Channel involved, each Intermediate Node if they get involved, ...

FSM Verification (contd...)

  • Transitions among the States, triggered by event(s)

    • Events at the Protocol Machines (i.e., sender, receiver)

      • data becomes ready to be sent out, the data is actually sent out, or the data reaches the destination, ...

    • Events at the channels

      • Insertion of data, delivery of data, loss of data (due to noise)

  • Starting state

    • When each of the Protocol Machines and the Channel States is initialized

  • Static assertion checking approaches

    • Mostly, graph theoretic - well known available algorithms for reachability, deadlock detection, event causality check, ...

  • Dynamic assertion checking approaches: selective graph pruning
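The static checks above can be sketched as a graph search over the composite (sender, receiver, channel) state space. The toy transition table below is an assumption for illustration; note how a lost frame with no modeled timeout shows up as a deadlock:

```python
from collections import deque

def reachable_states(start, transitions):
    """BFS over a composite protocol state graph.  `transitions` maps each
    (sender, receiver, channel) state to its possible successor states."""
    seen, queue, deadlocks = {start}, deque([start]), []
    while queue:
        s = queue.popleft()
        succs = transitions.get(s, [])
        if not succs:
            deadlocks.append(s)       # no enabled event: a deadlock
        for t in succs:
            if t not in seen:
                seen.add(t)
                queue.append(t)
    return seen, deadlocks

# Toy 1-to-1 single-hop model: states are (sender, receiver, channel) tuples
T = {
    ("ready", "wait", "empty"):  [("sent", "wait", "frame")],
    ("sent",  "wait", "frame"):  [("sent", "got",  "ack"),
                                  ("sent", "wait", "empty")],   # frame lost
    ("sent",  "got",  "ack"):    [("ready", "wait", "empty")],
    # ("sent", "wait", "empty") has no successors: lost frame, no timeout modeled
}
seen, dead = reachable_states(("ready", "wait", "empty"), T)
print(dead)   # [('sent', 'wait', 'empty')] -- the check catches the missing timeout
```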

Petri Net based Verification

  • Four basic elements of a Petri Net

    • Places, Transitions, Arcs and Tokens

    • Place: somewhat like a state, at which the system may reside at a given point in time. (Different parts of the system may simultaneously hold multiple states.)

      • Token: indicator of the system being in that State

        • Approach: somewhat like a run-time snapshot of a FSM

    • Transitions: migration from one Place to another Place

      • Arcs: individual constituents of a Transition

      • Logical (e.g., And, OR, ...) relationships between Arcs to constitute the Transitions

        • Added capability beyond FSM

  • Verification approach

    • Petri-Net representation of the Protocol

      • Messages and channels are captured easily in this, compared to FSM

    • PN analysis algorithms (essentially, Graph Analysis) - search for ‘Murata’
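A minimal sketch of the token/firing mechanics (place and transition names are made up for illustration). The AND over input arcs is the added capability beyond a plain FSM mentioned above:

```python
def enabled(marking, transition):
    """A transition may fire only when every input place holds a token
    (logical AND of its input arcs)."""
    inputs, _ = transition
    return all(marking.get(p, 0) >= 1 for p in inputs)

def fire(marking, transition):
    """Fire a transition: consume one token per input arc, produce one per output arc."""
    inputs, outputs = transition
    m = dict(marking)
    for p in inputs:
        m[p] -= 1
    for p in outputs:
        m[p] = m.get(p, 0) + 1
    return m

# Toy net: a sender may transmit only when it has a frame AND the channel is free
t_send = (["frame_ready", "channel_free"], ["in_transit"])
m0 = {"frame_ready": 1, "channel_free": 1}
m1 = fire(m0, t_send)
print(m1)   # {'frame_ready': 0, 'channel_free': 0, 'in_transit': 1}
```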

Example Data Link Layer - The ATM

  • ATM Layer

    • Does not explicitly state any physical layer characteristics

    • Typically supported over SONET, TAXI, ..., can be FDDI

  • ATM Cells

    • 5 + 48 bytes ==> 5 bytes is the Cell overhead

      • 4 bytes for VC identification, control information, ...

      • 1 byte as a checksum, which covers only the Header (not the actual Data)

        • An error in the Header is more serious -- causing wrong-destination delivery, ...

      • Correctness of the actual Data - left to the higher layers

        • Mostly, the physical media = optical

        • Too few bit errors to worry for the Data field

          • For many applications (video) of high-bandwidth, few bit errors may be acceptable

        • If something goes wrong in the Data field ==> upper layers can correct it
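The 1-byte header checksum is the HEC, a CRC-8 over the other four header bytes. The sketch below assumes the standard I.432 parameters (generator x^8 + x^2 + x + 1, result XORed with 0x55); it is illustrative, not taken from the slides:

```python
def atm_hec(header4):
    """CRC-8 of the first 4 ATM header bytes, MSB first, generator 0x07
    (x^8 + x^2 + x + 1), then XORed with the coset leader 0x55."""
    crc = 0
    for byte in header4:
        crc ^= byte
        for _ in range(8):
            crc = ((crc << 1) ^ 0x07) & 0xFF if crc & 0x80 else (crc << 1) & 0xFF
    return crc ^ 0x55

print(hex(atm_hec(bytes(4))))   # 0x55 -- the CRC of an all-zero header is 0
```

Because the HEC protects only the 4 header bytes, a corrupted header can be detected (and single-bit errors even corrected) without touching the 48-byte payload.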

ATM (contd..)

  • Cells include both Data cells, as well as network management Cells (OAM: Operations and Maintenance)

  • Cells can be sent out synchronously or asynchronously, depending on the lower layer

    • Synchronous transmission can end up inserting idle cells

  • Most typically, ATM is hosted over SONET

    • SONET provides wonderful built-in synchronization

    • SONET frame overhead: 10 bytes out of every 270 bytes (per row of an STM-1 frame)

      • Leading to a utilization = 26/27

      • ATM layer must match this. Approach: Every 27th cell is an OAM cell

    • Some popular data rates: OC-3 = 155.52 Mbps

      • OC-12 is typical today in most state-of-the-art ATM networks, like the ATDNet
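Putting the two overhead figures together gives a rough payload rate for ATM over OC-3 (a sketch using the 26/27 SONET fraction from the slide and the 48/53 cell fraction; the function name is ours):

```python
def atm_payload_rate_mbps(line_rate_mbps=155.52):
    """Approximate user-data rate for ATM carried over SONET OC-3."""
    sonet_payload = line_rate_mbps * 26 / 27   # 10 of every 270 bytes are SONET overhead
    return sonet_payload * 48 / 53             # 5 of every 53 cell bytes are header

print(f"{atm_payload_rate_mbps():.2f} Mbps")   # about 135.63 Mbps of user data
```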