CS640: Introduction to Computer Networks

Aditya Akella

Lecture 20: Queuing and Basics of QoS


Queuing Disciplines

  • Each router must implement some queuing discipline

    • Scheduling discipline

    • Drop policy

  • Queuing allocates both bandwidth and buffer space:

    • Bandwidth: which packet to serve (transmit) next

    • Buffer space: which packet to drop next (when required)

  • Queuing also affects latency


Typical Internet Queuing

  • FIFO + drop-tail

    • Simplest choice

    • Used widely in the Internet

    • FIFO: scheduling discipline

    • Drop-tail: drop policy

  • FIFO (first-in-first-out)

    • Implies single class of traffic, no priority

  • Drop-tail

    • Arriving packets get dropped when queue is full regardless of flow or importance


FIFO + Drop-tail Problems

  • Lock-out problem

    • Allows a few flows to monopolize the queue space

    • Send more, get more → no implicit policing

  • Full queues

    • TCP detects congestion from loss

    • Forces the network to have long-standing queues in steady state

    • Queueing delays – bad for time-sensitive traffic

    • Synchronization: end hosts react to same events

      • Full queue → empty → full → empty → …

  • Poor support for bursty traffic


Lock-out Problem

  • Priority queueing can solve some of these problems, but it has drawbacks:

    • Starvation

    • Determining priorities is hard

  • Simpler techniques: Random drop

    • Packet arriving when queue is full causes some random packet to be dropped

  • Drop front

    • On full queue, drop packet at head of queue

  • Random drop and drop front solve the lock-out problem but not the full-queues problem (see the sketch below)
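
To make the three drop policies concrete, here is a minimal Python sketch; the function names and the capacity of 8 packets are invented for illustration and are not from the lecture.

    import random
    from collections import deque

    CAPACITY = 8  # assumed queue capacity, in packets

    def enqueue_drop_tail(queue, pkt):
        # Drop-tail: the arriving packet is discarded if the queue is full.
        if len(queue) >= CAPACITY:
            return False
        queue.append(pkt)
        return True

    def enqueue_random_drop(queue, pkt):
        # Random drop: evict a randomly chosen queued packet to make room.
        if len(queue) >= CAPACITY:
            del queue[random.randrange(len(queue))]
        queue.append(pkt)
        return True

    def enqueue_drop_front(queue, pkt):
        # Drop front: discard the packet at the head of the queue to make room.
        if len(queue) >= CAPACITY:
            queue.popleft()
        queue.append(pkt)
        return True

    q = deque()
    for i in range(12):
        enqueue_drop_tail(q, f"pkt{i}")   # packets 8..11 are dropped
    print(len(q))                         # 8

With random drop or drop front, a flow that keeps the queue full is the likely victim, which removes the lock-out bias; the queue itself, however, can still sit full in steady state.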


Random Early Detection (RED)

  • Drop packets before queue becomes full (early drop)

  • Detect incipient congestion

  • Avoid window synchronization

    • Randomly mark packets

  • Random drop helps avoid bias against bursty traffic


RED Algorithm

  • Maintain running average of queue length

  • If avg < minth do nothing

    • Low queuing, send packets through

  • If avg > maxth, drop packet

    • Protection from misbehaving sources

  • Else mark the packet with probability roughly proportional to the average queue length (sketched after this list)

    • Notify sources of incipient congestion
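
Read as pseudocode, the slide above maps onto something like the following Python sketch; the EWMA weight and the minth/maxth/maxP values are assumed, and the probability ramp is a simplified linear form rather than the full RED algorithm.

    import random

    W_Q    = 0.002   # EWMA weight for the average queue length (assumed)
    MIN_TH = 5       # minth, in packets (assumed)
    MAX_TH = 15      # maxth, in packets (assumed)
    MAX_P  = 0.1     # mark probability as avg approaches maxth (assumed)

    avg = 0.0        # running average of the instantaneous queue length

    def on_arrival(queue_len):
        # Returns 'enqueue', 'mark', or 'drop' for the arriving packet.
        global avg
        avg = (1 - W_Q) * avg + W_Q * queue_len   # running (EWMA) average

        if avg < MIN_TH:
            return "enqueue"      # low queuing: do nothing special
        if avg > MAX_TH:
            return "drop"         # protection from misbehaving sources
        # Otherwise mark with probability proportional to how far avg is past minth
        p = MAX_P * (avg - MIN_TH) / (MAX_TH - MIN_TH)
        return "mark" if random.random() < p else "enqueue"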


RED Operation

[Figure: mark/drop probability P(drop) versus average queue length. P(drop) is 0 below minth, rises linearly to maxP at maxth, and jumps to 1.0 above maxth.]


Fair Queuing: Goals

  • How do you protect the most important packets?

    • How do you provide some isolation in general?

    • Simple priority queuing does not help

  • Two approaches:

    • Fair Queuing (sufficient by itself)

    • Leaky bucket (with other techniques which we will cover next class)

  • FQ key goal: Allocate resources “fairly”

    • Keep separate queue for each flow

  • Isolate ill-behaved users

  • Still achieve statistical muxing

    • One flow can fill entire pipe if no contenders

    • Work-conserving: the scheduler never idles the link if it has a packet to send


What is “Fairness”?

  • At what granularity?

    • Flows, connections, domains?

  • What if users have different RTTs/links/etc.?

    • TCP is “RTT-Fair”

      • BW inversely proportional to RTT of flow

    • Should they share a link equally or be TCP-fair?

  • Maximize fairness index?

    • Fairness = (Σ xi)² / (n · Σ xi²), where 0 < fairness ≤ 1
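
Spelled out, that is Jain's fairness index; a quick example (fairness_index is an invented helper name):

    def fairness_index(x):
        # (sum of x_i)^2 / (n * sum of x_i^2); equals 1.0 when all shares are equal
        n = len(x)
        return sum(x) ** 2 / (n * sum(v * v for v in x))

    print(fairness_index([10, 10, 10, 10]))  # 1.0  -> perfectly equal shares
    print(fairness_index([40, 0, 0, 0]))     # 0.25 -> one flow hogs the link (1/n)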


Max-min Fairness

  • Allocate users with “small” demands what they want; evenly divide the unused resources among the “big” users

  • Formally:

    • Resources allocated in order of increasing demand

    • No source gets resource share larger than its demand

    • Sources with unsatisfied demands get an equal share of the resource (see the sketch below)
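
The definition above translates directly into a progressive-filling computation for a single shared link; a small sketch, where max_min_fair and the example numbers are illustrative and not from the slides.

    def max_min_fair(capacity, demands):
        # Visit flows in order of increasing demand; give each the smaller of its
        # demand and an equal split of whatever capacity is still unallocated.
        alloc = [0.0] * len(demands)
        remaining = capacity
        order = sorted(range(len(demands)), key=lambda i: demands[i])
        for served, i in enumerate(order):
            share = remaining / (len(demands) - served)
            alloc[i] = min(demands[i], share)
            remaining -= alloc[i]
        return alloc

    # Capacity 10 with demands 2, 2.6, 4, 5 -> allocations ~[2, 2.6, 2.7, 2.7]
    print(max_min_fair(10, [2, 2.6, 4, 5]))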


Implementing Max-min Fairness

  • Use separate queues per flow

    • Round-robin scheduling across queues

  • Why not simple round robin at packet level?

    • Variable packet length → can get more service by sending bigger packets (see the example below)

  • Ideally: Bitwise round robin among all queues
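
A quick numerical illustration of why simple packet-level round robin falls short (the packet sizes here are made up): with one packet from each flow per round, the flow with larger packets simply gets more bytes through.

    # Per-packet round robin: one packet from each flow per round.
    SIZE_A, SIZE_B = 1500, 100      # bytes per packet for flows A and B (assumed)
    rounds = 1000
    bytes_a, bytes_b = rounds * SIZE_A, rounds * SIZE_B
    print(bytes_a / bytes_b)        # 15.0 -> flow A receives 15x the bandwidth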


Bit-by-bit RR Illustration

  • Not feasible to interleave bits on real networks

    • FQ simulates bit-by-bit RR


Bit-by-bit RR Simulation

  • Single flow: clock ticks when a bit is transmitted. For packet i:

    • Pi = length, Ai = arrival time, Si = begin transmit time, Fi = finish transmit time

    • Fi = Si + Pi = max(Fi-1, Ai) + Pi

  • Multiple flows: clock ticks when a bit from all active flows is transmitted → round number

    • Can calculate Fi for each packet if the number of flows is known at all times

      • Why do we need the flow count? → Ai must be expressed as a round number, and the round number advances at a rate that depends on the number of active flows → this can be complicated (see the sketch below)
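
In code, the per-flow finish-number update from this slide might look like the sketch below; compute_finish and the per-flow table are invented names, and the hard part, the round number R(t) at which a packet arrives, is simply taken as an input here.

    last_finish = {}   # flow id -> finish number of that flow's previous packet

    def compute_finish(flow, arrival_round, length_bits):
        # F_i = max(F_{i-1}, A_i) + P_i, with everything measured in round numbers.
        # arrival_round (A_i) is the round number at arrival; tracking it requires
        # knowing how many flows are active, which is what makes FQ complicated.
        prev = last_finish.get(flow, 0.0)
        f = max(prev, arrival_round) + length_bits
        last_finish[flow] = f
        return f

    print(compute_finish("flow-1", arrival_round=0, length_bits=8000))     # 8000
    print(compute_finish("flow-1", arrival_round=5000, length_bits=8000))  # 16000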


Fair Queuing

  • Mapping bit-by-bit schedule onto packet transmission schedule

  • Transmit the packet with the lowest Fi at any given time (see the sketch below)
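
Putting the last two slides together, a minimal per-flow-queue scheduler could look like the following; FairQueue is an invented name, and this is a simplified sketch that reuses the finish-number update above and always transmits the head-of-line packet with the smallest finish number.

    from collections import deque

    class FairQueue:
        def __init__(self):
            self.queues = {}       # flow id -> deque of (finish_number, packet)
            self.last_finish = {}  # flow id -> finish number of previous packet

        def enqueue(self, flow, packet, arrival_round, length_bits):
            # F_i = max(F_{i-1}, A_i) + P_i, as on the previous slide
            f = max(self.last_finish.get(flow, 0.0), arrival_round) + length_bits
            self.last_finish[flow] = f
            self.queues.setdefault(flow, deque()).append((f, packet))

        def dequeue(self):
            # Work-conserving: send the head-of-line packet with the smallest F_i;
            # return nothing only if every queue is empty.
            best_flow, best_f = None, None
            for flow, q in self.queues.items():
                if q and (best_f is None or q[0][0] < best_f):
                    best_flow, best_f = flow, q[0][0]
            if best_flow is None:
                return None
            return self.queues[best_flow].popleft()[1]

    fq = FairQueue()
    fq.enqueue("A", "a1", arrival_round=0, length_bits=8000)
    fq.enqueue("B", "b1", arrival_round=0, length_bits=800)
    print(fq.dequeue())   # 'b1' -- the smaller finish number goes first

For the WFQ variation mentioned on the next slide, the usual change is to divide length_bits by the flow's weight when computing the finish number.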


FQ Illustration

[Figure: packets from Flow 1 … Flow n are kept in per-flow queues at the input (I/P) and scheduled onto a single output (O/P).]

  • Variation: Weighted Fair Queuing (WFQ) → flows can have weights. Key to QoS, as we will see in next class.

[Figure: two-flow example at the output port. One packet is being transmitted while another arrives (finish numbers F=10 and F=2 shown); in the second panel (finish numbers F=10, F=8, F=5) there is no pre-emption: the packet currently being transmitted cannot be interrupted, even by a packet with a smaller finish number.]


Fair Queuing Tradeoffs

  • FQ can control congestion by monitoring flows

    • Need flows to be adaptive to avoid congestion collapse

  • Complex state

    • Must keep queue per flow

      • Hard in routers with many flows (e.g., backbone routers)

      • Flow aggregation is a possibility (e.g. do fairness per domain)

  • Complex computation

    • Classification into flows may be hard

    • Must keep queues sorted by finish times

    • Must track number of flows at fine time scales


Token Bucket for Traffic Policing at the Network Edge

  • Tokens enter the bucket at rate r; the bucket holds at most b tokens (bucket depth b), and tokens arriving when it is full overflow and are lost

  • Enough tokens → packet goes through, tokens removed

  • Not enough tokens → packet waits for tokens to accumulate


Token Bucket Characteristics

  • In the long run, the rate is limited to r

  • In the short run, a burst of size b can be sent

  • The amount of traffic entering in any interval of length T is bounded by:

    • Traffic ≤ b + r·T (see the policer sketch after this list)

    • Can provide a loose sense of isolation among flows.

      • Especially because the send rate of each flow is throttled at the source

      • Still need some mechanism within the network to ensure performance guarantees
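
A rough Python rendition of the policer described in the last two slides; TokenBucket is an invented name, and counting tokens in bytes is an assumption (one common convention), not something the lecture specifies.

    import time

    class TokenBucket:
        def __init__(self, rate_bytes_per_s, depth_bytes):
            self.r = rate_bytes_per_s      # tokens enter the bucket at rate r
            self.b = depth_bytes           # bucket depth b: extra tokens overflow
            self.tokens = depth_bytes      # start with a full bucket
            self.last = time.monotonic()

        def _refill(self):
            now = time.monotonic()
            self.tokens = min(self.b, self.tokens + self.r * (now - self.last))
            self.last = now

        def conforms(self, packet_bytes):
            # Enough tokens -> packet goes through and tokens are removed;
            # not enough -> caller must wait (or drop/mark the packet).
            self._refill()
            if self.tokens >= packet_bytes:
                self.tokens -= packet_bytes
                return True
            return False

    # Over any interval of length T, the traffic admitted is bounded by b + r*T.
    tb = TokenBucket(rate_bytes_per_s=125_000, depth_bytes=3_000)   # ~1 Mbit/s, 3 KB burst
    print(tb.conforms(1500), tb.conforms(1500), tb.conforms(1500))  # True True False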

