
Random Early Detection (RED)



1. Random Early Detection (RED)
• Router notifies source before congestion happens
  - just drop the packet (TCP will time out and adjust its window)
  - could make this explicit by marking the packet
• Early random drop
  - rather than wait for the queue to become full, drop each arriving packet with some drop probability whenever the average queue length exceeds some drop level
• RED details (normal, congestion avoidance, congestion control)
  - compute the average queue length as a weighted average of queue lengths each time a packet arrives
  - if the average is less than the minimum threshold, enqueue the packet
  - if the average exceeds the maximum threshold, drop the arriving packet
  - otherwise, drop the arriving packet with probability P
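A minimal sketch of the per-arrival RED decision described on this slide. The threshold values, the averaging weight, and the function names are illustrative assumptions; computing P itself is covered after the next slide.

```python
import random

# Illustrative values only; the slides do not fix these parameters.
MIN_THRESHOLD = 5      # packets
MAX_THRESHOLD = 15     # packets
WEIGHT = 0.002         # weight for the moving average of the queue length

avg_len = 0.0          # weighted average queue length, updated per arrival

def on_packet_arrival(queue, packet, drop_prob_p):
    """Per-packet RED decision: enqueue, drop, or drop with probability P."""
    global avg_len
    # Average queue length is a weighted average, recomputed on each arrival.
    avg_len = (1 - WEIGHT) * avg_len + WEIGHT * len(queue)

    if avg_len < MIN_THRESHOLD:
        queue.append(packet)          # below MinThreshold: always enqueue
    elif avg_len > MAX_THRESHOLD:
        return                        # above MaxThreshold: always drop
    elif random.random() < drop_prob_p:
        return                        # in between: drop with probability P
    else:
        queue.append(packet)
```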

2. RED
• P is a function of average queue length
  - small bursts pass through unharmed
  - only affects sustained overloads
  - P also depends on how long since the last drop; RED counts new packets that have been queued while the average length has been between the two thresholds
  - TempP = MaxP * (AvgLen - MinThreshold) / (MaxThreshold - MinThreshold)
  - P = TempP / (1 - count * TempP)
• Cooperative sources reduce their rate and get lower overall delays; uncooperative sources get severe packet loss
• Random drops help avoid global synchronization, which occurs when hundreds or thousands of flows back off and go into slow start at roughly the same time
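The TempP / P formulas above, written out as a small helper. The default threshold and MaxP values are placeholders, and capping P at 1 is an added safeguard rather than something stated on the slide.

```python
def red_drop_probability(avg_len, count, min_th=5, max_th=15, max_p=0.02):
    """Drop probability for MinThreshold <= AvgLen < MaxThreshold.
    `count` is the number of packets queued since the last drop while
    the average stayed between the two thresholds."""
    temp_p = max_p * (avg_len - min_th) / (max_th - min_th)
    denom = 1 - count * temp_p
    # As count grows, denom shrinks and P rises, so a drop becomes more
    # likely the longer it has been since the last one; cap P at 1.
    return 1.0 if denom <= 0 else min(1.0, temp_p / denom)
```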

3. More on RED
• RED is a congestion avoidance mechanism (rather than congestion control): it predicts when congestion is about to happen and reduces the rate at which hosts send data just before the queue saturates
• The probability of dropping a particular flow's packet(s) is roughly proportional to the share of the bandwidth that flow is currently getting
• MaxP is typically set to 0.02
• If traffic is bursty, the min threshold should be sufficiently large to keep link utilization at an acceptably high level
• The difference between the two thresholds should be larger than the typical increase in the calculated average queue length in one RTT; setting the max threshold to twice the min threshold seems reasonable
• RED treats all flows the same; by itself it cannot differentiate services
• Unfair or weighted RED: lower drop probability for higher-priority traffic, with priorities set at edge routers or hosts (sketched below)
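A rough sketch of the weighted-RED idea from the last bullet, using the rules of thumb above (MaxP around 0.02, MaxThreshold twice MinThreshold). The priority-to-MaxP mapping is purely illustrative.

```python
def weighted_red_drop_probability(avg_len, priority, min_th=5, max_p=0.02):
    """Weighted RED sketch: higher-priority traffic gets a smaller MaxP and
    therefore a lower drop probability at the same average queue length.
    The priority-to-MaxP mapping below is an illustrative assumption."""
    max_th = 2 * min_th                                  # max = twice min threshold
    max_p_for = {0: max_p, 1: max_p / 2, 2: max_p / 4}   # 2 = highest priority
    effective_max_p = max_p_for.get(priority, max_p)
    if avg_len < min_th:
        return 0.0
    if avg_len >= max_th:
        return 1.0
    return effective_max_p * (avg_len - min_th) / (max_th - min_th)
```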

4. A Controlled-Load Scheme
• A connection has to specify its Tspec in terms of token bucket rate and depth
• If there aren't enough tokens present at the time of transmission, the packet is treated as non-conformant
• Non-conformant and best-effort traffic is injected into the network unmarked
• Conformant traffic is marked
• At routers, an enhanced RED (ERED) FIFO is used
• Marked packets have a lower drop probability
• Admission control ensures the sum of rates does not exceed link capacity
• Designed for TCP-based applications (assumes TCP Reno)
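One way the marking step could look in code, assuming a simple byte-based token bucket; the class name, refill style, and clock source are assumptions, not details from the slides.

```python
import time

class TokenBucketMarker:
    """Marks a packet as conformant when enough tokens are present;
    otherwise it is sent unmarked (treated like best-effort traffic)."""

    def __init__(self, rate_bytes_per_s, depth_bytes):
        self.rate = rate_bytes_per_s      # Tspec token rate
        self.depth = depth_bytes          # Tspec token bucket depth
        self.tokens = depth_bytes
        self.last_refill = time.monotonic()

    def _refill(self):
        now = time.monotonic()
        self.tokens = min(self.depth,
                          self.tokens + self.rate * (now - self.last_refill))
        self.last_refill = now

    def classify(self, packet_len):
        """Return True (mark as conformant) or False (send unmarked)."""
        self._refill()
        if self.tokens >= packet_len:
            self.tokens -= packet_len
            return True
        return False
```

A packet classified True would then see the lower drop probability at the ERED routers along the path.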

5. Effect of Service Rate
• Compliant throughput observed to be less than the reserved rate of service
• Not all tokens generated are used to mark packets at the source, or the token buffer overflows
• TCP's congestion control is overly conservative and does not exploit the reservation the connection has
• TCP ceases to transmit due to ack compression (or gaps between successive acknowledgments)
• In fast recovery, TCP cuts its window in half by halting additional transmissions until half the original window clears the network
• Gaps in the ack stream also form due to normal dynamics of network traffic

6. Effect of Token Bucket Depth
• Solution: increase token bucket depth
• But, burst losses can occur
• Or, more buffers in the network are needed
• Large token buckets do not help
• Unless we have better admission control!

7. TCP Adaptations
• Adapt acknowledgment-based transmit triggers to a rate-based reservation paradigm
• Delayed transmissions: a segment is held back for a random amount of time when there aren't enough tokens, thus reducing the drop probability of packets
  - not very effective in the presence of reverse-path congestion
• Timed transmissions (sketched below): whenever a periodic timer expires, if there are sufficient tokens, the sender sends the packet as marked, ignoring the congestion window (but still honoring the receiver's advertised window)
  - non-compliant packets are sent only if there is room under the congestion window
  - a connection receives its reserved rate
  - but it does not share excess bandwidth equally with best-effort traffic
  - connections with larger reservations are penalized more
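The timed-transmissions rule above, reduced to two predicates; the variable names (mss, awnd, cwnd) are assumptions and the surrounding sender loop is omitted.

```python
def can_send_marked(tokens, mss, bytes_in_flight, awnd):
    """Timed transmissions: on each timer expiry, send a marked segment if
    there are enough tokens and room under the receiver's advertised
    window (AWND); the congestion window is deliberately ignored."""
    return tokens >= mss and bytes_in_flight + mss <= awnd

def can_send_unmarked(cwnd, awnd, bytes_in_flight, mss):
    """Non-compliant packets go out only if there is also room under the
    congestion window, i.e. under min(CWND, AWND)."""
    return bytes_in_flight + mss <= min(cwnd, awnd)
```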

8. Rate Adaptive Windowing
• The congestion window consists of a reserved part (RWND) and a variable part (CWND)
• RWND equals the reserved rate times the estimated RTT
• CWND tries to estimate the residual capacity and share it with other active connections
• In fast recovery, set CWND to RWND + (CWND - RWND)/2
• In slow start, set CWND to RWND + 1, and set SSTHRESH to the minimum of RWND + (CWND - RWND)/2 and AWND
• Scale the window increase by (CWND - RWND)/CWND
• The sender still sends the minimum of CWND and AWND
• The size of the receiver's buffer must be at least equal to RWND to sustain the reserved rate using TCP
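The window arithmetic of this slide written out directly. Names follow the slide (RWND, CWND, SSTHRESH, AWND); the units and the conversion to whole segments are assumptions.

```python
def rwnd_segments(reserved_rate_bytes_per_s, estimated_rtt_s, mss_bytes):
    """RWND: reserved rate times estimated RTT, expressed in segments."""
    return max(1, int(reserved_rate_bytes_per_s * estimated_rtt_s / mss_bytes))

def fast_recovery_cwnd(cwnd, rwnd):
    """Fast recovery: keep the reserved part, halve only the variable part."""
    return rwnd + (cwnd - rwnd) / 2

def slow_start_restart(cwnd, rwnd, awnd):
    """Slow start: CWND restarts just above RWND; SSTHRESH is capped by AWND."""
    new_cwnd = rwnd + 1
    ssthresh = min(rwnd + (cwnd - rwnd) / 2, awnd)
    return new_cwnd, ssthresh

def window_increase(cwnd, rwnd, increment):
    """Scale the usual congestion-avoidance increase by (CWND - RWND)/CWND,
    so only the variable part of the window grows with probing."""
    return cwnd + increment * (cwnd - rwnd) / cwnd
```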

9. Comments
• Token bucket depth should be at least TimerInterval times TokenBucketRate (see the calculation below)
• As the timer interrupt interval increases, the token bucket size (burst size) increases
• Burst losses cause throughput degradation
• Using fine-grained timers allows applications to request smaller token buckets
• ERED can be embedded as a class in CBQ to provide weighted bandwidth sharing among connections
• Many multimedia applications do not require the reliability of TCP, but the results can be extended to TCP-friendly RTP-based applications
• Admission control needs to consider bucket sizes, router buffer sizes, and ERED parameters
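The bucket-depth rule of thumb as a quick calculation; the example rate and timer intervals are made-up numbers, used only to show how a coarser timer forces a larger bucket.

```python
def min_token_bucket_depth(timer_interval_s, token_rate_bytes_per_s):
    """Token bucket depth should be at least TimerInterval * TokenBucketRate."""
    return timer_interval_s * token_rate_bytes_per_s

# Illustrative numbers: at 1 Mbps (125000 bytes/s), a 10 ms timer needs a
# bucket of at least 1250 bytes, while a 100 ms timer needs 12500 bytes.
print(min_token_bucket_depth(0.010, 125_000))   # 1250.0
print(min_token_bucket_depth(0.100, 125_000))   # 12500.0
```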

10. Class Based Queueing
• Hierarchical link sharing - a priori hierarchical partitioning of a link's bandwidth among organizations, traffic classes, or protocol families
• A class can continue unregulated if
  - the class is not overlimit (i.e. it is not using more than its share), OR
  - the class has a not-overlimit ancestor at level i, and there are no unsatisfied classes at levels lower than i; a class is unsatisfied if it is underlimit and has a persistent backlog
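A sketch of the link-sharing test on this slide. The ShareClass structure and the set-of-levels representation of unsatisfied classes are assumed, minimal stand-ins for a real CBQ implementation.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ShareClass:
    """Minimal stand-in for a CBQ class node (assumed representation)."""
    level: int
    overlimit: bool                     # using more than its allotted share
    parent: Optional["ShareClass"] = None

def may_continue_unregulated(cls: ShareClass, unsatisfied_levels: set) -> bool:
    """A class may continue unregulated if it is not overlimit, or if it has
    a not-overlimit ancestor at level i and no class at a level lower than i
    is unsatisfied (underlimit with a persistent backlog)."""
    if not cls.overlimit:
        return True
    ancestor = cls.parent
    while ancestor is not None:
        if not ancestor.overlimit:
            i = ancestor.level
            if not any(level < i for level in unsatisfied_levels):
                return True
        ancestor = ancestor.parent
    return False
```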
