
Max Min Fairness




  1. Max Min Fairness • How do we define fairness? • “Any session is entitled to as much network use as is any other” • …unless some sessions can use more without hurting others • Other definitions • Network usage depends on the resource consumption by the session • Pay/bid for what you use

  2. A Simple Example • Max-min allocation: 1/3, 1/3, 1/3, 2/3

  3. How to Calculate Max-Min Flow Share • Fluid model: • Increase all flows at the same rate until some pipe fills up • Fix the bandwidth of the flows bottlenecked at that pipe • Continue with the remaining (unfixed) flows • Can be done efficiently by computing the bottleneck link at each step (a sketch follows below)
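A minimal sketch of this progressive-filling procedure in Python; the link and flow representations (a capacity map and a set of links per flow) are hypothetical, not from the slides.

def max_min_shares(links, flows):
    """Progressive filling: links maps link -> capacity, flows maps flow -> set of links."""
    rate = {f: 0.0 for f in flows}        # current rate of every flow
    remaining = dict(links)               # unused capacity per link
    active = set(flows)                   # flows still being increased
    while active:
        # number of unfixed flows crossing each link
        count = {l: sum(1 for f in active if l in flows[f]) for l in links}
        used = [l for l in links if count[l] > 0]
        if not used:
            break
        # largest equal increment every active flow can still receive
        delta = min(remaining[l] / count[l] for l in used)
        for f in active:
            rate[f] += delta
        for l in used:
            remaining[l] -= delta * count[l]
        # flows crossing a now-full link are bottlenecked: fix their rates
        saturated = {l for l in used if remaining[l] <= 1e-12}
        active = {f for f in active if not (flows[f] & saturated)}
    return rate

For a single 1 Mbps link shared by three flows, max_min_shares({'L': 1.0}, {'a': {'L'}, 'b': {'L'}, 'c': {'L'}}) returns 1/3 for each flow.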

  4. Buffer management and admission control Simplest admission policy: • Accept packets until buffer is full (tail drop) However: • Tail drop is not kind to TCP flows • RED can be used to avoid tail drop

  5. Reminder: Hallelujah for RED • Random early detection (RED) makes three improvements • Metric is a moving average of the queue length • small bursts pass through unharmed • only affects sustained overloads • Packet drop probability is a function of the mean queue length • prevents severe reaction to mild overload • Can mark packets (ECN) instead of dropping them • allows sources to detect network state without losses • RED improves performance of a network of cooperating TCP sources • No bias against bursty sources • Controls queue length regardless of endpoint cooperation

  6. How does it work? • [Figure: drop probability as a function of average queue size. RED rises linearly to max-p at max-thresh and then jumps to 1; gentle RED instead climbs linearly from max-p to 1 between max-thresh and 2*max-thresh]

  7. So is the problem solved? • Fairly easy to implement in hardware! • Can work at wire speed! • All we need to do is set the parameters… right? • Turns out there is no universal good set of parameters • Some studies show RED has NO advantage over tail drop. WHY?

  8. Parameters • avgQ = (1 − wq)·avgQ + wq·q • Floyd-Jacobson recommendations: • wq: 0.002, not less than 0.001 • max_p: 1/50 • max_th: at least twice min_th • max_th − min_th: larger than the queue increase in one RTT • Future work…
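A minimal sketch of these two pieces in Python, combining the averaging formula above with the gentle-RED drop curve from slide 6. Only wq and max_p come from the slide; the min_th and max_th values are hypothetical examples.

import random

# wq and max_p follow the slide; the thresholds are example values
WQ, MAX_P = 0.002, 1.0 / 50
MIN_TH, MAX_TH = 5, 15            # in packets; hypothetical

avg_q = 0.0                       # EWMA of the queue length

def update_avg(q):
    """avgQ = (1 - wq) * avgQ + wq * q, computed on each packet arrival."""
    global avg_q
    avg_q = (1 - WQ) * avg_q + WQ * q
    return avg_q

def drop_prob(avg):
    """Gentle RED: linear up to max_p at max_th, then linear up to 1 at 2*max_th."""
    if avg < MIN_TH:
        return 0.0
    if avg < MAX_TH:
        return MAX_P * (avg - MIN_TH) / (MAX_TH - MIN_TH)
    if avg < 2 * MAX_TH:
        return MAX_P + (1 - MAX_P) * (avg - MAX_TH) / MAX_TH
    return 1.0

def on_arrival(queue_len):
    """True if the arriving packet should be dropped (or ECN-marked)."""
    return random.random() < drop_prob(update_avg(queue_len))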

  9. So does it help us surf? Tuning RED for Web Traffic, Christiansen et al., SIGCOMM 2000 • compared to a (properly configured) FIFO queue, RED has a minimal effect on HTTP response times for offered loads up to 90% of link capacity, • response times at loads in this range are not substantially affected by RED control parameters, • between 90% and 100% load, RED can be carefully tuned to yield performance somewhat superior to FIFO; however, response times are quite sensitive to the actual RED parameter values selected, and • in such congested networks, RED parameters that provide the best link utilization produce poorer response times.

  10. SPRINT study (Diot et al.) • A parallel study, presented at NANOG 2000 • Testbed • with CISCO routers (7500) • with Dummynet • used “recommended” RED and GRED parameters • Heterogeneous delays (120 to 180 ms)

  11. Traffic characteristics • 16 to 256 TCP connections sharing the bottleneck • Experimental traffic generated by Chariot • long-lived TCP connections • a more “realistic” traffic mix: • 90% short-lived TCP connections (up to 20 packets) • 10% long-lived TCP connections • 1 Mbps UDP in both cases

  12. Testbed (CISCO routers) • [Figure: two CISCO 7500 routers connected by a 10 Mbps bottleneck link]

  13. Testbed (Dummynet) • [Figure: two CISCO 7500 routers with a Dummynet machine on the path; 10 Mbps bottleneck, 100 Mbps links elsewhere]

  14. What is Dummynet? • A link emulator built into the FreeBSD network stack; it imposes configurable bandwidth, delay, and loss on traffic • [Figure: dummynet sits between the application and the network]

  15. Metrics observed • Aggregate goodput through a router • TCP and UDP loss rate • Consecutive losses • Queuing behavior

  16. Aggregate goodput (long-lived TCP)

  17. 256 short and long lived TCP connections

  18. Consecutive packet losses (long lived)

  19. …if we use “optimal” RED parameters

  20. Consecutive packet losses (realistic traffic mix)

  21. Queuing behavior (256 long lived connections)

  22. Queuing behavior (256 connections, realistic mix)

  23. Diot’s summary • No significant difference in goodput, TCP losses, or UDP losses. • On consecutive losses, clear advantage to GRED and GRED-I. • The “gentle” modification solves many RED problems. • Oscillations: no clear winner. Traffic seems to be the determining factor.

  24. From the ISP standpoint ... • Not clear there is an advantage in deploying RED, GRED, or GRED-I. • Maybe GRED-I is an option if one can find a “universal” exponential dropping function. • ECN will work with any scheme. • Not clear the solution is in the AQM space.

  25. GRED-I with exponential dropping function • [Figure: drop probability rises exponentially from 0 to 1 as the instantaneous queue length approaches the buffer size]

  26. Deficit Round Robin (DRR) • A modification of WRR that handles different packet sizes • No need to know the average packet size • Each queue is associated with a deficit counter • Initialized to 0 • Holds the queue’s unused credit (its deficit) carried over between service opportunities

  27. DRR Algorithm (each time a queue is visited): • If size(HOL packet) ≤ quota + deficit • Send the packet • deficit ← deficit + quota − (packet size) • Else • Don’t send the packet • deficit ← deficit + quota
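A minimal sketch of this rule in Python, serving at most one head-of-line packet per queue per round as on the slide. The byte-based quota value and the reset of an idle queue's deficit are standard DRR details assumed here, not stated on the slide.

from collections import deque

class DRRQueue:
    def __init__(self, quota):
        self.quota = quota          # credit granted on each visit (bytes)
        self.deficit = 0            # unused credit carried between rounds
        self.packets = deque()      # FIFO of packet sizes (bytes)

def drr_round(queues):
    """One round-robin pass over all queues; returns the packet sizes sent."""
    sent = []
    for q in queues:
        if not q.packets:
            q.deficit = 0           # assumed: an idle queue accumulates no credit
            continue
        hol = q.packets[0]          # head-of-line packet size
        if hol <= q.quota + q.deficit:
            sent.append(q.packets.popleft())
            q.deficit += q.quota - hol
        else:
            q.deficit += q.quota    # packet too large this round: save the credit
    return sent

With a 500-byte quota, a 1200-byte head-of-line packet waits two rounds (building up 1000 bytes of deficit) and is sent on the third.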

  28. About Fair Queuing ... • Not only feasible … easy at the edges! • www.agere.com (an example) • vendors support from 64k to 200k flows • Really fair • everybody gets what he/she paid for • local signaling (end host to CPE)

  29. Core-stateless fair queueing: fair queueing only at the edge • WFQ is hard to do at the core • Edge routers estimate each flow’s rate and label its packets with it • Core routers maintain FIFO queues and drop packets based on the label
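The core-side drop decision can be sketched in a few lines; alpha below stands for the core router's current fair-share estimate, which CSFQ computes from aggregate arrival and acceptance rates (that estimation is omitted in this sketch).

import random

def csfq_core_accept(label_rate, alpha):
    """Accept the packet with probability min(1, alpha / label_rate).

    label_rate: flow rate written into the packet header at the edge
    alpha:      core router's current estimate of the per-flow fair share
    Flows at or below the fair share lose nothing; faster flows are trimmed
    down to roughly alpha on average, using only FIFO queues in the core.
    """
    drop_p = max(0.0, 1.0 - alpha / label_rate)
    return random.random() >= drop_p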

  30. CSFQ summary • Better than FIFO and RED • Similar to FRED • Not as good as DRR

  31. Rainbow fair queueing • Similar to CSFQ • Performance comparable to CSFQ • Enables applications to mark their own packets and achieve better goodput

  32. Rainbow Fair Queueing (RFQ) • Example • A: 10 Kbps, B: 6 Kbps, C: 8 Kbps • Each layer (color): 2 Kbps • So A’s packets are spread over 5 colors, B’s over 3, and C’s over 4; dropping the highest colors first cuts into the fastest flows first

  33. RFQ: basic mechanisms • (1) estimation of the flow arrival rate at the edge routers • (2) selection of the rate for each color • (3) assignment of colors to packets • (4) the core router algorithm

  34. Rainbow Fair Queueing (RFQ) (1) Estimating the flow arrival rate at the edge routers • r_i^new: new estimate of flow i’s arrival rate • t_i^k: arrival time of the kth packet of flow i • l_i^k: length of the kth packet of flow i • K: a constant (averaging time scale) • T_i^k = t_i^k − t_i^(k−1)
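The estimation formula itself was an image and is missing from the transcript; given the variables listed, it is presumably the exponential averaging used in CSFQ, sketched here under that assumption.

import math

def update_rate(r_old, pkt_len, gap, K):
    """Exponentially averaged per-flow rate estimate (CSFQ-style; assumed).

    pkt_len: l_i^k, length of the arriving packet
    gap:     T_i^k = t_i^k - t_i^(k-1), time since the flow's previous packet
    K:       averaging constant (time scale)
    """
    w = math.exp(-gap / K)
    return (1 - w) * (pkt_len / gap) + w * r_old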

  35. Rainbow Fair Queueing (RFQ) (2) Selecting the rate for each color • c_i: rate assigned to color i • N: total number of colors (a multiple of b) • a, b: determine the block structure • P: the maximum flow rate in the network

  36. Rainbow Fair Queueing (RFQ): Example • N = 8, a = b = 2 • c0 = c1 = c2 = c3 = P/16, c4 = c5 = P/8, c6 = c7 = P/4 • (the eight color rates sum to P, the maximum flow rate)

  37. Rainbow Fair Queueing (RFQ) (3) Assigning colors to packets • Suppose the current estimate of the flow arrival rate is r, and j is the smallest color index whose cumulative color rate covers r. • The current packet is then assigned a color with the corresponding probability.
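The two formulas on this slide were images and did not survive the transcript. A sketch under the assumption that a flow of rate r fills colors from 0 upward, each color i carrying at most c_i of the flow's traffic, with color j taking the remainder:

import random

def assign_color(r, color_rates):
    """Pick a color for the current packet of a flow with estimated rate r.

    color_rates: [c0, c1, ..., cN-1] as on slide 36.
    Assumed rule: color i gets a share c_i / r of the flow's packets for the
    colors fully below r, and the smallest j whose cumulative rate reaches r
    gets the remainder. This reconstruction is not from the slides.
    """
    x = random.uniform(0.0, r)       # position of this packet within the flow's rate
    cumulative = 0.0
    for color, c in enumerate(color_rates):
        cumulative += c
        if x <= cumulative:
            return color
    return len(color_rates) - 1      # r exceeds the sum of all color rates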

  38. (4) The core router algorithm • Conditions to decrease the color threshold: • queue above a threshold • flow bandwidth (relative to the service rate) • positive queue gradient • “hold your horses” (wait between adjustments) • Conditions to increase the color threshold: • enough time has passed • flow below the service rate
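A minimal sketch of the core side: the router keeps a single color threshold, drops arriving packets whose color exceeds it, and moves the threshold down when the queue grows and up when it drains. The trigger conditions listed above are simplified here to plain queue-length watermarks, which is an assumption, not the paper's exact rule.

class RFQCore:
    """Core router sketch: one FIFO queue plus a color threshold, no per-flow state."""

    def __init__(self, max_color, q_high, q_low):
        self.max_color = max_color
        self.threshold = max_color   # packets with a color above this are dropped
        self.q_high = q_high         # queue length that triggers tightening
        self.q_low = q_low           # queue length that allows relaxing
        self.queue = []

    def on_arrival(self, color, packet):
        if color > self.threshold:
            return False             # drop: color above the current threshold
        self.queue.append(packet)
        if len(self.queue) > self.q_high and self.threshold > 0:
            self.threshold -= 1      # congestion building: exclude one more color
        return True

    def on_departure(self):
        if not self.queue:
            return None
        pkt = self.queue.pop(0)
        if len(self.queue) < self.q_low and self.threshold < self.max_color:
            self.threshold += 1      # queue draining: admit one more color
        return pkt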

  39. Rainbow Fair Queueing (RFQ) • Weighted RFQ • w_i: weight for flow i • flow i uses the scaled color rates w_i · c_j in place of c_j

  40. Simulations: A single congested link

  41. Fairness: flow i sends at 0.313i

  42. Throughput: TCP flow

  43. Throughput: UDP flows

  44. Control Responsiveness • 10 Mbps link: 8 × 1 Mbps sources, then 7 × 1 Mbps plus one 8 Mbps source

  45. Simulations: Performance Effects of Buffer Size

  46. Simulations: TCP Performance under Various Round-Trip Delays
