
Cloud Control with Distributed Rate Limiting



Presentation Transcript


  1. Cloud Control with Distributed Rate Limiting. Barath Raghavan, Kashi Vishwanath, Sriram Ramabhadran, Kenneth Yocum, and Alex C. Snoeren, University of California, San Diego

  2. Centralized network services • Hosting with a single physical presence • However, clients are across the Internet

  3. Running on a cloud • Resources and clients are across the world • Services combine these distributed resources (figure: distributed sites sharing a 1 Gbps limit)

  4. Key challenge We want to control distributed resources as if they were centralized

  5. Ideal: Emulate a single limiter • Make distributed feel centralized • Packets should experience the same limiter behavior (figure: S→D flows each traversing a limiter, with 0 ms of inter-limiter delay)

  6. Distributed Rate Limiting (DRL) • Achieve functionally equivalent behavior to a central limiter • Three designs: (1) Global Token Bucket and (2) Global Random Drop, both packet-level (general); (3) Flow Proportional Share, flow-level (TCP-specific)

  7. Distributed Rate Limiting tradeoffs • Accuracy (how close to K Mbps is delivered; flow-rate fairness) plus Responsiveness (how quickly demand shifts are accommodated) vs. Communication Efficiency (how much and how often rate limiters must communicate)

  8. DRL Architecture (figure: Limiters 1-4 exchanging gossip) • On each packet arrival, estimate local demand • On each estimate-interval timer, gossip local demand to the other limiters • From the gossiped global demand, set the local allocation and enforce the limit
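
As a concrete reading of this loop, here is a minimal Python sketch of one limiter's cycle (a sketch under assumptions; all names are hypothetical, since the slides describe the steps but give no code):

    class Limiter:
        def __init__(self, global_limit_bps):
            self.global_limit = global_limit_bps   # K, the aggregate limit (bytes/sec)
            self.local_bytes = 0                   # bytes seen this estimate interval
            self.peer_demands = {}                 # peer id -> last gossiped demand (bytes/sec)
            self.local_limit = global_limit_bps    # allocation enforced locally

        def on_packet(self, size_bytes):
            self.local_bytes += size_bytes         # "Estimate local demand"

        def on_gossip(self, peer_id, demand_bps):
            self.peer_demands[peer_id] = demand_bps  # demand reported by another limiter

        def on_estimate_timer(self, interval_s, gossip_send):
            demand = self.local_bytes / interval_s   # local demand, bytes/sec
            self.local_bytes = 0
            gossip_send(demand)                      # "Gossip" demand to the other limiters
            global_demand = demand + sum(self.peer_demands.values())
            # "Set allocation": how the limit is split is policy-specific (GTB,
            # GRD, or FPS, covered next); a naive proportional split is shown
            # here only as a placeholder.
            if global_demand > 0:
                self.local_limit = self.global_limit * (demand / global_demand)

How the allocation is set is exactly what distinguishes the three designs that follow.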

  9. Token Buckets (figure: packets drain a token bucket with fill rate K Mbps)
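
A minimal token-bucket sketch in Python (hypothetical names): tokens accrue at the fill rate K, and a packet is forwarded only if it can spend enough tokens.

    import time

    class TokenBucket:
        def __init__(self, fill_rate_Bps, depth_bytes):
            self.rate = fill_rate_Bps         # fill rate K, in bytes/sec
            self.depth = depth_bytes          # bucket depth = maximum burst
            self.tokens = depth_bytes
            self.last = time.monotonic()

        def allow(self, pkt_bytes):
            now = time.monotonic()
            self.tokens = min(self.depth, self.tokens + (now - self.last) * self.rate)
            self.last = now
            if self.tokens >= pkt_bytes:
                self.tokens -= pkt_bytes      # spend tokens and forward the packet
                return True
            return False                      # not enough tokens: drop the packet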

  10. Building a Global Token Bucket (figure: Limiter 1 and Limiter 2 exchange demand info in bytes/sec)
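
One plausible rendering of this slide as code (a sketch, not the paper's implementation; names are hypothetical): each limiter fills its own bucket at the full global rate and deducts the bytes that peers report forwarding, so every local bucket approximates the one central bucket.

    class GlobalTokenBucket:
        def __init__(self, global_rate_Bps, depth_bytes):
            self.rate = global_rate_Bps       # fill at the full *global* rate K
            self.depth = depth_bytes
            self.tokens = depth_bytes

        def refill(self, elapsed_s):
            self.tokens = min(self.depth, self.tokens + elapsed_s * self.rate)

        def on_local_packet(self, pkt_bytes):
            if self.tokens >= pkt_bytes:
                self.tokens -= pkt_bytes      # local traffic spends tokens directly
                return True                   # forward
            return False                      # drop

        def on_gossip(self, remote_bytes):
            # A peer reports bytes it forwarded; deduct them as if they had
            # passed through this bucket too.
            self.tokens = max(0.0, self.tokens - remote_bytes)

The catch, shown two slides below, is that these remote deductions arrive a full estimate interval late.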

  11. Baseline experiment (figure): a single token bucket carrying 10 TCP flows, compared against Limiter 1 with 3 TCP flows and Limiter 2 with 7 TCP flows

  12. Global Token Bucket (GTB), 50 ms estimate interval (plots: flow rates under the global token bucket vs. a single token bucket for the 3-, 7-, and 10-flow setups) • Problem: GTB requires near-instantaneous arrival info

  13. Global Random Drop (GRD) • Limiters send and collect global rate info from one another • Example: 5 Mbps limit, 4 Mbps global arrival rate • Case 1: below the global limit, forward the packet

  14. Global Random Drop (GRD) • Example: 6 Mbps global arrival rate, 5 Mbps limit • Case 2: above the global limit, drop with probability excess / global arrival rate = (6 - 5) / 6 = 1/6, the same at all limiters
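
Both cases reduce to one forwarding decision. A minimal sketch using the slide's numbers (the function name is hypothetical):

    import random

    def grd_forward(global_rate_mbps, limit_mbps):
        if global_rate_mbps <= limit_mbps:
            return True                          # Case 1: below the limit, forward
        drop_prob = (global_rate_mbps - limit_mbps) / global_rate_mbps
        return random.random() >= drop_prob      # Case 2: drop with probability 1/6 here

    print(grd_forward(6, 5))  # forwards with probability 5/6

Because every limiter sees the same gossiped global rate, the drop probability is identical everywhere, mimicking one central random-drop queue.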

  15. GRD in the baseline experiment, 50 ms estimate interval (plots: global random drop vs. single token bucket for the 3-, 7-, and 10-flow setups) • Delivers flow behavior similar to a central limiter

  16. GRD with flow join, 50 ms estimate interval (plot: flow 1 joins at limiter 1, flow 2 joins at limiter 2, flow 3 joins at limiter 3)

  17. Flow Proportional Share (FPS) (figure: Limiter 1 with 3 TCP flows, Limiter 2 with 7 TCP flows)

  18. Flow Proportional Share (FPS) • Goal: provide inter-flow fairness for TCP flows • Local token-bucket enforcement weighted by flow count ("3 flows" at Limiter 1, "7 flows" at Limiter 2)

  19. Estimating TCP demand (figure: Limiter 1 with two TCP flows, Limiter 2 with 3 TCP flows)

  20. Estimating TCP demand • Local token rate (limit) = 10 Mbps • Flow A = 5 Mbps, Flow B = 5 Mbps • Flow count = 2 flows

  21. Estimating TCP demand (figure repeated: Limiter 1 with two TCP flows, Limiter 2 with 3 TCP flows)

  22. Estimating skewed TCP demand • Local token rate (limit) = 10 Mbps • Flow A = 2 Mbps (bottlenecked elsewhere), Flow B = 8 Mbps • Flow count ≠ demand • Key insight: use a TCP flow's rate to infer demand

  23. Estimating skewed TCP demand • Local token rate (limit) = 10 Mbps • Flow A = 2 Mbps (bottlenecked elsewhere), Flow B = 8 Mbps • Effective demand = local limit / largest flow's rate = 10 / 8 = 1.25 flows
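
In code, the effective flow count falls out directly (hypothetical names):

    def effective_flow_count(local_limit_mbps, flow_rates_mbps):
        # A flow bottlenecked elsewhere understates demand, so FPS infers an
        # effective flow count from the largest (unbottlenecked) flow's rate.
        return local_limit_mbps / max(flow_rates_mbps)

    print(effective_flow_count(10, [2, 8]))  # 10 / 8 = 1.25 "flows"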

  24. Flow Proportional Share (FPS) • Global limit = 10 Mbps; Limiter 1 has 1.25 flows, Limiter 2 has 2.50 flows • Set local token rate = global limit × local flow count / total flow count = 10 Mbps × 1.25 / (1.25 + 2.50) = 3.33 Mbps
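
The allocation step is then a proportional split. A sketch reproducing the slide's arithmetic (hypothetical names):

    def fps_local_rate(global_limit_mbps, local_flows, all_flow_counts):
        # Split the global limit in proportion to each limiter's flow count.
        return global_limit_mbps * local_flows / sum(all_flow_counts)

    print(round(fps_local_rate(10, 1.25, [1.25, 2.50]), 2))  # 10 × 1.25 / 3.75 = 3.33 Mbps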

  25. Under-utilized limiters (figure: Limiter 1 with two TCP flows, one leaving wasted rate) • Fix: set the local limit equal to actual usage (the limiter returns to full utilization)
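
A one-line sketch of the fix (hypothetical name): clamp the local limit to observed usage, so unused rate is released for busier limiters at the next estimate interval.

    def adjust_for_underuse(local_limit_mbps, actual_usage_mbps):
        # Rate the local flows cannot use is released so the proportional
        # split can hand it to busier limiters on the next interval.
        return min(local_limit_mbps, actual_usage_mbps)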

  26. Flow Proportional Share (FPS), 500 ms estimate interval (results plot)

  27. Additional issues • What if a limiter has no flows and one arrives? • What about bottlenecked traffic? • What about flows with varied RTTs? • What about short-lived vs. long-lived flows? • Experimental evaluation in the paper • Evaluated on a testbed and over PlanetLab

  28. Cloud control on PlanetLab • Apache web servers on 10 PlanetLab nodes • 5 Mbps aggregate limit • Shift load over time from 10 nodes to 4 nodes

  29. Static rate limiting (plot: demands at 10 Apache servers on PlanetLab; capacity is wasted as demand shifts to just 4 nodes)

  30. FPS (top) vs. Static limiting (bottom)

  31. Conclusions • Protocol-agnostic limiting works, at extra cost: it requires shorter estimate intervals • Fine-grained packet arrival info is not required; for TCP, flow-level granularity is sufficient • Many avenues left to explore: inter-service limits, other resources (e.g., CPU)

  32. Questions!
