Data Center TCP (DCTCP). Mohammad Alizadeh, Albert Greenberg, David A. Maltz, Jitendra Padhye, Parveen Patel, Balaji Prabhakar, Sudipta Sengupta, Murari Sridharan. Microsoft Research, Stanford University.

Presentation Transcript


  1. Data Center TCP (DCTCP) Mohammad Alizadeh, Albert Greenberg, David A. Maltz, Jitendra Padhye, Parveen Patel, Balaji Prabhakar, Sudipta Sengupta, Murari Sridharan. Microsoft Research, Stanford University. Presented by liyong

  2. Outline • Introduction • Communications in Data Centers • The DCTCP Algorithm • Analysis and Experiments • Conclusion

  3. Introduction • Communications in Data Centers • The DCTCP Algorithm • Analysis and Experiments • Conclusion

  4. Data Center Packet Transport • Cloud computing service providers • Amazon, Microsoft, Google • Need to build highly available, high-performance computing and storage infrastructure using low-cost, commodity components

  5. We focus on soft real-time applications • Supporting • Web search • Retail • Advertising • Recommendation • These require three things from the data center network • Low latency for short flows • High burst tolerance • High utilization for long flows

  6. Two major contributions • First • Measure and analyze production traffic • Extract application patterns and needs • Identify impairments that hurt performance • Second • Propose Data Center TCP (DCTCP) • Evaluate DCTCP at 1 and 10 Gbps on ECN-capable commodity switches

  7. Production traffic • >150 TB of compressed data, collected over the course of a month from ~6000 servers • The measurements reveal that 99.91% of the traffic in our data center is TCP traffic • The traffic consists of three kinds • Query traffic (2 KB to 20 KB in size) • Delay-sensitive short messages (100 KB to 1 MB) • Throughput-sensitive long flows (1 MB to 100 MB) • Our key learning from these measurements is that, to meet the requirements of such a diverse mix of short and long flows, switch buffer occupancies need to be persistently low while maintaining high throughput for the long flows. • DCTCP is designed to do exactly this.

  8. TCP in the Data Center • We’ll see that TCP does not meet the demands of these apps. • Incast • Suffers from bursty packet drops • Not fast enough to utilize spare bandwidth • Builds up large queues: • Adds significant latency. • Wastes precious buffers, which is especially bad with shallow-buffered switches. • Operators work around TCP problems with ad-hoc, inefficient, often expensive solutions. • Our solution: Data Center TCP

  9. Introduction • Communications in Data Centers • The DCTCP Algorithm • Analysis and Experiments • Conclusion

  10. Partition/Aggregate Application Structure

  11. Partition/Aggregate • Time is money • Strict deadlines (SLAs) • Missed deadline → lower quality result • (Diagram: a top-level aggregator (TLA) with a 250 ms deadline fans requests out to mid-level aggregators (MLAs) with 50 ms deadlines, which fan out to worker nodes with 10 ms deadlines; the example shows a query assembling Picasso quotes, e.g. “Art is a lie that makes us realize the truth.”)

  12. Workloads • Partition/Aggregate (Query): delay-sensitive • Short messages [50KB-1MB] (Coordination, Control state): delay-sensitive • Large flows [1MB-50MB] (Data update): throughput-sensitive

  13. Impairments

  14. Switches • Most commodity switches in clusters are shared-memory switches that aim to exploit statistical multiplexing gain through a logically common packet buffer available to all switch ports. • Packets arriving on an interface are stored in a high-speed multi-ported memory shared by all the interfaces. • Memory from the shared pool is dynamically allocated to a packet by an MMU, which attempts to give each interface as much memory as it needs while preventing unfairness by dynamically adjusting the maximum amount of memory any one interface can take. • Building large multi-ported memories is very expensive, so most cheap switches are shallow-buffered, with packet buffer being the scarcest resource. The shallow packet buffers cause three specific performance impairments, which we discuss next.

  15. Incast • Synchronized mice collide. • Caused by Partition/Aggregate. • (Diagram: Workers 1-4 send their responses to an Aggregator at the same time; with RTOmin = 300 ms, the resulting loss leads to a TCP timeout.)

  16. Queue Buildup • Big flows build up queues. • Increased latency for short flows. • (Diagram: Senders 1 and 2 share a bottleneck link to a Receiver.) • Measurements in a Bing cluster • For 90% of packets: RTT < 1 ms • For 10% of packets: 1 ms < RTT < 15 ms

  17. Buffer Pressure • Because the switch buffer is shared, long flows on other ports eat into the buffer available to short flows, so the loss rate of short flows depends on the number of long flows traversing other ports. • The result is packet loss and timeouts, as in incast, but without requiring synchronized flows.

  18. Data Center Transport Requirements • 1. High Burst Tolerance • Incast due to Partition/Aggregate is common. • 2. Low Latency • Short flows, queries • 3. High Throughput • Large file transfers • The challenge is to achieve all three together.

  19. Balance Between Requirements: High Throughput, Low Latency, High Burst Tolerance • Deep buffers: queuing delays increase latency. • Shallow buffers: bad for bursts & throughput. • AQM (RED): the average queue is not fast enough for incast. • Reduced RTOmin (SIGCOMM ’09): doesn’t help latency. • Objective: low queue occupancy & high throughput → DCTCP

  20. Introduction • Communications in Data Centers • The DCTCP Algorithm • Analysis and Experiments • Conclusion

  21. Review: TCP Congestion Control • Four stages: • Slow Start • Congestion Avoidance • Fast Retransmit • Fast Recovery • A router must maintain one or more queues per port, so it is important to control the queue • Two kinds of queue control algorithms • Queue management algorithm: manages the queue length by dropping packets when necessary • Queue scheduling algorithm: determines the next packet to send

  22. Two queue management approaches • Passive queue management: drop packets only after the queue is full. • Traditional methods • Drop-tail • Random drop • Drop-front • Drawbacks • Lock-out: a few flows occupy the queue exclusively and prevent packets from other flows from entering it • Full queues: congestion signals are sent only when the queue is full, so the queue stays full for long periods • Active Queue Management (AQM)

  23. AQM (dropping packets before the queue is full) • RED (Random Early Detection) [RFC 2309] • Calculate the average queue length (aveQ) to estimate the degree of congestion • Calculate the drop probability P according to the degree of congestion, using two thresholds minth and maxth: • aveQ < minth: don’t drop packets • minth < aveQ < maxth: drop packets with probability P • aveQ > maxth: drop all packets • Drawback: sometimes drops packets even though the queue is not full • ECN (Explicit Congestion Notification) [RFC 3168]: a method that signals congestion with explicit feedback (packet marking) instead of dropping packets
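A minimal sketch of the RED decision just described (illustrative only; the EWMA weight and the linear probability ramp between minth and maxth are common defaults, not values from the slides):

```python
import random


def red_step(ave_q, q_len, min_th, max_th, max_p, weight=0.002):
    """One RED decision: returns (updated average queue, drop/mark this packet?)."""
    ave_q = (1 - weight) * ave_q + weight * q_len       # EWMA of the queue length
    if ave_q < min_th:
        return ave_q, False                             # no congestion: accept packet
    if ave_q > max_th:
        return ave_q, True                              # heavy congestion: drop/mark all
    p = max_p * (ave_q - min_th) / (max_th - min_th)    # probability grows with congestion
    return ave_q, random.random() < p
```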

  24. ECN • Routers or switches must support it (ECN-capable). • Two bits in the ECN field of the IP packet header • ECT (ECN-Capable Transport): set by the sender to indicate whether its transport protocol supports ECN • CE (Congestion Experienced): set by routers or switches to indicate that congestion has occurred • Two flag bits in the TCP header • ECN-Echo: the receiver notifies the sender that it has received a CE-marked packet • CWR (Congestion Window Reduced): the sender notifies the receiver that it has decreased its congestion window • ECN is integrated with RED
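For reference, the bits involved, written out as constants (a sketch based on RFC 3168; the exact codepoints and flag positions come from the RFC, not from the slide):

```python
# IP header: 2-bit ECN field (RFC 3168)
NOT_ECT = 0b00  # sender's transport does not support ECN
ECT_1   = 0b01  # ECN-Capable Transport, codepoint 1
ECT_0   = 0b10  # ECN-Capable Transport, codepoint 0
CE      = 0b11  # Congestion Experienced, set by a router/switch

# TCP flags byte: the two ECN-related flags
ECE = 0x40      # ECN-Echo: receiver tells the sender it saw a CE mark
CWR = 0x80      # Congestion Window Reduced: sender acknowledges the echo
```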

  25. ECN working principle • (Diagram, translated: (1) the source sets the ECT bit in the IP header; (2) a congested router or switch sets the CE bit; (3) the destination echoes the congestion signal by setting ECN-Echo in the TCP ACK; (4) the source sets CWR after reducing its congestion window.)

  26. Review: The TCP/ECN Control Loop • ECN = Explicit Congestion Notification • (Diagram: Senders 1 and 2 share a bottleneck switch to a Receiver; the switch applies a 1-bit ECN mark, which the receiver echoes back to the senders.)

  27. Two Key Ideas • React in proportion to the extent of congestion, not its presence. • Reduces variance in sending rates, lowering queuing requirements. • Mark based on instantaneous queue length. • Fast feedback to better deal with bursts.

  28. Data Center TCP Algorithm • Switch side: mark packets (set CE) when the instantaneous queue length > K; don’t mark when it is below K. • Sender side: maintain an estimate of the fraction of packets marked (α). In each RTT: α ← (1 − g) × α + g × F (1), where F is the fraction of packets that were marked in the last window of data and 0 < g < 1 is the weight given to new samples against the past in the estimation of α • Adaptive window decrease: cwnd ← cwnd × (1 − α/2) (2)
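A minimal sketch of the logic above (illustrative, not the Windows-stack implementation; congestion avoidance is simplified to one-packet-per-RTT growth, and the names are made up):

```python
class DctcpSender:
    """Sender-side DCTCP state, updated once per window of data."""

    def __init__(self, g=1 / 16):
        self.g = g           # 0 < g < 1: weight given to new samples
        self.alpha = 0.0     # estimate of the fraction of marked packets
        self.cwnd = 10.0     # congestion window, in packets

    def on_window_acked(self, acked_pkts, marked_pkts):
        F = marked_pkts / acked_pkts if acked_pkts else 0.0
        self.alpha = (1 - self.g) * self.alpha + self.g * F
        if marked_pkts > 0:
            # cut in proportion to the extent of congestion, not its presence
            self.cwnd *= (1 - self.alpha / 2)
        else:
            self.cwnd += 1   # otherwise grow as in standard TCP


def switch_should_mark(queue_len_pkts, K):
    """Switch side: set CE based on the instantaneous queue length."""
    return queue_len_pkts > K
```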

  29. DCTCP in Action • (Figure: queue length in KBytes over time.) • Setup: Windows 7, Broadcom 1 Gbps switch • Scenario: 2 long-lived flows, K = 30 KB

  30. Introduction • Communications in Data Centers • The DCTCP Algorithm • Analysis and Experiments • Conclusion

  31. Why it Works • High Burst Tolerance • Large buffer headroom → bursts fit. • Aggressive marking → sources react before packets are dropped. • Low Latency • Small buffer occupancies → low queuing delay. • High Throughput • ECN averaging → smooth rate adjustments, low cwnd variance.

  32. Analysis • (Figure: window size of a single flow over time, a sawtooth oscillating between (W* + 1)(1 − α/2) and W* + 1; the packets sent in the RTT after the window reaches W* are marked.)

  33. Analysis • We are interested in computing the following quantities: • The maximum queue size (Qmax) • The amplitude of queue oscillations (A) • The period of oscillations (TC)

  34. Analysis • Consider N infinitely long-lived flows with identical round-trip times RTT, sharing a single bottleneck link of capacity C. We further assume that the N flows are synchronized. • The queue size at time t is given by Q(t) = N × W(t) − C × RTT (3), where W(t) is the window size of a single source. • To compute the fraction of marked packets α, let S(W1, W2) denote the number of packets sent by the sender while its window size increases from W1 to W2 > W1. Since this takes W2 − W1 round-trip times, during which the average window size is (W1 + W2)/2: S(W1, W2) = (W2² − W1²)/2 (4).

  35. Analysis • Let W* = (C × RTT + K)/N. This is the critical window size at which the queue size reaches K, and the switch starts marking packets with the CE codepoint. During the RTT it takes for the sender to react to these marks, its window size increases by one more packet, reaching W* + 1. Hence: α = S(W*, W* + 1) / S((W* + 1)(1 − α/2), W* + 1) (5). • Plugging (4) into (5) and rearranging, we get: α² (1 − α/4) = (2W* + 1)/(W* + 1)² ≈ 2/W* (6). • Assuming α is small, this can be simplified as α ≈ √(2/W*). We can now compute A, TC and Qmax.
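For completeness, the algebra behind (6), reconstructed by substituting (4) into (5):

```latex
\alpha
  = \frac{S(W^{*},\,W^{*}+1)}{S\big((W^{*}+1)(1-\alpha/2),\,W^{*}+1\big)}
  = \frac{(W^{*}+1)^{2}-(W^{*})^{2}}{(W^{*}+1)^{2}-(W^{*}+1)^{2}(1-\alpha/2)^{2}}
  = \frac{2W^{*}+1}{(W^{*}+1)^{2}\,\alpha\,(1-\alpha/4)},
```

which rearranges to α² (1 − α/4) = (2W* + 1)/(W* + 1)² ≈ 2/W*.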

  36. Analysis • Note that the amplitude of oscillation in the window size of a single flow, D, is given by: D = (W* + 1) − (W* + 1)(1 − α/2) = (W* + 1) α/2 (7). • Since there are N flows in total, the amplitude of the queue oscillations is A = N × D ≈ (1/2) √(2N(C × RTT + K)) (8), and the period of the oscillations is TC = D round-trip times (9). • Finally, using (3): Qmax = N(W* + 1) − C × RTT = K + N (10).

  37. How do we set the DCTCP parameters? • Marking Threshold (K). The minimum value of the queue-occupancy sawtooth is given by: Qmin = Qmax − A = K + N − (1/2) √(2N(C × RTT + K)). • Choose K so that this minimum is larger than zero, i.e. the queue does not underflow. This results in: K > (C × RTT)/7. • Estimation Gain (g). The estimation gain g must be chosen small enough to ensure that the exponential moving average (1) “spans” at least one congestion event. Since a congestion event occurs every TC round-trip times, we choose g such that: (1 − g)^TC > 1/2. • Plugging in (9) with the worst-case value N = 1 results in the following criterion: g < 1.386 / √(2(C × RTT + K)).
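A small worked example of these guidelines (a sketch; the link speed, RTT, and packet size below are illustrative values, not the testbed’s settings):

```python
import math


def dctcp_guidelines(link_gbps, rtt_us, pkt_bytes=1500):
    """Apply K > C*RTT/7 and g < 1.386 / sqrt(2*(C*RTT + K)), in packets."""
    bdp_pkts = (link_gbps * 1e9 / 8) * (rtt_us * 1e-6) / pkt_bytes  # C x RTT in packets
    k_min = bdp_pkts / 7                                            # marking-threshold lower bound
    g_max = 1.386 / math.sqrt(2 * (bdp_pkts + k_min))               # estimation-gain upper bound
    return k_min, g_max


# For example, a 10 Gbps link with a 100 us RTT (illustrative numbers):
k_min, g_max = dctcp_guidelines(10, 100)
print(f"choose K > {k_min:.1f} packets and g < {g_max:.3f}")
```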

  38. Evaluation • Implemented in the Windows stack. • Real hardware, 1 Gbps and 10 Gbps experiments • 90-server testbed • Broadcom Triumph: 48 1G ports, 4 MB shared memory • Cisco Cat4948: 48 1G ports, 16 MB shared memory • Broadcom Scorpion: 24 10G ports, 4 MB shared memory • Numerous benchmarks: Incast, Queue Buildup, Buffer Pressure

  39. Experiment setup • 45 servers with 1 Gbps links connected to a Triumph switch, plus a 10 Gbps external connection • K = 20 for the 1 Gbps links, K = 65 for the 10 Gbps link • Generate query and background traffic: 10 minutes, 200,000 background flows, 188,000 queries • Metric: flow completion time for queries and background flows. We use RTOmin = 10 ms for both TCP & DCTCP.

  40. Baseline: Background Flows (95th Percentile) & Query Flows (figures)

  41. Baseline: Background Flows (95th Percentile) & Query Flows • Low latency for short flows.

  42. Baseline: Background Flows (95th Percentile) & Query Flows • Low latency for short flows. • High burst tolerance for query flows.

  43. Scaled Background & Query (95th Percentile): 10x Background, 10x Query

  44. These results make three key points • First, if our data center used DCTCP it could handle 10X larger query responses and 10X larger background flows while performing better than it does with TCP today. • Second, while using deep buffered switches (without ECN) improves performance of query traffic, it makes performance of short transfers worse, due to queue build up. • Third, while RED improves performance of short transfers, it does not improve the performance of query traffic, due to queue length variability.

  45. Conclusions • DCTCP satisfies all our requirements for data center packet transport. • Handles bursts well • Keeps queuing delays low • Achieves high throughput • Features: • Very simple change to TCP and a single switch parameter K. • Based on ECN mechanisms already available in commodity switches.

  46. Thanks! Any questions?
