Data Center TCP (DCTCP)

Mohammad Alizadeh, Albert Greenberg, David A. Maltz, Jitendra Padhye, Parveen Patel, Balaji Prabhakar, Sudipta Sengupta, Murari Sridharan

Presented by Shaddi Hasan

Microsoft Research / Stanford University

Case Study: Microsoft Bing
  • Measurements from a 6,000-server production cluster
  • Instrumentation passively collects logs
    • Application-level
    • Socket-level
    • Selected packet-level
  • More than 150TB of compressed data over a month
Partition/Aggregate Application Structure

[Figure: fan-out tree for a query ("Picasso"): a Top-Level Aggregator (TLA, deadline = 250 ms) partitions the query across Mid-Level Aggregators (MLAs, deadline = 50 ms), which partition it across Worker Nodes (deadline = 10 ms); partial results (Picasso quotes) are aggregated back up the tree.]

  • Time is money
    • Strict deadlines (SLAs)
  • Missed deadline
    • Lower quality result

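The "lower quality result" trade-off is easy to see in code. Below is a minimal sketch of one fan-out/gather level in Python; the names (query_worker, aggregate, WORKER_DEADLINE) are hypothetical, and a real aggregator would use async RPCs rather than threads.

```python
# Minimal sketch of one fan-out/gather level with a leaf deadline.
# All names (query_worker, aggregate, WORKER_DEADLINE) are hypothetical;
# a real aggregator would use async RPCs, not threads.
from concurrent.futures import ThreadPoolExecutor, wait

WORKER_DEADLINE = 0.010  # 10 ms leaf deadline, as on the slide

def query_worker(worker_id: int, query: str) -> str:
    """Stand-in for a worker node computing a partial result."""
    return f"partial result for {query!r} from worker {worker_id}"

def aggregate(query: str, n_workers: int = 4) -> list:
    """Fan the query out, gather whatever arrives before the deadline."""
    pool = ThreadPoolExecutor(max_workers=n_workers)
    futures = [pool.submit(query_worker, i, query) for i in range(n_workers)]
    done, _not_done = wait(futures, timeout=WORKER_DEADLINE)
    pool.shutdown(wait=False)  # don't block on stragglers
    # Late answers are dropped: the response ships on time, at lower quality.
    return [f.result() for f in done]

print(aggregate("Picasso"))
```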

Workloads
  • Partition/Aggregate (Query): delay-sensitive
  • Short messages [50KB-1MB] (coordination, control state): delay-sensitive
  • Large flows [1MB-50MB] (data update): throughput-sensitive

Impairments
  • Incast
  • Queue Buildup
  • Buffer Pressure
Incast

  • Synchronized mice collide.
    • Caused by Partition/Aggregate.

[Figure: Workers 1-4 send synchronized responses through a single switch port to the Aggregator; the buffer overflows and the dropped flow stalls for a full TCP timeout (RTOmin = 300 ms).]
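A back-of-the-envelope check makes the damage concrete; everything below except RTOmin and the worker deadline is an assumed, illustrative number.

```python
# Back-of-the-envelope incast check. RTOmin and the deadline come from the
# slides; the worker count, response size, and buffer size are assumptions.
N_WORKERS = 40           # synchronized responses in the same RTT (assumed)
RESPONSE_BYTES = 20_000  # 20 KB partial result per worker (assumed)
PORT_BUFFER = 300_000    # ~300 KB shallow shared port buffer (assumed)
RTO_MIN_MS = 300         # slide: RTOmin = 300 ms
DEADLINE_MS = 10         # slide: worker deadline = 10 ms

burst = N_WORKERS * RESPONSE_BYTES
if burst > PORT_BUFFER:
    print(f"burst of {burst} B overflows the {PORT_BUFFER} B buffer -> drops")
    print(f"victim retransmits after {RTO_MIN_MS} ms, "
          f"{RTO_MIN_MS // DEADLINE_MS}x the {DEADLINE_MS} ms deadline")
```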

Incast Really Happens
  • Requests are jittered over a 10 ms window.
  • Jittering was switched off around 8:30 am.

[Figure: MLA query completion time (ms) over the day; the 99.9th percentile is tracked.]

Jittering trades off the median against the high percentiles.

Queue Buildup

  • Big flows build up queues.
    • Increased latency for short flows.

[Figure: Senders 1 and 2 share a switch port to the Receiver; the long flow's queue delays the short flow's packets.]

  • Measurements in Bing cluster
    • For 90% of packets: RTT < 1 ms
    • For 10% of packets: 1 ms < RTT < 15 ms
Data Center Transport Requirements
  • 1. High Burst Tolerance
    • Incast due to Partition/Aggregate is common.
  • 2. Low Latency
    • Short flows, queries
  • 3. High Throughput
    • Continuous data updates, large file transfers

The challenge is to achieve these three together.

Tension Between Requirements

High Burst Tolerance / Low Latency / High Throughput

  • Deep buffers: queuing delays increase latency.
  • Shallow buffers: bad for bursts & throughput.
  • AQM (RED): the average-queue signal is not fast enough for incast.
  • Reduced RTOmin (SIGCOMM '09): doesn't help latency.

Objective: Low Queue Occupancy & High Throughput → DCTCP
Nugget time
  • TCP causes problems because it tries to fill buffers.
    • Congestion is signaled by loss → buffer overflow.
  • Use ECN to signal congestion, not loss.
    • Keeps buffers short and reduces retransmissions.
Review: The TCP/ECN Control Loop

ECN = Explicit Congestion Notification

[Figure: Senders 1 and 2 transmit through a congested switch, which sets a 1-bit ECN mark; the Receiver echoes the mark back to the senders.]
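As a toy illustration of the loop above (not yet DCTCP), here is a Python sketch of RFC 3168-style behavior, where any echoed mark triggers a full multiplicative cut; the threshold and the AIMD arithmetic are simplified assumptions.

```python
# Toy sketch of the one-bit loop above (RFC 3168 flavor, not DCTCP):
# any echoed mark triggers a full multiplicative cut. The threshold and
# AIMD arithmetic are simplified assumptions.

def switch_marks(queue_len: int, threshold: int) -> bool:
    """Switch side: set the 1-bit CE mark when the queue is congested."""
    return queue_len > threshold

def sender_reacts(cwnd: float, mark_echoed: bool) -> float:
    """Sender side: halve cwnd on an echoed mark, else grow additively."""
    return cwnd / 2 if mark_echoed else cwnd + 1.0

cwnd = 10.0
for queue_len in (5, 40, 12, 60):  # hypothetical queue samples, one per RTT
    cwnd = sender_reacts(cwnd, switch_marks(queue_len, threshold=30))
    print(f"queue={queue_len:2d}  cwnd={cwnd:5.2f}")
```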

Small Queues & TCP Throughput: The Buffer Sizing Story
  • Bandwidth-delay product rule of thumb:
    • A single flow needs B = C × RTT of buffering for 100% throughput.
  • Appenzeller rule of thumb (SIGCOMM '04):
    • Large # of flows: B = (C × RTT)/√N is enough.
  • Can't rely on the stat-mux benefit in the DC.
    • Measurements show typically 1-2 big flows at each server, at most 4.

[Figure: cwnd sawtooth over a buffer of size B; throughput holds at 100% while B ≥ C × RTT.]

Real Rule of Thumb:

Low Variance in Sending Rate → Small Buffers Suffice
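A quick worked example of the two rules of thumb, with an assumed 10 Gbps link and 100 µs datacenter RTT:

```python
# Worked example of the two buffer-sizing rules; the 10 Gbps link and
# 100 us RTT are assumed, illustrative numbers.
C = 10e9 / 8    # link capacity in bytes/s (10 Gbps, assumed)
RTT = 100e-6    # 100 microsecond datacenter RTT (assumed)

bdp = C * RTT   # classic rule: B = C x RTT
print(f"BDP rule:            {bdp / 1e3:6.1f} KB")

for n in (4, 10_000):   # a DC server (few big flows) vs. a WAN core (many)
    b = bdp / n ** 0.5  # Appenzeller rule: B = C x RTT / sqrt(N)
    print(f"Appenzeller, N={n:>6}: {b / 1e3:6.1f} KB")
# With only ~4 big flows per server, sqrt(N) buys almost nothing, so the
# DC cannot rely on statistical multiplexing to shrink buffers.
```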

Two Key Ideas
  • React in proportion to the extent of congestion, not its presence.
    • Reduces variance in sending rates, lowering queuing requirements.
  • Mark based on instantaneous queue length.
    • Fast feedback to better deal with bursts.
Data Center TCP Algorithm

[Figure: switch queue of total size B with marking threshold K; packets arriving while Queue Length > K are marked, others are not.]

Switch side:

  • Mark packets when Queue Length > K.

Sender side:

  • Maintain a running average of the fraction of packets marked (α).
  • In each RTT: α ← (1 − g) × α + g × F, where F is the fraction of packets marked in the last RTT.
  • Adaptive window decrease: cwnd ← cwnd × (1 − α/2).
    • Note: the decrease factor is between 1 and 2.
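A minimal sketch of the sender-side update, assuming the rules above; the paper reports using g = 1/16, the traffic trace is invented, and TCP's additive increase on unmarked RTTs is omitted for brevity.

```python
# Minimal sketch of the sender-side DCTCP update above. The paper reports
# using g = 1/16; the traffic trace is invented, and TCP's additive
# increase on unmarked RTTs is omitted for brevity.
G = 1 / 16      # EWMA gain g

alpha = 0.0     # running estimate of the fraction of marked packets
cwnd = 100.0    # congestion window, in packets

def on_rtt_end(marked: int, total: int) -> None:
    """Apply DCTCP's once-per-RTT reaction to the echoed ECN marks."""
    global alpha, cwnd
    f = marked / total                  # F: fraction marked this RTT
    alpha = (1 - G) * alpha + G * f     # alpha <- (1 - g)*alpha + g*F
    cwnd *= 1 - alpha / 2               # cut in proportion to congestion

for marked, total in [(0, 100), (10, 100), (100, 100)]:
    on_rtt_end(marked, total)
    print(f"alpha={alpha:.4f}  cwnd={cwnd:6.2f}")
```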
DCTCP in Action

[Figure: switch queue length (KBytes) over time, TCP vs. DCTCP.]

Setup: Windows 7, Broadcom 1 Gbps switch

Scenario: 2 long-lived flows, K = 30 KB

Why it Works
  • 1. High Burst Tolerance
    • Large buffer headroom → bursts fit.
    • Aggressive marking → sources react before packets are dropped.
  • 2. Low Latency
    • Small buffer occupancies → low queuing delay.
  • 3. High Throughput
    • ECN averaging → smooth rate adjustments, low variance.
Baseline

[Figure: completion times for background flows and query flows, TCP vs. DCTCP.]

  • Low latency for short flows.
  • High throughput for long flows.
  • High burst tolerance for query flows.
Conclusions
  • DCTCP satisfies all our requirements for Data Center packet transport.
    • Handles bursts well
    • Keeps queuing delays low
    • Achieves high throughput
  • Features:
    • Very simple change to TCP and a single switch parameter.
    • Based on mechanisms already available in silicon.
Thoughts
  • Convergence times?
  • Actually requires persistent connections
    • Section 4.2.2: “All communication is over long-lived connections, so there is no three-way handshake for each request.”
  • Conventional TCP will blow DCTCP away if they share a buffer.
Thoughts
  • This is basically the right answer for handling incast and meeting deadlines
    • Use ECN to keep queues short, rather than filling up your buffers and waiting for loss.
    • D3 incorporates explicit knowledge of deadlines, but this complicates API and necessitates more changes.
    • Complementary to MPTCP (or any other TCP for that matter)
Thoughts
  • “The key contribution here is […] the act of deriving multi-bit feedback from the information present in the single-bit sequence of marks.”
    • Calculating a moving average is not a contribution.
    • Using ECN plus a clever scheme for adjusting cwnd is!
Analysis
  • How low can DCTCP maintain queues without loss of throughput?
  • How do we set the DCTCP parameters?
  • Need to quantify queue size oscillations (stability).

Result: 85% less buffer than TCP.
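As a hedged worked example of how the analysis translates into a parameter setting: the paper's stability analysis suggests keeping the marking threshold K above (C × RTT)/7; the RTT and MTU below are assumptions.

```python
# Hedged sketch of the guideline the analysis yields: set the marking
# threshold K above (C x RTT)/7 for full throughput (as the paper suggests).
# The 100 us RTT and 1500 B packet size are assumptions.
PKT = 1500  # bytes per packet (assumed MTU)

for gbps in (1, 10):
    c = gbps * 1e9 / 8        # capacity in bytes/s
    bdp = c * 100e-6          # C x RTT with an assumed 100 us RTT
    k_min = bdp / 7           # guideline: K > (C x RTT)/7
    print(f"{gbps:>2} Gbps: K > {k_min / 1e3:4.1f} KB (~{k_min / PKT:.0f} pkts)")
# At 1 Gbps this bound is far below the K = 30 KB used in the "DCTCP in
# Action" experiment, consistent with that choice of K.
```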