
Modeling TCP-Vegas under On/Off traffic

Fifth Workshop on MAthematical performance Modeling and Analysis (MAMA)

San Diego, June 10-11, 2003

Talk by Jörgen Olsén

Joint work with Adam Wierman and Takayuki Osogami



Goal of paper

  • Model multiple TCP-Vegas sources sending On/Off traffic

  • Predict TCP-network operating point in a bottleneck network

    • link utilization

    • per source throughput

    • per source goodput

    • loss rate

  • Only use network topology and statistical traffic characteristics as input



Main contributions

  • Model of TCP-Vegas On/Off traffic sources that includes packet loss and network delay

  • Accurate predictions of network operating point for TCP-Vegas On/Off sources from primary network parameters

  • Framework allows easy comparisons of different TCP flavors within the same network

  • Model validated with high accuracy against ns simulations



TCP-Vegas - mechanisms

Slow-start


  • If no packets are lost, the window size is doubled every second RTT until slow-start threshold is reached (Reno doubles every single RTT)

  • Vegas monitors delay and exits slow-start early if the delay is too high (Reno uses only the slow-start threshold)
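As a toy illustration of this doubling schedule (the function and numbers are ours, not from the paper; losses, the slow-start threshold, and the delay-based exit are ignored):

```python
def slow_start_window(rtts, w0=1, vegas=True):
    """Toy slow-start growth: Vegas doubles the congestion window
    every second RTT, Reno every RTT. Losses and Vegas' delay-based
    early exit are ignored in this sketch."""
    w = w0
    trace = [w]
    for rtt in range(1, rtts + 1):
        if not vegas or rtt % 2 == 0:   # Vegas doubles only every 2nd RTT
            w *= 2
        trace.append(w)
    return trace

print(slow_start_window(4, vegas=True))   # [1, 1, 2, 2, 4]
print(slow_start_window(4, vegas=False))  # [1, 2, 4, 8, 16]
```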



TCP-Vegas - mechanisms

Congestion avoidance


  • Vegas strives to avoid loss by adjusting the congestion window in response to observed network delay (Reno only uses loss to infer congestion)

  • Little delay → window increased by 1 packet

  • Moderate delay → window size maintained

  • Much delay → window decreased by 1 packet
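The three cases can be sketched as a window-update rule (a minimal sketch; α = 1 and β = 3 are common Vegas defaults, the function name is ours, and the backlog estimate is taken as given):

```python
def vegas_ca_update(window, n_backlogged, alpha=1, beta=3):
    """One congestion-avoidance step of TCP-Vegas (sketch): adjust the
    window by +/- 1 packet based on the estimated number of packets
    backlogged in the network."""
    if n_backlogged < alpha:    # little delay -> grow
        return window + 1
    if n_backlogged > beta:     # much delay -> shrink
        return window - 1
    return window               # moderate delay -> hold

print(vegas_ca_update(10, 0))  # 11
print(vegas_ca_update(10, 2))  # 10
print(vegas_ca_update(10, 5))  # 9
```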



TCP-Vegas - mechanisms

Fast Retransmit / Fast Recovery


  • If loss still occurs Vegas implements Reno-like Fast Retransmit / Fast Recovery to recover from “moderate” loss

  • Window size adjusted W → 3W/4 (Reno adjusts to W/2)



TCP-Vegas - mechanisms

Timeout


  • If fewer than three duplicate ACKs are received, Vegas times out

  • After a timeout the window is reset to its initial size and slow-start begins

  • Consecutive timeouts double the timeout length (exponential back-off)



High-level methodology

Aggregated load λ

Network

TCP Source

Loss and Delay

Separate the models of the TCP source and the network and allow interaction via feedback

  • Fundamental relationship:

  • Network loss and delay depend on load

  • TCP-Vegas adjusts load in response to observed loss and delay

  • Use a fixed-point method



TCP Transport level model

Assume the network model has delivered

  • PW( k ): probability of dropping k packets among W sent

  • P( Nb ≤ j ): probability of at most j packets in the bottleneck queue from each source

  • RTT: Propagation delay + Queuing delay

    Then,

  • Derive the TCP Markov chain and transition rates

  • Stationary solution to Markov chain determines throughput as function of packet loss rate and network delay
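The last two steps can be sketched on a toy two-state chain (rates, window sizes and RTT are invented for illustration; the paper's actual chain is far larger):

```python
def stationary_two_state(a, b):
    """Stationary distribution of a two-state CTMC with transition
    rate a (state 0 -> 1) and b (state 1 -> 0): pi = (b, a) / (a + b)."""
    return b / (a + b), a / (a + b)

# Toy chain: two "window" states w=1 and w=2 (invented, far smaller
# than the paper's chain).
a, b = 1.0, 3.0                 # transition rates (assumed)
pi = stationary_two_state(a, b)
rtt = 0.1                       # round-trip time in seconds (assumed)
# Throughput: window sizes weighted by stationary probabilities, per RTT.
throughput = (1 * pi[0] + 2 * pi[1]) / rtt
print(pi, throughput)           # (0.75, 0.25) 12.5
```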



    TCP Transport level model

    • Continuous time Markov chain model of the source in busy state

    • A state consists of:

    • Current window size

    • Slow start threshold (Wt)

    • Active or loss recovery phase?

    • Transition rate depends on the RTT and packet loss rate.



    TCP-Vegas Markov Chain

    Slow-start – transition rates

    Below delay threshold:

    Transition to intermediate state from window size w: Pw(0) P(Nb ≤ γ) / RTT

    From intermediate state to doubling: Pw(0) P(Nb ≤ γ) / RTT

    Above delay threshold (exit to C.A.):

    Transition to intermediate state from window size w: Pw(0) P(Nb > γ) / RTT

    From intermediate state to congestion avoidance with window w+1: Pw(0) P(Nb > γ) / RTT

    • Slow Start

    • If no packets are lost, the window size is doubled every second RTT.

    • RTT is estimated. If the delay at the network is more than γ packets we jump out of slow-start to avoid loss.




    TCP-Vegas Markov Chain

    Congestion Avoidance – transition rates

    If no packets are lost:

    Increase: P(Nb < α) Pw(0) / RTT

    Decrease: P(Nb > β) Pw(0) / RTT

    Stay: P(α ≤ Nb ≤ β) Pw(0) / RTT

    • Congestion Avoidance

    • If no packets are lost:

    • If the delay at the network is more than β packets we decrease our window by 1.

    • If the delay at the network is less than α packets we increase our window by 1.

    • Otherwise we maintain our window size.



    TCP-Vegas Markov Chain

    Fast retransmit – transition rates

    To fast retransmit:

    Pfr/fr(w) / RTT

    From fast retransmit back to congestion avoidance: 1 / RTT

    • Fast retransmit

    • If packets are lost but at least 3 duplicate ACKs are received, we fast retransmit.

    • Window dropped by 1/4 (W → 3W/4).

    • Fast retransmit probability Pfr/fr(w) quantified for different window sizes using the simulation-based study of Fall & Floyd (1996)



    TCP-Vegas Markov Chain

    • Timeout

    • If too few duplicate ACKs are received Vegas times out

    • Timeout length T = RTT + 4·RTTvar. Window set to initial window size, followed by slow-start

    • Timeout probability Pto(w) quantified for different window sizes using the simulation-based study of Fall & Floyd (1996)

    Timeout – transition rates

    To timeout:

    Pto(w) / RTT

    From timeout back to slow-start:

    P1(0) / T



    TCP-Vegas Markov Chain

    Exponential backoff – transition rates

    From exponential backoff state k to k+1:

    [1 − P1(0)] / (2^k T)

    From exponential backoff state k back to slow-start:

    P1(0) / (2^k T)

    • Exponential backoff

    • If the first resent packet after a timeout is lost, the next timeout doubles in length to 2T

    • This exponential backoff continues up to a maximum of 64T
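A small sketch of the resulting timeout sequence (function name and numbers ours):

```python
def backoff_timeouts(T, losses):
    """Timeout lengths under Vegas-style exponential back-off: each
    consecutive loss doubles the timeout, capped at 64*T."""
    timeouts = []
    t = T
    for _ in range(losses):
        timeouts.append(t)
        t = min(2 * t, 64 * T)   # exponential back-off with 64T cap
    return timeouts

print(backoff_timeouts(1.0, 8))  # [1.0, 2.0, 4.0, 8.0, 16.0, 32.0, 64.0, 64.0]
```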



    Modeling the sources

    Our TCP-Vegas source model mimics the structure of a network stack:

    • Application: web On/Off traffic

    • Transport: TCP mechanisms

    • Network

    • Link

    • Physical



    Application level model

    • On/Off – transition rates

    • Idle to busy:

    • 1 / E[ Toff ]

    • Active states (slow-start, congestion avoidance ) to idle:

    • 1 / E[ Ton ]

    • On/Off model

    • Exponential time in the idle state E[ Toff ]

    • Exponential time in busy state E[ Ton ]

    • Each busy period starts with window size of one packet. Slow-start threshold Wt=Wmax/2

    • Extensions (by increasing the state space):

    • Arbitrary distributions can be approximated by hyper-exponential

    • File size distributions modeled using N = max number of file size classes
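The hyper-exponential approximation can be sketched as follows (the two-branch mix and its parameters are invented for illustration):

```python
import random

def sample_hyperexp(probs, rates, rng=random):
    """Draw from a hyper-exponential distribution: choose branch i
    with probability probs[i], then sample Exp(rates[i])."""
    u, acc = rng.random(), 0.0
    for p, r in zip(probs, rates):
        acc += p
        if u <= acc:
            return rng.expovariate(r)
    return rng.expovariate(rates[-1])   # guard against rounding

random.seed(0)
# Invented two-branch mix: mostly short On periods, occasionally long ones.
samples = [sample_hyperexp([0.9, 0.1], [2.0, 0.1]) for _ in range(100000)]
mean = sum(samples) / len(samples)
print(round(mean, 2))   # analytic mean: 0.9/2.0 + 0.1/0.1 = 1.45
```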



    High-level methodology - reminder

    Aggregated load λ

    Network

    TCP Source

    Loss and Delay

    Separate the models of the TCP source and the network and allow interaction via feedback



    Modeling the network

    • We model the network using a single bottleneck link:

    • Queueing model to output the loss rate and delay distribution on the link given the throughput coming into the link.

    In the queueing model, the offered traffic comes from the TCP sources, the server speed is the speed of the bottleneck link, and the buffer size is that of the bottleneck link.



    Modeling the network

    • We will model the network using a single bottleneck link:

    • Queueing model to output the loss rate and delay distribution on the link given the throughput coming into the link.

    • But how should the bottleneck link be modeled?

    • M/M/1/B

    • Mr/M/1/B

    • M/D/1/B




    Fixed-point methodology

    Aggregated load λ

    TCP Source

    Network

    Loss and Delay

    • Stationary distribution of the Markov chain: πi

    • Per-source load: λi = Σ wi πi

    • Aggregate load from N sources: λ = Σ λi

    • Packet loss rate: p

    • Delay distribution: Dq

    λ* - average load, p* - average loss rate

    Find the fixed point (λ*, p*, Dq*):

    λ* = f(p*, Dq*)

    (p*, Dq*) = g(λ*)
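The two maps can be iterated numerically. Below is a sketch with an M/M/1/B loss formula as the network map g and an invented load-response curve standing in for the Markov-chain source map f (all parameters are illustrative, the delay distribution Dq is omitted, and damping is added for stability):

```python
def g(load, mu=100.0, B=20):
    """Network map (M/M/1/B): loss probability for offered load,
    p = (1 - rho) rho^B / (1 - rho^(B+1)) with rho = load/mu."""
    rho = load / mu
    if abs(rho - 1.0) < 1e-12:
        return 1.0 / (B + 1)
    return (1 - rho) * rho**B / (1 - rho**(B + 1))

def f(p, lam_max=120.0):
    """Toy source map: TCP-like aggregate load that backs off as the
    loss rate p grows (a stand-in for the Markov-chain model)."""
    return lam_max / (1.0 + 50.0 * p)

lam = 50.0               # initial guess for the aggregate load
for _ in range(200):     # damped iteration toward lambda* = f(g(lambda*))
    lam = 0.9 * lam + 0.1 * f(g(lam))
p = g(lam)
print(round(lam, 1), round(p, 4))
```

The damping factor keeps the iteration from oscillating, since the M/M/1/B loss curve is steep near full utilization.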



    Validation: 100 sources, On/Off=5/1.5 sec.



    Related work

    Single source model - renewal theory model for TCP

    • Samios and Vernon ”Modeling the throughput of TCP-Vegas”, Sigmetrics, June 2003.

      Fixed-point methods - Markov Chain model for TCP

    • Casetti and Meo ”A new approach to model the stationary behavior of TCP connections”, INFOCOM, March 2000.

    • Casetti and Meo ”An analytical framework for the performance evaluation of TCP Reno connections”, Computer Networks 37, 2001.

    • Wierman, Osogami, Olsén, ”A unified framework for modeling TCP-Vegas, TCP-SACK, and TCP-Reno”, Technical report, May 2003.



    Related work

    Fixed-point methods – square root of p-law for TCP and multiple bottlenecks

    • Gibbens et al. ”Fixed-point models for the end-to-end performance analysis of IP networks”, 13th ITC Special Seminar, Sep 2000.

    • Bu and Towsley ”Fixed point approximations for TCP behavior in an AQM network”, Sigmetrics, June 2001.

    • Firoiu, Yeom, Zhang, ”A framework for practical performance evaluation and traffic engineering in IP networks”, IEEE ICT, June 2001.



    Contributions

    A step forward in the modeling of TCP-Vegas

    • Predicts operating point for Vegas in a bottleneck network (most previous work on Vegas has considered single sources)

    • Allows modeling loss, together with Vegas' delay-sensitive slow-start and congestion-avoidance phases (few previous analyses have allowed loss)

    • On/Off traffic (Previous models have focused on bulk transfer)

      Showed the extensibility of the framework

    • Added two new TCP flavors: Vegas and SACK, and extended Reno

    • Showed the plug-and-play nature of both network and source models by analyzing M/M/1/B, M/D/1/B and Mr/M/1/B queuing models



    Extensions

    The Source Model is extensible

    • Other application models (arbitrary On/Off time distributions and arbitrary file sizes)

    • Other flavors of TCP (modify Markov Chain)

      The Network Model is extensible

    • Wireless, DiffServ,…

    • Multiple-bottleneck networks

    • Multiple types of heterogeneous TCP sources



    Questions?



    Backup slides



    Vegas delay estimation

    Estimated throughput: Actual = W / RTT (vs. Expected = W / BaseRTT)

    Number of back-logged packets: Nb = (Expected − Actual) · BaseRTT = W (1 − BaseRTT / RTT)

    Approximate the queuing delay by RTT − BaseRTT

    Every source tries to keep α ≤ Nb ≤ β packets backlogged in the network
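A minimal sketch of the estimator, assuming the standard Vegas quantities Expected = W/BaseRTT and Actual = W/RTT (function name ours):

```python
def vegas_backlog(window, base_rtt, rtt):
    """Vegas backlog estimate: Nb = (Expected - Actual) * BaseRTT,
    with Expected = W/BaseRTT and Actual = W/RTT, which simplifies
    to W * (1 - BaseRTT/RTT)."""
    expected = window / base_rtt    # throughput if queues were empty
    actual = window / rtt           # throughput actually observed
    return (expected - actual) * base_rtt

print(vegas_backlog(10, 0.1, 0.125))  # 2.0 packets backlogged
```

Congestion avoidance then steers this estimate into the [α, β] band.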



    Network performance metrics

    M/M/1/B queuing model (ρ = λ/μ):

    Queue length distribution: πj = (1 − ρ) ρ^j / (1 − ρ^(B+1)), j = 0, …, B

    Packet loss rate: p = πB
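A sketch of the standard M/M/1/B formulas this bottleneck model relies on (function name and numbers ours):

```python
def mm1b_metrics(lam, mu, B):
    """Stationary distribution, loss rate and mean queue length of an
    M/M/1/B queue with arrival rate lam, service rate mu, buffer B."""
    rho = lam / mu
    weights = [rho**j for j in range(B + 1)]
    total = sum(weights)
    pi = [w / total for w in weights]   # pi_j = rho^j / sum_k rho^k
    loss = pi[B]                        # arrival sees a full buffer
    mean_queue = sum(j * p for j, p in zip(range(B + 1), pi))
    return pi, loss, mean_queue

pi, loss, nq = mm1b_metrics(lam=80.0, mu=100.0, B=20)
print(round(loss, 4), round(nq, 2))    # 0.0023 3.8
```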

