End-2-End QoS Internet

Presentation Transcript


  1. End-2-End QoS Internet Presented by: Zvi Rosberg 3 Dec, 2007 Caltech Seminar

  2. What is this talk about • The shortcomings of QoS support in the current Internet • A novel holistic Rate Management Protocol • A new scalable QoS guarantee architecture • The theoretical foundation of our architecture • How TCP window flow control may adapt in the presence of our network-layer RMP • Another E-2-E prioritized Delay/Loss RMP

  3. Motivation • Shortcomings of the current QoS architecture • Besides being immature and requiring horrendous configuration, current QoS also has… • Fundamental inhibitors: • Poor scalability when real QoS guarantees are needed (IntServ and Cisco’s “IntServ over DiffServ”) • No bandwidth or E2E delay guarantee when using a scalable configuration of DiffServ

  4. So what are we doing about it? • We are implementing a prototype on Network Processors (NPU) addressing the current QoS issues - The architecture is • Scalable, with bandwidth, loss and E2E delay guarantees • Adaptive - so configuration is minimized • Allocates the residual bandwidth fairly • The NPUs execute a new IP-layer protocol that routers should run in the future

  5. The Architecture

  6. The Key Elements of our Solution • A novel Rate Management Protocol (RMP) for multi-service flows • Runs in edge and core routers at the IP layer • Provides services to management functions in the edge routers

  7. Architectural Components • Control plane: scalable bandwidth reservation protocol, admission control, QoS fair rate calculation (RMP), performance probing, link penalties gathering • Data plane: classification/marking at edge routers, rate policing in the edge, priority packet scheduling in routers

  8. Theoretical Foundation

  9. Our Theoretical Contribution • Extending fairness beyond “best-effort” service • Extending the primal-dual iterative distributed algorithm (used by Kelly) for rate allocation with • Rate and delay constraints • Priority packet scheduling • Revisiting TCP flow control when rate is controlled by the network layer • An aside question is: Why priority scheduling? • It improves link utilization – delay-sensitive packets do not have to wait behind delay-insensitive packets, so more delay-insensitive traffic can be carried

  10. Fairness with Best-effort • (Weighted) proportional fairness is equivalent to the solution of the optimization problem below, as long as the rate region X is convex
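
The optimization itself appeared only as a figure on the original slide. A standard statement of Kelly's weighted proportional-fairness problem, in the usual (assumed) notation of flow rates x, weights w, routing matrix R and link capacities C, reads:

    maximize    \sum_i w_i \log x_i
    subject to  R x \le C,   x \in X

Its unique maximizer is the weighted proportionally fair rate allocation whenever the feasible set X is convex.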

  11. Fairness with QoS • A natural way to extend best-effort fairness is to add the QoS requirements to the constraints and … • … optimize over the residual link capacities

  12. Fairness with QoS (Cont.) • Constraints: flow rates of priorities 1, 2, …, m traversing each link; maximum loss and delay constraints; minimum bandwidth constraints • Since X is convex – proportional fairness follows
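
The constraint set was shown graphically on the slide; one plausible formalization consistent with the items listed (notation assumed: p(i) is the priority class of flow i, b_i its minimum bandwidth, c_n the capacity of link n, and \beta_m the per-priority utilization upper bound of class m) is:

    maximize    \sum_i w_i \log x_i
    subject to  \sum_{i : n \in route(i), p(i) \le m} x_i  \le  \beta_m c_n    for every link n and class m
                x_i  \ge  b_i                                                  for every flow i

with the delay/loss requirements folded into the bounds \beta_m, as the next slide explains.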

  13. Fairness with QoS (Cont.) • The delay/loss constraints are NOT EXPLICIT – they are attained by an outer-loop control of the per-priority link utilization upper bounds

  14. Primal-dual iterative distributed algorithm extension • The fair residual rates are computed iteratively after a reduction to residual link capacities … • … which is made possible by our scalable reservation protocol • The policed rate of a flow is then its reserved minimum rate plus its fair residual rate

  15. The Rate Management Protocol (RMP) • The protocol works with the following quantities: the route penalty of flow i; the link capacity reduced by the utilization upper bound per priority class m, adaptively set from sources based on RTT and loss probing; and the total rate of flows of priorities 1, …, m on link n over the unreserved link capacity • Each router output link n runs, per priority class m, a penalty update at the IP layer based on these quantities (a sketch follows below)
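
The update equations on this slide were images; the sketch below is a Kelly-style primal-dual iteration consistent with the quantities listed above. All names (beta, resid_cap, kappa, the route/priority encoding) are assumed for illustration and are not the authors' actual protocol fields.

    # flows[i] = (w, p, route): weight, priority class (1 = highest) and list of links of flow i
    # resid_cap[n]: unreserved (residual) capacity of output link n
    # beta[n][m]:   adaptive utilization upper bound for cumulative priorities 1..m on link n
    def rmp_iteration(flows, resid_cap, beta, num_classes, kappa=0.01, steps=200):
        # one penalty (price) per "virtual link" (output link n, priority class m)
        prices = {(n, m): 0.01 for n in resid_cap for m in range(1, num_classes + 1)}
        for _ in range(steps):
            # primal step: each source sets its fair residual rate from its route penalty
            rates = {}
            for i, (w, p, route) in flows.items():
                penalty = sum(prices[(n, m)] for n in route
                              for m in range(p, num_classes + 1))
                rates[i] = w / max(penalty, 1e-9)
            # dual step: each output link / priority class updates its penalty
            for (n, m) in prices:
                load = sum(r for i, r in rates.items()
                           if n in flows[i][2] and flows[i][1] <= m)
                prices[(n, m)] = max(0.0, prices[(n, m)]
                                     + kappa * (load - beta[n][m] * resid_cap[n]))
        return rates

The max(0, ·) keeps the penalties non-negative; at the fixed point the rates are the fair residual rates of the previous slide.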

  16. Stability Proof • To prove stability with fixed utilization upper bounds: • We redefine the routing matrix to include one virtual link for each priority class • Flows with priority m use all virtual links of priority classes m, m+1, … along their original path • The redefined problem is a single-class problem equivalent to the priority problem • After this reduction, stability follows from Kelly’s results
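
In matrix terms, the reduction can be written as follows (assumed notation, mirroring the formalization given earlier): the redefined routing matrix R' has one row per (link n, class m) pair, with

    R'_{(n,m), i} = 1   iff link n is on flow i's route and p(i) \le m,
    c'_{(n,m)}    = \beta_m c_n,

so each row is an ordinary single-class capacity constraint and Kelly's stability argument applies unchanged.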

  17. Stability Proof (cont.) • To prove stability with adaptive utilization upper bounds: • “Unhappy” flow sources (those experiencing excessive delay/losses) signal it in their RMP packets • Congested links decrease their respective utilization upper bounds • To prove convergence, we only allow the bounds to decrease • In practice, convergence is also observed when the bounds are increased whenever flow sources are “too happy”
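
A minimal sketch of the adaptation just described; the trigger and step sizes are assumptions, not values from the slides.

    def adapt_utilization_bound(beta_nm, unhappy_signals, step=0.01, floor=0.05):
        # congested links lower their utilization upper bound when sources sharing
        # them report excessive delay/loss in their RMP packets
        if unhappy_signals > 0:
            return max(floor, beta_nm - step)
        # the convergence proof only allows decreases; in practice the bound may
        # also be nudged back up here when all sources report ample slack
        return beta_nm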

  18. TCP Flow Control - Revisited

  19. TCP Flow Control Re-evaluation • Once RMP is in place, TCP flow control needs a re-evaluation • The RMP of the core network will take care of fair rate calculation and congestion avoidance • RMP will also signal end applications about their current target rates, and then… • TCP could be extended beyond “best-effort” • Given a target rate from RMP, TCP can achieve it with a window update of the form sketched below
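
The window-update rule itself was a formula on the slide. A plausible form, assuming the usual window ≈ rate × RTT relation (the gain gamma and all names are illustrative assumptions):

    def window_update(w, target_rate, rtt, gamma=0.5):
        # smoothly move the congestion window toward the bandwidth-delay product
        # implied by the target rate signalled by RMP
        return (1 - gamma) * w + gamma * target_rate * rtt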

  20. Performance Evaluation • We showed that, assuming linear scalability, the window flow control converges to a unique stable state under totally asynchronous updates • Linear scalability: the total number of bytes queued in each link scales up linearly with the window sizes • It is an average property of the flows crossing a given link, rather than a per-flow property • Plausible for large networks • Stability was also verified by simulation • In the fluid model of [Mo & Walrand] used to relate rates and windows, linear scalability is implied
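
One way to state the linear-scalability assumption formally (notation assumed): if Q_n denotes the backlog of link n and w_i the window of flow i, then

    Q_n  \approx  \kappa_n \sum_{i : n \in route(i)} w_i    for some link constant \kappa_n,

i.e., the backlog grows linearly in the window sizes on average over the flows crossing the link, not necessarily flow by flow.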

  21. TCP Flow Control Comparison • Topology: Epoch ISP Network, USA • # core links: 74 (37 full-duplex) • # access links: 512 • # flows: 512 • Core link capacity: 1 Gb/s • Access link capacity: 0.1 Gb/s

  22. Simulation Method • 2-way TCP flows using fixed shortest paths • ACKs are either piggybacked or pure (statistically) • RTO is estimated according to RFC 2988 (Jacobson’s algorithm) • Duplicate ACKs are triggered when packets arrive out of order • All TCP flow controls halve their window size upon 3 duplicate ACKs and reduce it to 2 MSS upon RTO • Otherwise, FAST TCP adapts its window size according to its own update rule (see the sketch below)
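
For reference, the FAST TCP window update mentioned in the last bullet has the well-known periodic form (gamma is the smoothing gain and alpha the target per-flow backlog in packets; the defaults here are only typical values):

    def fast_tcp_window(w, base_rtt, rtt, alpha=200.0, gamma=0.5):
        # FAST TCP: scale the window by baseRTT/RTT and add the target backlog alpha,
        # smoothed by gamma and capped at doubling per update
        return min(2 * w, (1 - gamma) * w + gamma * ((base_rtt / rtt) * w + alpha))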

  23. Simulation Method (cont.) • Simulation time is about 3.5 real operational minutes • In every step, window packets are processed in one batch • First, they are arbitrarily distributed between the forward and backward paths • Then, the packets that can “fill” the links are put in transit • The rest are distributed among the bottleneck links in proportion to the bottleneck queueing time • Asynchronous operation is modelled by i.i.d. Bernoulli random variables determining which of the flows receive an ACK

  24. TCP Flow Control Comparison • Our TCP Flow Control (windows of 9 typical flows)

  25. TCP Flow Control Comparison Fast TCP Flow Control

  26. TCP Flow Control Comparison TCP Vegas Flow Control

  27. TCP Flow Control Comparison TCP Reno Flow Control (“Sawtooth”)

  28. Comparison Summary

  29. Flow Control with QoS Support • 3 x 256 2-way TCP connections with 3 priorities • Utilization upper bounds: (0.1, 0.75, 1.0) • Avg total fair rate: 164.30 packets (compared with 492) • Avg Fairness deviation: 5.5%

  30. Simulation with Link Utilization Adaptation • When the utilization upper bounds are adapted based on the RTT and losses (i.e., RTT > RTO) experienced by the flow sources, all QoS requirements are met

  31. Another E2E Delay-Loss Control

  32. Rate Time Derivative in the Fluid Model • We study the following prioritized, combined Rate-Delay control problem, defined in terms of: • the clearance time of bits from flows with priority higher than or equal to p in link l at time t • the delay prices for flow i at time t

  33. Delay Time Derivative in the Fluid Model • the total rate of flows with priorities less than or equal to p in link l at time t • The rate control is a gradient search on the resulting objective (see the sketch after the next slide)

  34. Adapting the Delay Prices • The delay price is learned by the flow source from the RMP packets • … and is adapted if the flow’s delay requirement is violated • Adaptation signals must also be disseminated to the other relevant sources • … which is done, again, with RMP signalling packets
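
A minimal sketch of the gradient-search rate control with delay prices described on the last two slides; the log-utility form, names and step sizes are assumptions for illustration.

    def rate_gradient_step(x_i, w_i, route_delay_prices, kappa=0.01):
        # gradient step: marginal utility of a log-utility flow minus the sum of
        # the delay prices its source learned from RMP packets along the route
        grad = w_i / x_i - sum(route_delay_prices)
        return max(1e-9, x_i + kappa * grad)

    def adapt_delay_price(q_i, measured_delay, target_delay, step=0.001):
        # the source raises its delay price when its e2e delay requirement is
        # violated and relaxes it otherwise; changes are disseminated to the other
        # relevant sources in RMP signalling packets
        return max(0.0, q_i + step * (measured_delay - target_delay))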

  35. Result Summary • Synchronous Fluid Model: • If the routing matrix is full-rank, then for any e2e delay requirement there is a unique equilibrium point • The adaptive rate control converges to the stable point from any initial condition • Time-Lag Fluid Model (Rate and Delay effects): • For the single-bottleneck case, global stability holds true only if the time lag is limited (e.g., ~650 ms) • Emulation: it holds true for multiple bottlenecks as well

  36. Thank You
