
Edge-based Traffic Management Building Blocks



Presentation Transcript


1. Edge-based Traffic Management Building Blocks
David Harrison, Shiv Kalyanaraman, Sthanu Ramakrishnan
Rensselaer Polytechnic Institute
shivkuma@ecse.rpi.edu
http://www.ecse.rpi.edu/Homepages/shivkuma

2. Overview
• Private Networks vs Public Networks
• QoS vs Congestion Control: the middle ground?
• Overlay Bandwidth Services:
  • Key: deployment advantages
  • A closed-loop QoS building block
  • Services: better best-effort services, assured services, quasi-leased lines…

3. Motivation: Site-to-Site VPN Over a Multi-Provider Internetwork
[Figure: customer sites connected across a multi-provider internetwork via international links]

4. Private Networks over Public Networks
• Can we reduce (not eliminate!) coordination requirements for QoS deployment? (Focus of this talk!)
  • Tolerate heterogeneity
  • Incremental deployment
  • Faster deployment cycles
  • Dynamically provisioned services
• Complexity issues:
  • Design: int-serv, RSVP, RTP…
  • Implementation: diff-serv, CSFQ…
  • Upgrades, configuration, management

5. Problem: Inter-domain QoS Deployment Complexity
[Figure: workstations and routers connected across an internetwork or WAN]
• Today's solutions require upgrades at multiple potential bottlenecks, plus complex multi-provider coordination
• Solutions:
  • Enable incrementally deployable edge-based QoS
  • New closed-loop building blocks for efficiency
  • Reduce (not eliminate!) coordination requirements; no upgrades!
• Tradeoffs: limited service spectrum

6. Our Model: Edge-based Building Blocks
[Figure: ingress (I) and egress (E) edges around a logical FIFO interior, coordinated by a policy/bandwidth broker. New: closed-loop control!]
• Model inspired by diff-serv; aim: further interior simplification

7. Closed-loop Building Block: Take-Home Ideas
[Figure: priority/WFQ scheduling at the bottleneck vs. a FIFO bottleneck controlled by edge-to-edge loops]
• Scheduler: differentiates service on a packet-by-packet basis
• Loops: differentiate service on an RTT-by-RTT basis using edge-based policy configuration

8. Queuing Behavior: Without Closed-loop Control
[Figure: queue builds at the bottleneck; end systems adapt end-to-end]

9. Queuing Behavior: With Overlay Edge-Edge Control
[Figure: queueing shifted to the edge devices]
• Results: efficient core operation, rate adaptation in O(RTT)

10. Edge-based Performance Customization
• Key idea: bottlenecks consolidated at the edges, closer to the application => incorporate application-awareness in QoS
• E.g.: L4-L7 aware buffer management
• For TCP traffic: dramatically reduce timeouts
  • Do not drop retransmissions or small-window packets (a sketch of such a policy follows below)
• Potential: application-level QoS services, active networking, edge-based diff-serv PHB emulation, etc.
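A minimal sketch of what an L4-aware drop preference at an edge buffer could look like. Assumptions (not from the slides): the edge parses TCP headers, a packet at or below the highest sequence number already seen for its flow is treated as a likely retransmission, and flows with few queued packets stand in for "small window" flows; all names and the threshold are illustrative.

```python
SMALL_WINDOW_PKTS = 3  # assumed threshold for protecting small-window flows


class EdgeBuffer:
    def __init__(self, capacity_pkts):
        self.capacity = capacity_pkts
        self.queue = []
        self.highest_seq = {}   # flow_id -> highest TCP sequence number seen
        self.queued = {}        # flow_id -> packets currently queued

    def _protected(self, pkt):
        """Protect likely retransmissions and small-window flows from drops."""
        retransmission = pkt.seq <= self.highest_seq.get(pkt.flow_id, -1)
        small_window = self.queued.get(pkt.flow_id, 0) < SMALL_WINDOW_PKTS
        return retransmission or small_window

    def enqueue(self, pkt):
        # Drop only unprotected packets on overflow; protected packets are
        # still admitted (the buffer briefly exceeds its nominal limit).
        if len(self.queue) >= self.capacity and not self._protected(pkt):
            return False
        self.queue.append(pkt)
        self.highest_seq[pkt.flow_id] = max(
            self.highest_seq.get(pkt.flow_id, -1), pkt.seq)
        self.queued[pkt.flow_id] = self.queued.get(pkt.flow_id, 0) + 1
        return True
```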

11. Closed-loop Building Block Requirements
#1. Edge-to-edge overlay operation
#2. Robust stability
#3. Bounded-buffer/zero-loss operation
#4. Minimal configuration/upgrades + incremental deployment
#5. Rate-based operation: for bandwidth services
• This combination is not available in any existing congestion control scheme…
• Related work: NETBLT, TCP Vegas, Mo/Walrand, ATM rate/credit approaches

12. Overlay Control: Concepts
[Figure: flows 1…n entering and leaving a single shared bottleneck]
• Load: λ = Σ λi ; capacity: μ ; output rates: μi
• At all times: μ >= Σ μi (single bottleneck)
• During congestion epochs, set λi < μi, e.g. λi = min{βλi, μi}
• Single bottleneck: reverses queue growth within 1 RTT
• Key: detect congestion
  • a) purely at the edges => overlay technology
  • b) detect in a loss-less manner!

13. Implementation Model
[Figure: ingress edge (shapes the edge-to-edge aggregate at rate λi), interior node (modeled to identify congestion epochs), egress edge (feeds back the measured rate μi during congestion epochs)]
• Overlay state:
  • Edge-to-edge VL association at ingress and egress
  • One token-bucket shaper (LBAP) per VL loop: rate = λi(t), plus a burstiness bound
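A minimal token-bucket shaper sketch for one VL loop, assuming a standard (burst, rate) LBAP; the class and parameter names are illustrative, not the authors' implementation.

```python
import time


class TokenBucketShaper:
    """Minimal token-bucket (LBAP-style) shaper for one edge-to-edge VL.

    rate_bps stands in for the loop's current rate lambda_i(t) and
    burst_bytes for its burstiness bound.
    """

    def __init__(self, rate_bps, burst_bytes):
        self.rate = rate_bps / 8.0        # token fill rate, bytes per second
        self.burst = burst_bytes          # bucket depth
        self.tokens = burst_bytes
        self.last = time.monotonic()

    def set_rate(self, rate_bps):
        """Called each control interval when the loop updates lambda_i(t)."""
        self.rate = rate_bps / 8.0

    def try_send(self, pkt_bytes):
        """Return True if the packet conforms and may be forwarded now."""
        now = time.monotonic()
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= pkt_bytes:
            self.tokens -= pkt_bytes
            return True
        return False                      # hold in the per-VL queue until tokens accrue
```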

14. Congestion Detection: Hypothetical Model
[Figure: ingress edge (shapes the aggregate at rate λi), interior node (helps identify congestion epochs), egress edge (feeds back the measured rate μi during congestion epochs)]
• Mark all packets if the interior queue crosses N
• N is an upper bound on transient burstiness during underload (i.e. when Σ λi < μ)
• If any marked packets are seen by the egress edge during a measurement interval, μi is fed back: the congestion epoch begins
• If no marked packets are seen for a full interval, declare the end of the epoch and stop feeding back μi
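A rough sketch of this interior-assisted detection logic as two pieces of state: the interior marks every packet while its queue exceeds N, and the egress runs a per-interval epoch state machine. Class and method names are illustrative.

```python
class InteriorMarker:
    """Hypothetical interior behavior: mark every packet while the queue exceeds N."""

    def __init__(self, threshold_n_pkts):
        self.threshold_n = threshold_n_pkts

    def should_mark(self, queue_len_pkts):
        return queue_len_pkts > self.threshold_n


class EgressEpochDetector:
    """Egress-side epoch logic driven by marks seen within each measurement interval."""

    def __init__(self):
        self.in_epoch = False
        self.marks_seen = 0

    def on_packet(self, marked):
        if marked:
            self.marks_seen += 1

    def end_of_interval(self, measured_rate_mu_i):
        """Called once per interval; returns the rate to feed back, or None."""
        if self.marks_seen > 0:
            self.in_epoch = True
            feedback = measured_rate_mu_i   # epoch begins/continues: feed back mu_i
        else:
            self.in_epoch = False           # a full mark-free interval ends the epoch
            feedback = None
        self.marks_seen = 0
        return feedback
```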

15. Implementation: Overlay Congestion Detection
• Emulate the prior model, albeit without interior assist
  • N: aggregate burstiness bound
  • A per-VL accumulation bound
• Congestion epoch beginning:
  • Measure per-VL accumulation qi = ∫(λi − μi) dt
  • Per-VL accumulation qi exceeding twice the per-VL bound => epoch begins
• Congestion epoch end:
  • qi falling to the per-VL bound or below => epoch ends
• Hysteresis helps ensure that the queue drains
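A sketch of this edge-only detection for one VL, under the assumption that the accumulation qi can be estimated as bytes sent by the ingress minus bytes received by the egress over the loop; names and units are illustrative.

```python
class AccumulationDetector:
    """Edge-only congestion-epoch detection for one VL (illustrative sketch).

    Per-VL accumulation q_i approximates the integral of (lambda_i - mu_i).
    The epoch begins once q_i exceeds twice the per-VL bound and ends once
    it falls back to the bound, giving the hysteresis that lets the queue drain.
    """

    def __init__(self, per_vl_bound_bytes):
        self.bound = per_vl_bound_bytes
        self.in_epoch = False

    def update(self, bytes_sent_at_ingress, bytes_received_at_egress):
        q_i = bytes_sent_at_ingress - bytes_received_at_egress
        if not self.in_epoch and q_i > 2 * self.bound:
            self.in_epoch = True       # congestion epoch begins
        elif self.in_epoch and q_i <= self.bound:
            self.in_epoch = False      # congestion epoch ends
        return self.in_epoch
```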

16. Increase/Decrease Dynamics
• Increase:
  • Additive increase of 1 pkt every interval Δ (Δ >= RTT)
• Decrease:
  • λi = min{βλi, μi} every interval during the congestion epoch
• Properties (single bottleneck):
  • Queue guaranteed to start decreasing within one interval of feedback
  • The lower rate is held until the queue is drained
[Figure: input-rate dynamics of λi over time; β can be set larger than 0.5]
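A one-function sketch of this per-interval rate update, assuming the decrease rule reconstructs as λi = min(β·λi, μi); the signature and default β are illustrative.

```python
def update_rate(lambda_i, mu_i, in_epoch, pkt_bytes, interval_s, beta=0.5):
    """One control-interval update of the ingress shaping rate (sketch).

    Rates are in bits/s; beta may be set above 0.5 as the slide notes.
    Additive increase of one packet per interval when no congestion epoch
    is in progress; otherwise decrease and hold while the queue drains.
    """
    if in_epoch:
        return min(beta * lambda_i, mu_i)
    return lambda_i + (pkt_bytes * 8.0) / interval_s   # +1 packet per interval
```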

17. Multi-bottleneck Stability
• Incremental drain provisioned for incremental accumulation
• Sum of input rates upper bounded; output rates lower bounded

18. Multiple Bottleneck Fairness
[Figure: throughput versus number of bottlenecks, comparing min-potential-delay fairness, edge-to-edge control, and proportional fairness]
• Linear network: 1 flow (VL) crosses k bottlenecks; each bottleneck has 4 cross flows (VLs)

19. Overlay Bandwidth Services
• Basic services: no admission control
  • "Better" best-effort services
  • Denial-of-service attack isolation support
  • Weighted proportional/priority services
• Advanced services: edge-based admission control
  • Assured service emulation
  • "Quasi-leased-line" service
• Key: no upgrades; only configuration requirements…

  20. Scalable Best-effort TCP Service

21. Isolation of Denial of Service/Flooding
[Figure: TCP starting at 0.0 s; UDP flood starting at 5.0 s]

22. Edge-based Assured Service Emulation
• Backoff differentiation policy:
  • r ← r + Δ if no congestion
  • r ← min(r, b_AS·m, b_BE·(m − a) + a) if congestion, with 1 > b_AS > b_BE >> 0
  • Back off little (b_AS) when below the assurance (a)
  • Back off the same as best effort (b_BE) when above the assurance (a)
• Backoff differentiation is quicker than increase differentiation
• The service could potentially be oversubscribed (like frame relay)
• Unsatisfied assurances just use a heavier weight
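A sketch of this backoff-differentiation rule as reconstructed above; the function name, the congestion flag, and the b_AS/b_BE values are illustrative, chosen only to satisfy 1 > b_AS > b_BE >> 0.

```python
def assured_service_update(r, m, a, delta, congested, b_as=0.95, b_be=0.5):
    """Edge-based assured-service rate update (sketch of the slide's policy).

    r: current rate limit, m: measured rate, a: assured rate, delta: additive
    increase step. Below the assurance the b_as term dominates (back off a
    little); above it the b_be term dominates (back off like best effort).
    """
    if not congested:
        return r + delta
    return min(r, b_as * m, b_be * (m - a) + a)
```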

23. Bandwidth Assurances
[Figure: Flow 1 with 4 Mbps assured + 3 Mbps best effort; Flow 2 with 3 Mbps best effort]

24. Quasi-Leased Line (QLL)
• Assume admission control and route pinning (MPLS LSPs)
• Provide a bandwidth guarantee
• Key: no delay or jitter guarantees!
• Adaptation in O(RTT) timescales
• Average delay can be managed by limiting total and per-VL allocations (managed delay)
• Policy:
  • r ← r + Δ if no congestion
  • r ← max(a, b_BE·(m − a) + a) if congestion, with 1 > b_BE >> 0
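A sketch of the QLL rate update as reconstructed above; again, the name, the congestion flag, and the b_BE value are illustrative.

```python
def qll_update(r, m, a, delta, congested, b_be=0.5):
    """Quasi-leased-line rate update (sketch of the slide's policy).

    The max() keeps the rate limit at or above the assured rate a even during
    congestion, so bandwidth is guaranteed while delay/jitter are not.
    """
    if not congested:
        return r + delta
    return max(a, b_be * (m - a) + a)
```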

25. Quasi-Leased Line Example
[Figure: best-effort rate limit versus time]
• A best-effort VL starts at t = 0 and fully utilizes the 100 Mbps bottleneck
• A background QLL starts with rate 50 Mbps
• The best-effort VL quickly adapts to the new rate

26. Quasi-Leased Line Example (cont.)
[Figure: bottleneck queue versus time]
• The starting QLL incurs a backlog: this requires more buffers (a larger max queue)
• Unlike TCP, VL traffic trunks back off without requiring loss and without bottleneck assistance

27. Quasi-Leased Line (cont.)
• Single-bottleneck analysis: worst-case queue q < b/(1 − b) bandwidth-delay products, where b is the fraction of capacity allocated to QLLs
• For b = 0.5, the bound is 1 bandwidth-RTT product
[Figure: worst-case queue vs. fraction of capacity for QLLs; simulated QLL with edge-to-edge control]
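Restating the bound in standard notation (my rendering; C · RTT denotes one bandwidth-delay product and b the QLL capacity fraction), with the slide's b = 0.5 case plus an extra b = 0.75 evaluation for illustration:

```latex
q \;<\; \frac{b}{1-b}\, C \cdot \mathrm{RTT}
\qquad
b = 0.5 \;\Rightarrow\; q < 1 \cdot C \cdot \mathrm{RTT},
\qquad
b = 0.75 \;\Rightarrow\; q < 3 \cdot C \cdot \mathrm{RTT}
```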

28. Signaling/Configuration Issues
• Simple: each edge box independently sets up loops only with the other edges it intends to communicate with
• Address-prefix-list-based configuration for the VPN application
• Minimal overhead to maintain a loop: a leaky bucket, plus roughly 8 bytes of feedback every 250 ms or so
• The ISP configures ONE separate class at potential bottlenecks for overlay-controlled traffic
• Scalable to inter-domain VPNs as long as each edge does not have to manage more than hundreds of loops
• Properties: bounded scalability, simplified interior configuration, incremental deployment, a simple set of overlay services

29. An Edge-to-Edge Principle?
• A tradeoff between public- and private-network philosophies
• Private-network characteristics:
  • Differentiated Services, simple forms of overlay QoS
  • Bounded scalability and heterogeneity
  • Edge-to-edge loops, queue bounds, policy/BB scalability, a bridging approach to inter-domain QoS
• Public-network characteristics:
  • Incremental deployment; O(1) complexity
  • Stateless interior inter-network
  • Minimal interior upgrades and configuration support
  • Use of robust, stable closed-loop control for efficiency and adaptation in O(RTT) timescales

30. Current Work
• With bottlenecks consolidated at the edge:
  • What diff-serv PHBs or remote scheduler functionalities can be emulated from the edge?
  • What is the impact of congestion control properties and rate of convergence on the attainable set of services?
• Areas:
  • Application-level QoS: the edge-to-end problem
  • Dynamic (short-term) services
  • Congestion-sensitive pricing: congestion info available at the edge
  • Edge-based contracting/bidding frameworks
  • Point-to-set services: more economic value than point-to-point services
  • Dynamic provisioning for statistical multiplexing gains

31. Summary
• Private Networks vs Public Networks
• QoS vs Congestion Control vs throwing bandwidth at the problem
• QoS deployment:
  • Simplified overlay QoS architecture
  • Intangibles: deployment and configuration advantages
• Edge-based Building Blocks & Overlay Services:
  • A closed-loop QoS building block
  • Basic services, advanced services
