Edge-based Traffic Management Building Blocks
Edge-based Traffic Management Building Blocks

David Harrison, Shiv Kalyanaraman, Sthanu Ramakrishnan

Rensselaer Polytechnic Institute

[email protected]



Overview

  • Private Networks vs Public Networks

    • QoS vs Congestion Control: the middle ground?

  • Overlay Bandwidth Services:

    • Key: deployment advantages

    • A closed-loop QoS building block

  • Services: Better best-effort services, Assured services, Quasi-leased lines…

Motivation: Site-to-Site VPN Over a Multi-Provider Internetwork

[Diagram: site-to-site VPN crossing international links in a multi-provider internetwork]

Private Networks over Public Networks

  • Can we reduce (not eliminate!) coordination requirements for QoS deployment?

    • Tolerate heterogeneity

    • Incremental deployment

    • Faster deployment cycles

    • Dynamically provisioned services

  • Complexity Issues:

    • Design: int-serv, RSVP, RTP …

    • Implementation: diff-serv, CSFQ…

    • Upgrades

    • Configuration

    • Management

Focus of this talk!

Problem: Inter-domain QoS Deployment Complexity

  • Today’s solutions require upgrade of multiple potential bottlenecks, and complex multi-provider coordination


  • Solutions:

    • Enable incrementally deployable edge-based QoS

    • New closed-loop building blocks for efficiency

    • Reduce (not eliminate!) coordination requirements. No upgrades!

    • Tradeoffs: limited service spectrum


[Diagram: edge devices form a logical FIFO overlay across a stateless interior internetwork; new: closed-loop control between edges, coordinated by a bandwidth broker]

Our Model: Edge-based building blocks

Model: Inspired by diff-serv; Aim: further interior simplification

Closed-loop BB: Take-Home Ideas

[Diagram: edge-to-edge loops over an interior Priority/WFQ scheduler]




  • Scheduler: differentiates service on a packet-by-packet basis

  • Loops: differentiate service on an RTT-by-RTT basis using edge-based policy configuration.


Queuing Behavior: Without Closed-Loop Control




Queuing Behavior: With Overlay Edge-to-Edge Control


Results: efficient core operation, rate adaptation in O(RTT)

Edge-based Performance Customization

  • Key idea: bottlenecks consolidated at edges, closer to application => incorporate application-awareness in QoS

  • Eg: L4-L7 aware buffer management.

    • For TCP traffic: dramatically reduce timeouts:

      • Do not drop retransmissions or small window packets

  • Potential: application-level QoS services, active networking, edge-based diff-serv PHB emulation etc
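The L4-aware buffer-management idea above can be sketched as a hypothetical drop decision. This is my own illustration, not the authors' implementation: the flow-state fields and the small-window threshold are assumptions.

```python
from dataclasses import dataclass

# Hypothetical L4-aware edge drop policy: protect TCP retransmissions and
# small-window packets so that a drop does not force a timeout.
# Field names and the threshold below are illustrative assumptions.

SMALL_WINDOW_PKTS = 4  # assumed "small window" threshold, in packets

@dataclass
class FlowState:
    highest_seq: int   # highest TCP sequence number seen from this flow
    inflight: int      # estimate of the flow's outstanding packets

def should_drop(seq: int, flow: FlowState, queue_len: int, queue_limit: int) -> bool:
    """Decide whether the edge buffer drops an arriving TCP packet."""
    if queue_len < queue_limit:
        return False                      # buffer not full: accept
    if seq <= flow.highest_seq:
        return False                      # retransmission: do not drop it again
    if flow.inflight <= SMALL_WINDOW_PKTS:
        return False                      # small window: any loss likely causes a timeout
    return True                           # ordinary packet under overflow: eligible for drop
```

The two protected cases are exactly the slide's "do not drop retransmissions or small window packets"; everything else falls back to normal overflow behavior.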

Closed-loop Building Block Requirements

#1. Edge-to-edge overlay operation,

#2. Robust stability

#3. Bounded-buffer/zero-loss,

#4. Minimal configuration/upgrades + incremental deployment

#5. Rate-based operation: for bandwidth services

  • This combination is not available in any existing congestion control scheme…

  • Related work: NETBLT, TCP Vegas, Mo/Walrand, ATM Rate/Credit approaches













Overlay Control: Concepts

  • Load: =  i ; Capacity:  ; Output Rates: i

    • At all times:  >= i (single bottleneck)

  • During congestion epochs, set i < i, Eg:i = min{i , i}

    • Single bottleneck: Reverse queue growth within 1 RTT

  • Key:detect congestion:

    • a) purely at the edges => overlay technology

    • b) detect in a loss-less manner!
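To illustrate the single-bottleneck claim, here is a toy fluid model (my own illustration, not from the slides): once each ingress caps its rate at its measured output rate, the aggregate load falls to the bottleneck capacity and queue growth stops.

```python
# Toy fluid model: one bottleneck of capacity mu shared by two virtual
# links (VLs). While aggregate load exceeds mu, the queue grows each
# interval; when the congestion epoch begins, each ingress applies
# lambda_i = min{lambda_i, mu_i} with mu_i its measured output rate.

def step(queue: float, rates: list, mu: float) -> float:
    """Advance one interval: the queue absorbs load above capacity mu."""
    return max(0.0, queue + sum(rates) - mu)

mu = 10.0
rates = [7.0, 6.0]            # aggregate load 13 > mu: queue builds
queue = 0.0
for _ in range(3):
    queue = step(queue, rates, mu)   # grows by 3 per interval -> 9.0

# Congestion epoch: measured outputs are the bottleneck's shares of mu.
outputs = [r / sum(rates) * mu for r in rates]
rates = [min(r, o) for r, o in zip(rates, outputs)]  # lambda_i = min{lambda_i, mu_i}
queue_after = step(queue, rates, mu)                 # growth stops
```

With the capped rates summing to mu, the queue stops growing within one control interval; draining then relies on the increase being paused, per the later slides.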

Implementation Model

[Diagram: ingress edge shapes the edge-to-edge aggregate at rate λi; interior node (modeled to identify congestion epochs); egress edge feeds back the measured rate μi during congestion epochs]

  • Overlay state:

    • Edge-to-edge VL association at ingress and egress

    • One token-bucket shaper (LBAP) per VL loop:

    • Rate = i(t) ; burstiness: 





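The per-VL token-bucket (LBAP) shaper can be sketched as follows. This is a minimal illustration under assumed units (bytes and seconds), not the authors' code:

```python
# Minimal token-bucket (LBAP) shaper for one edge-to-edge VL:
# refill rate lambda_i(t), burst tolerance sigma. A non-conforming
# packet is held (shaped), not dropped.

class TokenBucketShaper:
    def __init__(self, rate: float, sigma: float):
        self.rate = rate          # lambda_i(t): refill rate, bytes/sec
        self.sigma = sigma        # burst tolerance, bytes
        self.tokens = sigma       # start with a full bucket
        self.last = 0.0           # time of last update, seconds

    def conforms(self, size: int, now: float) -> bool:
        """True if a packet of `size` bytes may be sent at time `now`."""
        self.tokens = min(self.sigma, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= size:
            self.tokens -= size   # send: consume tokens
            return True
        return False              # hold the packet until tokens accumulate
```

The ingress updates `rate` once per control interval with the current λi(t), so the same shaper object implements the closed-loop rate changes.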

Congestion Detection: Hypothetical Model

  • Mark all packets if the interior queue crosses N

    • N is an upper bound on transient burstiness during underload (i.e., when Σ λi < μ)

  • If any marked packets are seen by the egress edge during a measurement interval τ, μi is fed back: begin epoch

  • If no marked packets are seen for a full interval τ, declare the end of the epoch and stop feedback of μi.


Implementation: Overlay Congestion Detection

  • Emulate prior model, albeit without Interior assist

    • N: aggregate burstiness bound

    • δ: per-VL accumulation bound.

  • Congestion epoch beginning:

    • Measure per-VL accumulation qi = (λi − μi)·τ

    • Per-VL accumulation qi exceeds 2δ => epoch begins

  • Congestion epoch end:

    • qi <= δ => epoch ends

    • Hysteresis helps ensure that queue drains
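The begin/end rule with hysteresis can be sketched as a small state machine. A minimal sketch, assuming the 2δ/δ thresholds reconstructed above:

```python
# Egress-side congestion-epoch detector with hysteresis: the epoch
# begins when per-VL accumulation q_i exceeds 2*delta and ends only
# once q_i falls back to delta, giving the queue room to drain.

class EpochDetector:
    def __init__(self, delta: float):
        self.delta = delta        # per-VL accumulation bound
        self.in_epoch = False

    def update(self, q_i: float) -> bool:
        """Feed the latest accumulation estimate; return epoch state."""
        if not self.in_epoch and q_i > 2 * self.delta:
            self.in_epoch = True      # begin epoch: start feeding back mu_i
        elif self.in_epoch and q_i <= self.delta:
            self.in_epoch = False     # end epoch: stop feedback
        return self.in_epoch
```

Because the exit threshold (δ) is below the entry threshold (2δ), the detector does not oscillate while the queue is draining.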

Increase/Decrease Dynamics

  • Increase:

    • Additive increase of 1 pkt/every interval ( >= RTT)

  • Decrease:

    • i= min{i, i} every interval during the congestion epoch

  • Properties (single bottleneck):

    • queue guaranteed to reduce within one interval τ of feedback

    • The lower rate is held till queue is drained
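The two rules above combine into one update per control interval. A sketch under assumed units (the packet size constant is mine):

```python
# Per-interval rate update combining the slide's rules: additive
# increase of one packet per interval tau (tau >= RTT) when there is
# no congestion; lambda_i = min{lambda_i, mu_i} during an epoch.

PKT = 1500 * 8  # assumed packet size, expressed in bits

def next_rate(lambda_i: float, mu_i: float, in_epoch: bool, tau: float) -> float:
    """Return the shaping rate (bits/sec) for the next interval."""
    if in_epoch:
        return min(lambda_i, mu_i)   # decrease: held until the queue drains
    return lambda_i + PKT / tau      # additive increase: 1 pkt per interval
```

Note the decrease is rate-capping rather than multiplicative backoff, which is what lets the queue drain without loss.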

Input Rate Dynamics

[Plot: input rate dynamics over time]


Multi-bottleneck Stability

  • Incremental drain provisioned for incremental accumulation

  • Sum of input rates upper bounded; output rates lower bounded

Multiple Bottleneck Fairness

[Plot: throughput versus number of bottlenecks, showing delay fairness]





Linear Network: 1 flow (VL) crosses k bottlenecks. Each bottleneck has 4 cross flows (VLs).

Overlay Bandwidth Services

  • Basic Services: no admission control

    • “Better” best-effort services

    • Denial-of-service attack isolation support

    • Weighted proportional/priority services

  • Advanced services: edge-based admission control

    • Assured service emulation

    • “Quasi-leased-line” service

  • Key: no upgrades; only configuration requirements…

Isolation of Denial-of-Service Flooding

[Plot: TCP starting at 0.0 s; UDP flood starting at 5.0 s]

Edge-based Assured Service Emulation

r = r + D,                          if no congestion
r = min(r, bAS·m, bBE·(m − a) + a),  if congestion

where 1 > bAS > bBE >> 0


  • Backoff Differentiation Policy:

  • Backoff little (bAS) when below the assurance (a),

  • Backoff same as best effort (bBE) when above the assurance (a)

  • Backoff differentiation is quicker than increase differentiation

  • Service could be potentially oversubscribed (like frame-relay)

    • Unsatisfied assurances just use heavier weight.
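The assured-service policy can be sketched directly from the rule above. This is an illustration using the reconstructed symbols (a = assurance, m = measured rate, D = additive step, 1 > bAS > bBE >> 0):

```python
# Sketch of the edge-based assured-service backoff policy: additive
# increase when uncongested; under congestion, back off little (bAS)
# on traffic below the assurance a, and like best effort (bBE) on the
# excess above it.

def assured_rate(r: float, m: float, a: float, D: float,
                 b_as: float, b_be: float, congested: bool) -> float:
    """Return the next rate for an assured-service VL."""
    if not congested:
        return r + D                                  # additive increase
    return min(r, b_as * m, b_be * (m - a) + a)       # differentiated backoff
```

When the measured rate m is well above the assurance a, the bBE term dominates, so the excess traffic backs off like best effort.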

Bandwidth Assurances

[Plot: Flow 1 with 4 Mbps assured + 3 Mbps best effort; Flow 2 with 3 Mbps best effort]

Quasi-Leased Line (QLL)

r = r + D,                    if no congestion
r = max(a, bBE·(m − a) + a),  if congestion

where 1 > bBE >> 0


  • Assume admission control and route-pinning (MPLS LSPs).

  • Provide bandwidth guarantee.

  • Key: No delay or jitter guarantees!

    • Adaptation in O(RTT) timescales

    • Average delay can be managed by limiting total and per-VL allocations (managed delay)

  • Policy:
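The QLL policy shown above can be sketched as a hypothetical Python fragment (symbols as reconstructed: a = admitted rate, m = measured rate, D = additive step, 1 > bBE >> 0):

```python
# Sketch of the quasi-leased-line rate policy: increase additively when
# uncongested; under congestion, back off the excess like best effort
# but never fall below the admitted rate a (the bandwidth guarantee).

def qll_rate(r: float, m: float, a: float, D: float,
             b_be: float, congested: bool) -> float:
    """Return the next rate for a QLL virtual link."""
    if not congested:
        return r + D                       # additive increase
    return max(a, b_be * (m - a) + a)      # floor at the admitted rate a
```

The `max(a, …)` floor is what makes the service a bandwidth guarantee without any delay or jitter guarantee.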

Quasi-Leased Line Example

Best-effort VL starts at t=0 and fully utilizes 100 Mbps bottleneck.

Background QLL starts with rate 50Mbps

Best-effort VL quickly adapts to new rate.


[Plot: best-effort rate limit versus time]

Quasi-Leased Line Example (cont.)

The starting QLL incurs a backlog.

Unlike TCP, VL traffic trunks backoff without requiring loss and without bottleneck assistance.


[Plot: bottleneck queue versus time]

Requires more buffers: larger max queue

Quasi-Leased Line (cont.)

[Plot: worst-case queue (in bandwidth-delay products) versus fraction of capacity for QLLs; single-bottleneck analysis alongside simulated QLL with edge-to-edge control; for b = 0.5, q = 1 bandwidth-RTT product]

Signaling/Configuration Issues

  • Simple: each edge box independently sets up loops only with the other edges it intends to communicate with

    • Address-prefix list based configuration for VPN application

    • Minimal overhead to maintain the loop: a leaky bucket, plus roughly 8 bytes of feedback every 250 ms

  • ISP configures ONE separate class at potential bottlenecks for overlay controlled traffic

  • Scalable to inter-domain VPNs as long as each edge does not have to manage more than a few hundred loops

  • Properties: Bounded scalability, simplified interior configuration, incremental deployment, simple set of overlay services.

Edge-to-Edge Principle?

  • Tradeoff between public and private network philosophies:

  • Private network characteristics:

    • Differentiated Svcs, simple forms of overlay QoS

    • Bounded scalability and heterogeneity

      • Edge-to-edge loops, queue bounds, policy/BB scalability, bridging approach to inter-domain QoS

  • Public network characteristics:

    • Incremental deployment. O(1) complexity.

    • Stateless interior internetwork

    • Minimal interior upgrades, configuration support.

    • Use of robust, stable closed-loop control for efficiency and adaptation in O(RTT) timescales.

Current Work

  • With bottlenecks consolidated at the edge:

    • What diff-serv PHBs or remote scheduler functionalities can be emulated from the edge ?

    • What is the impact of congestion control properties and rate of convergence on attainable set of services ?

  • Areas:

    • Application-level QoS: edge-to-end problem

    • Dynamic (short-term) services

    • Congestion-sensitive pricing: congestion info at the edge

      • Edge-based contracting/bidding frameworks

    • Point-to-set svcs: more economic value than pt-to-pt svcs

      • Dynamic provisioning for statistical muxing gains

Summary

  • Private Networks vs Public Networks

    • QoS vs Congestion Control vs Throwing bandwidth

  • QoS Deployment:

    • Simplified overlay QoS architecture

      • Intangibles: deployment, configuration advantages

  • Edge-based Building Blocks & Overlay services:

    • A closed-loop QoS building block

    • Basic services, advanced services