An Improved Hop-by-hop Interest Shaper for Congestion Control in Named Data Networking

Yaogong Wang, NCSU

Natalya Rozhnova, UPMC

Ashok Narayanan, Cisco

Dave Oran, Cisco

Injong Rhee, NCSU

NDN Congestion control
  • Two important factors to consider:
  • Receiver-driven: one interest generates one data packet
  • Symmetric: Content retrieved in response to an interest traverses the same path in reverse
  • Content load forwarded on a link is directly related to interests previously received on that link
  • Given these properties, shaping interests can serve to control content load and therefore proactively avoid congestion.
  • There are multiple schemes that rely on slowing down interests to achieve congestion avoidance or resolution
  • But detecting the congestion in question is not simple
    • It appears on the far side of the link from where interests can be slowed
Interest shaping
  • Different schemes have been proposed
  • HoBHIS
    • First successful scheme, demonstrated the feasibility of this method
    • Slows down interests on the hop after congestion
    • Relies on backpressure to alleviate congestion
    • Runs per-flow AIMD scheme to manage outstanding interests
    • Tracks estimated RTT as a mechanism to rapidly detect congestion & loss
    • Endpoints control flow requests by shaping interest issue rate
    • Main congestion control operates end-to-end, some hop-by-hop shaping for special cases
Basic interest shaping
  • Assume constant ratio r of content-size/interest-size
  • Simple unidirectional flow with link rate c
  • Ingress interest rate of c/r causes egress content rate of c
  • If we shape egress interest rate to c/r, remote content queue will not be overloaded
  • Issues with varying content size, size ratio, link rate, etc.
  • But the biggest issue is…
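Before turning to that issue, the single-direction shaper described above can be sketched as a simple token bucket that releases interests at rate c/r. This is an illustrative sketch, not the paper's code; the class name, the injected clock, and the one-token bucket depth are assumptions.

```python
from collections import deque

class BasicInterestShaper:
    """Token-bucket interest shaper releasing interests at rate c/r.

    A sketch: the clock is injected as a callable returning seconds,
    so the behavior is deterministic and testable.
    """

    def __init__(self, link_rate_c, ratio_r, clock):
        self.rate = link_rate_c / ratio_r   # interests per second
        self.tokens = 1.0                   # bucket depth of one interest (assumption)
        self.clock = clock
        self.last = clock()
        self.queue = deque()

    def _refill(self):
        now = self.clock()
        self.tokens = min(1.0, self.tokens + (now - self.last) * self.rate)
        self.last = now

    def enqueue(self, interest):
        # Interests wait here until the bucket permits forwarding.
        self.queue.append(interest)

    def dequeue_ready(self):
        """Return the next interest if the bucket permits, else None."""
        self._refill()
        if self.queue and self.tokens >= 1.0:
            self.tokens -= 1.0
            return self.queue.popleft()
        return None
```

With c = 100 and r = 10, the shaper releases at most 10 interests per second, so the returning content fills (but does not overload) the 100-unit reverse link.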
What about interests?
  • Interests consume bandwidth
    • (specifically, c/r in the reverse direction)
  • Bidirectional data flow also implies bidirectional interest flow
  • Therefore, the reverse path is not available to carry a full c bandwidth of data; it also needs to carry some interests
  • Similarly, the forward path cannot be budgeted entirely for data; it needs to leave space for forward interests as well
  • Ordinarily there is no way to predict and therefore account for interests coming in the other direction, but…
  • There is a recursive dependence between the interest shaping rate in the forward and reverse directions.
Problem formulation
  • We can formulate a mutual bidirectional optimization as follows
  • u(.) is link utility function
  • This must be proportionally fair in both directions to avoid starvation. We propose log(s) as the utility function
  • i1 = received forward interest load
  • i2 = received reverse interest load
  • c1 = forward link bandwidth
  • c2 = reverse link bandwidth
  • r1 = ratio of received content size to sent interest size
  • r2 = ratio of sent content size to received interest size
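The optimization itself appears to have been an image on the slide and did not survive extraction. Using the definitions above, and writing s1 and s2 (my notation) for the forward and reverse interest shaping rates, a hedged reconstruction is:

```latex
\begin{aligned}
\max_{s_1,\, s_2} \quad & u(s_1) + u(s_2), \qquad u(s) = \log(s) \\
\text{s.t.} \quad & s_1 + r_2\, s_2 \le c_1
  && \text{(forward link: interests out + content returned)} \\
& r_1\, s_1 + s_2 \le c_2
  && \text{(reverse link: content returned + interests in)} \\
& 0 \le s_1 \le i_1, \quad 0 \le s_2 \le i_2
\end{aligned}
```

Each link must carry both the interests sent in its own direction and the content generated by interests traveling the other way, which is exactly the recursive dependence noted on the previous slide.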
Optimal solution
  • Feasible region is convex
  • First solve for infinite load in both directions
  • Optimal solutions at the Lagrange points marked with X
  • If the Lagrange points do not lie within the feasible region (the most common case), convert to equality constraints and solve
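This procedure can be sketched numerically under the log-utility formulation with unbounded load (the function name and structure are illustrative, not the paper's code):

```python
def optimal_shaping_rates(c1, c2, r1, r2):
    """Maximize log(s1) + log(s2) subject to
    s1 + r2*s2 <= c1 and r1*s1 + s2 <= c2 (a sketch)."""
    # Lagrange point with only the forward-link constraint active:
    # maximizing on s1 + r2*s2 = c1 gives s1 = c1/2, s2 = c1/(2*r2).
    s1, s2 = c1 / 2.0, c1 / (2.0 * r2)
    if r1 * s1 + s2 <= c2:
        return s1, s2
    # Lagrange point with only the reverse-link constraint active.
    s1, s2 = c2 / (2.0 * r1), c2 / 2.0
    if s1 + r2 * s2 <= c1:
        return s1, s2
    # Most common case: neither point is feasible, so both constraints
    # bind -> solve the 2x2 linear system (degenerate if r1*r2 == 1).
    det = 1.0 - r1 * r2
    return (c1 - r2 * c2) / det, (c2 - r1 * c1) / det
```

For a symmetric link (c1 = c2 = 100, r1 = r2 = 10), neither single-constraint Lagrange point is feasible, and solving the equality system gives s1 = s2 = 100/11, i.e. roughly 9.09 interests per second in each direction.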
Finite load scenarios
  • Optimal shaping rate assumes unbounded load in both directions
    • We can’t model instantaneously varying load in a closed-form solution
  • If one direction is underloaded, fewer interests need to travel in the reverse direction to generate the lower load
  • As a result, the local shaping algorithm need not leave as much space for interests in the reverse direction
    • Extreme case: unidirectional traffic flow
  • Actual shaping rate needs to vary between two extremes depending on actual load in the reverse path
  • BUT, we don’t want to rely on signaling reverse path load
Practical interest shaping algorithm
  • We observe that each side can independently compute both expected shaping rates
  • Our algorithm observes the incoming interest rate, compares it to the expected incoming interest shaping rate, and adjusts our outgoing interest rate between these two extremes
  • On the router, interests and contents are separated in output queues. Interests are shaped as per the equation above, and contents flow directly to the output queue.
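One plausible reading of this adjustment (a sketch under the reconstructed formulation, not necessarily the paper's exact equation): at the bidirectional optimum with both link constraints tight, s1_opt = (c2 - s2_opt) / r1, so measuring the incoming interest rate and capping it at s2_opt interpolates the outgoing rate between the unidirectional extreme c2/r1 and the bidirectional optimum s1_opt.

```python
def practical_shaping_rate(c2, r1, s1_opt, s2_opt, observed_in_rate):
    """Adjust the outgoing interest shaping rate between two extremes
    based on the observed incoming interest rate (illustrative sketch).

    - Peer sends no interests: fill the reverse link with content, c2/r1.
    - Peer sends at its expected optimal rate s2_opt: fall back to the
      bidirectional optimum s1_opt.
    """
    reverse_interest_rate = min(observed_in_rate, s2_opt)
    # Content returning for our interests plus the peer's interests must
    # fit on the reverse link: r1*s1 + reverse_interest_rate <= c2.
    rate = (c2 - reverse_interest_rate) / r1
    return max(rate, s1_opt)  # never shape below the bidirectional optimum
```

No signaling is needed: the incoming interest rate is locally observable, and both expected optimal rates are locally computable.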
Explicit congestion notification
  • When an interest cannot be enqueued into the interest shaper queue, it is rejected
  • Instead of dropping it, we return it to the downstream hop in the form of a “Congestion-NACK”
  • This NACK is forwarded back towards the client in the place of the requested content
    • Consumes the PIT entries on the way
  • Note that the bandwidth consumed by this NACK has already been accounted for by the interest that caused it to be generated
    • Therefore, in our scheme Congestion-NACKs cannot exacerbate congestion
  • Clients or other nodes can react to these signals
  • In our current simulations, clients implement simple AIMD window control, with the NACK used to cause decrease
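A minimal sketch of that client behavior, assuming the standard AIMD rule with the Congestion-NACK as the decrease trigger (class and method names are illustrative):

```python
class AimdClient:
    """Client interest window: additive increase on content arrival,
    multiplicative decrease on a Congestion-NACK (a sketch)."""

    def __init__(self, initial_window=1.0, min_window=1.0):
        self.cwnd = initial_window
        self.min_window = min_window

    def on_content(self):
        # Additive increase: roughly +1 interest per window of content.
        self.cwnd += 1.0 / self.cwnd

    def on_congestion_nack(self):
        # Multiplicative decrease on the explicit congestion signal,
        # rather than waiting for a timeout-inferred loss.
        self.cwnd = max(self.min_window, self.cwnd / 2.0)

    def interests_allowed_in_flight(self):
        return int(self.cwnd)
```

Because the NACK arrives in place of the content a timeout would otherwise have to infer, the decrease is triggered promptly and without ambiguity about whether the packet was merely delayed.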
Client window and queue evolution
  • Queue depth on bottleneck queues is small
    • 1 packet for homogeneous RTT case
    • Varies slightly more in heterogeneous RTT case, but is quite low (<17 packets)
  • Client window evolution is quite fair
Benefits of our scheme
  • Optimally handles interest shaping for bidirectional traffic
  • No signaling or message exchange required between routers
    • Corollary: no trust required between peers
  • No requirement of flow identification by intermediaries
  • Fair and effective bandwidth allocation on highly asymmetric links
  • Congestion NACKs offer a timely and reliable congestion signal
  • Congestion is detected downstream of the bottleneck link
Future work
  • Use congestion detection and/or NACKs to offer dynamic reroute and multi-path load balancing
  • Use NACKs as a backpressure mechanism in the network to handle uncooperative clients
  • Investigate the shaper under different router AQM schemes (e.g. RED, CoDel, PIE) and client implementations (e.g. CUBIC)