
LOAD SPREADING APPROACHES David Allan, Scott Mansfield, Eric Gray, Janos Farkas Ericsson



  1. LOAD SPREADING APPROACHES David Allan, Scott Mansfield, Eric Gray, Janos Farkas, Ericsson

  2. INTRODUCTION • We want load spreading to operate within the layer • We do not have per-hop “layer violations” in frame processing • OAM can “fate share” with equal-cost multi-path (ECMP) flows without having to impersonate payload • This requires a frame to carry entropy information such that an ECMP decision can be made at each hop • The entropy information is populated by a DPI function at the network ingress • The expectation is that next-hop interface selection among the set of next-hop paths will be some function of the entropy information
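The ingress/transit split above can be sketched as follows. This is our own illustrative framing, not code from the deck: the field names, the SHA-256 hash, and the 12-bit entropy width are assumptions chosen for the example.

```python
# Illustrative sketch (assumptions: field names, SHA-256, 12-bit entropy).
# A DPI function at the ingress hashes the flow-identifying payload fields
# once into an entropy value carried in the frame; transit nodes select a
# next hop from that value alone, never inspecting the payload.
import hashlib

ENTROPY_BITS = 12

def ingress_entropy(src, dst, proto, sport, dport):
    # DPI at the network edge: hash the flow's identifying fields once.
    key = f"{src}|{dst}|{proto}|{sport}|{dport}".encode()
    return int.from_bytes(hashlib.sha256(key).digest()[:4], "big") % 2**ENTROPY_BITS

def transit_next_hop(entropy, n_ifaces):
    # Transit nodes use only the carried entropy -- no per-hop layer violation.
    return entropy % n_ifaces

e = ingress_entropy("10.0.0.1", "10.0.0.2", "tcp", 5000, 80)
print(e, transit_next_hop(e, 3))
```

Because an OAM frame can carry the same entropy value as the data flow it monitors, it follows the same path without having to impersonate the payload.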

  3. FIRST, THE OBVIOUS: IF THE SPREADING FUNCTION IS THE SAME AT EVERY HOP • A common hash and a common next-hop count at each stage is pathological: each subset of flows arriving at a node maps to a single outgoing interface • The size of the entropy field will not change this
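A quick simulation makes the pathology concrete. The hash and flow counts here are our own illustrative choices: with an identical spreading function and interface count at every hop, every subset of flows that chose interface i at stage 1 chooses the same interface again at stage 2, so the other links carry nothing.

```python
# Demonstration (illustrative sketch): an identical hash and next-hop count
# at every hop collapses each second-stage subset onto a single interface.
import random

N_IFACES = 3
random.seed(1)
flows = [random.getrandbits(16) for _ in range(9000)]  # per-flow entropy values

def same_hash(entropy):
    return entropy % N_IFACES  # the same spreading function at every hop

# Stage 1: split flows across 3 interfaces.
stage1 = {i: [f for f in flows if same_hash(f) == i] for i in range(N_IFACES)}

# Stage 2: each stage-1 subset is re-spread with the *same* function.
for i, subset in stage1.items():
    used = {same_hash(f) for f in subset}
    print(f"subset {i}: {len(subset)} flows -> interfaces {sorted(used)}")
    # Every subset lands on exactly one interface: 2/3 of the links sit idle.
```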

  4. WITH A UNIQUE HASH AT EACH HOP/STAGE • Each subset of traffic needs to be re-randomized at each stage in the hierarchy in order to use all links

  5. HOW THE ENTROPY LABEL IS EXPECTED TO WORK • Each node determines the next hop as a function of both the entropy information in the packet and locally generated random information • The goal is to perfectly re-randomize any arbitrary subset of flow labels received • At first blush, bigger is better for the quantity of entropy carried in the packet, but is this really true?
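A minimal sketch of that behavior, under our own assumptions (a multiplicative mixing function and 32-bit per-node salts; the slides do not prescribe either): each node combines the packet's entropy field with a node-local random value, so a subset arriving on one link is re-spread across all of the node's next hops.

```python
# Sketch: next hop = f(packet entropy, node-local random value).
# The mixing function and salt width are illustrative assumptions.
import random

N_IFACES = 3

def next_hop(entropy, node_salt):
    # Mix the carried entropy with the node's local random salt.
    return ((entropy ^ node_salt) * 2654435761 >> 16) % N_IFACES

random.seed(7)
salts = [random.getrandbits(32) for _ in range(2)]  # two hops, distinct salts
flows = range(9000)

# Stage 1 split, then re-spread each subset at stage 2 with a different salt.
stage1 = {i: [f for f in flows if next_hop(f, salts[0]) == i] for i in range(N_IFACES)}
for i, subset in stage1.items():
    counts = [sum(1 for f in subset if next_hop(f, salts[1]) == j) for j in range(N_IFACES)]
    print(f"subset {i}: stage-2 counts {counts}")  # all three links now carry traffic
```

Contrast with the previous slide: because the salt differs per node, each arriving subset is spread over all downstream links rather than collapsing onto one.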

  6. Thought Experiment • Question: how much entropy information do we need? • Experiment: for a given size of entropy information field, run it through a number of stages and observe how the spreading degenerates when compared to perfection • For 3 interfaces at each hop, perfection is 1/3, 1/9, 1/27, etc. of the original number of flows, assuming we perfectly randomize at every hop • The practical reality is that we will not perfectly randomize progressive subsets of the original complete set of flows • The example shows the average coefficient of variation (CV) from “perfection” over 10 trials of 5 stages of randomization, with a fan-out of 3 interfaces per stage
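A version of this experiment can be reproduced in a few lines. The hash, flow counts, and per-stage salting below are our own assumptions, not the deck's exact setup, so the absolute CV numbers will differ from the slides; the qualitative trend (more entropy bits, lower CV after 5 stages of 3-way spreading) is what the sketch shows.

```python
# Illustrative re-creation of the thought experiment: push flows through
# 5 stages of 3-way spreading and measure the coefficient of variation of
# leaf-link loads versus the ideal 1/3, 1/9, ... split.
import random
import statistics

FANOUT, STAGES = 3, 5
random.seed(42)

def spread(entropy, salt):
    # Per-stage multiplicative mix (an assumed hash, as on the prior slides).
    return ((entropy ^ salt) * 2654435761 >> 16) % FANOUT

def run(entropy_bits):
    flows = [random.getrandbits(entropy_bits) for _ in range(FANOUT**STAGES * 10)]
    subsets = [flows]
    for _ in range(STAGES):
        salt = random.getrandbits(32)
        subsets = [[f for f in s if spread(f, salt) == i]
                   for s in subsets for i in range(FANOUT)]
    sizes = [len(s) for s in subsets]
    # CV of the 243 leaf loads; 0.0 would be a perfect split.
    return statistics.pstdev(sizes) / statistics.mean(sizes)

results = {bits: run(bits) for bits in (6, 12, 20)}
for bits, cv in results.items():
    print(f"{bits} bits of entropy: CV = {cv:.2f}")
```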

  7. Sidebar – quality of randomization • Using an arithmetic function of a random number and the packet entropy information for randomization at each hop will tend to produce subsets that are correlated in some fashion • This leads to poorer randomization at each hop • Interface = f(packet entropy, random value) % (# of interfaces) • What we want is a non-lossy and non-correlated transform in order to maximize randomization of any arbitrary subset of flow IDs • A table lookup, with the table populated with random values, allows us to produce non-lossy, uncorrelated subsets • Interface = table[packet entropy] % (# of interfaces) • However, the larger the packet entropy field, the less practical a table lookup is to implement • Hence we illustrate results for both in the large entropy field case
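The table transform can be sketched directly from the slide's formula. One assumption on our part: we build the table as a random permutation of the entropy space, which is the strongest form of "non-lossy" (no two entropy values collide before the final modulo). A 2^12-entry table matches a 12-bit field such as a B-VID.

```python
# Sketch of Interface = table[packet entropy] % (# of interfaces), with the
# table built as a random permutation (our assumption) so the transform is
# non-lossy: distinct entropy values stay distinct until the final modulo.
import random

ENTROPY_BITS = 12
N_IFACES = 3
random.seed(3)

# Populate the node's indirection table as a random permutation.
table = list(range(2**ENTROPY_BITS))
random.shuffle(table)

def next_hop(entropy):
    return table[entropy] % N_IFACES

counts = [0] * N_IFACES
for e in range(2**ENTROPY_BITS):
    counts[next_hop(e)] += 1
print(counts)  # prints [1366, 1365, 1365]: an exact split of 4096 values
```

Because a permutation only reorders the entropy space, the split across interfaces is exact for the full space regardless of the table contents; a 4096-entry table is trivial in hardware, which is why the lookup becomes impractical only for large entropy fields.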

  8. HOW MUCH ENTROPY INFORMATION IS NEEDED?

  9. ADVANTAGES OF USING THE B-VID AS A FLOW LABEL • The CV is still < 20% of load after 5 stages of switching for 12 bits of entropy • Use of the VID offers backwards compatibility with older implementations, providing some of the benefits of ECMP load spreading while using those implementations • They cannot do fine-grained processing of the B-VID such that each VID value gets unique treatment, as it would blow out the FDB • But each SVL VID range normally associated with an MSTI could have a randomly selected outgoing interface as part of FDB design • Hence the 6 bits of entropy example on the curves • This still requires SPB “intelligence” at nodes utilizing existing implementations • A completely “brain dead” node that simply behaves as a shared segment will cause packet duplication when flooding… as all SPB/ECMP peers receive promiscuously • This is true no matter what flow label approach we adopt

  10. ADVANTAGES OF USING THE B-VID AS A FLOW LABEL/2 • A much smaller entropy label allows randomization transforms that produce an uncorrelated set • An indirection table allows for superior randomization • A much smaller entropy label also simplifies troubleshooting • A full “blind” test needs to exercise only 4094 path permutations • 802.1ag CFM will require minimal or no changes to work

  11. CONCLUSIONS • We do not need a large entropy token to get acceptable results • There are advantages to using the B-VID as such a token • We can offer some ECMP benefits now, with primarily control-plane changes • We can incrementally modify the technology base over time to improve the quality of load spreading • O(MSTI) evolving to O(B-VID) for the amount of per-hop entropy available • The edge function is common • B-VID = f(entropy source) • OAM procedures are simplified and can work with the existing toolset
