
BUFFALO: Bloom Filter Forwarding Architecture for Large Organizations



Presentation Transcript


  1. BUFFALO: Bloom Filter Forwarding Architecture for Large Organizations Minlan Yu Princeton University minlanyu@cs.princeton.edu Joint work with Alex Fabrikant, Jennifer Rexford

  2. Large-scale SPAF Networks • Large layer-2 network on flat addresses • Simple for configuration and management • Useful for enterprise and data center networks • SPAF: Shortest Path on Addresses that are Flat • Flat addresses (e.g., MAC addresses) • Shortest path routing • Link state, distance vector, spanning tree protocol [Figure: SPAF network topology of hosts (H) connected through switches (S)]

  3. Challenges • Scaling control plane • Topology and host information dissemination • Route computation • Recent practical solutions (TRILL, SEATTLE) • Scaling data plane • Forwarding table growth (in # of hosts and switches) • Increasing link speed • Remains a challenge

  4. Existing Data Plane Techniques • TCAM (Ternary content-addressable memory) • Expensive, power hungry • Hash table in SRAM • Crash or perform poorly when out of memory • Difficult and expensive to upgrade memory

  5. BUFFALO: Bloom Filter Forwarding • Bloom filters in fast memory (SRAM) • A compact data structure for a set of elements • Calculate s hash functions to store element x • Easy to check membership • Reduce memory at the expense of false positives [Figure: bit array V0 … Vm-1 with positions h1(x), h2(x), h3(x), …, hs(x) set to 1 for element x]
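The Bloom filter on this slide can be sketched in a few lines of Python. The salted SHA-256 hashing, class name, and bit-list representation are illustrative assumptions for this sketch, not the actual BUFFALO implementation (which lives in switch SRAM):

```python
import hashlib

class BloomFilter:
    """m-bit array; each stored element sets s hashed positions to 1."""
    def __init__(self, m, s):
        self.m, self.s = m, s
        self.bits = [0] * m

    def _positions(self, x):
        # Derive s positions from salted SHA-256 digests (illustrative choice).
        return [int(hashlib.sha256(f"{i}:{x}".encode()).hexdigest(), 16) % self.m
                for i in range(self.s)]

    def add(self, x):
        for p in self._positions(x):
            self.bits[p] = 1

    def __contains__(self, x):
        # Membership check: no false negatives, occasional false positives.
        return all(self.bits[p] for p in self._positions(x))
```

Every stored address is always found; a small fraction of other addresses may wrongly test positive, which is exactly the memory-for-accuracy trade-off the slide describes.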

  6. BUFFALO: Bloom Filter Forwarding • One Bloom filter (BF) per next hop • Store all the addresses that are forwarded to that next hop [Figure: a packet's destination is queried against the Bloom filters for next hops 1…T; the query hits the filter for next hop 2]
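The one-filter-per-next-hop design can be sketched as follows. The class and method names (`BuffaloFIB`, `install`, `lookup`) and the SHA-256 hashing are assumptions of this sketch; a real switch would use packed bit arrays in SRAM:

```python
import hashlib

def _positions(x, m, s):
    # s hash positions in [0, m) from salted SHA-256 digests (illustrative).
    return [int(hashlib.sha256(f"{i}:{x}".encode()).hexdigest(), 16) % m
            for i in range(s)]

class BuffaloFIB:
    """One Bloom filter (here a plain bit list) per next hop."""
    def __init__(self, next_hops, m=1024, s=4):
        self.m, self.s = m, s
        self.filters = {hop: [0] * m for hop in next_hops}

    def install(self, dst_mac, hop):
        # Add dst_mac to the filter of its outgoing next hop.
        for p in _positions(dst_mac, self.m, self.s):
            self.filters[hop][p] = 1

    def lookup(self, dst_mac):
        # Query every filter; more than one hit implies a false positive.
        return [hop for hop, bits in self.filters.items()
                if all(bits[p] for p in _positions(dst_mac, self.m, self.s))]
```

Note that `lookup` can return several next hops: that is precisely the false-positive case the later slides handle.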

  7. BUFFALO Challenges

  8. BUFFALO Challenges

  9. Optimize Memory Usage • Goal: Minimize overall false-positive rate • The probability that one BF has a false positive • Input: • Fast memory size M • Number of destinations per next hop • The maximum number of hash functions • Output: the size of each Bloom filter • Larger Bloom filter for the next hop that has more destinations

  10. Optimize Memory Usage (cont.) • Constraints • Memory constraint • Sum of all BF sizes <= fast memory size M • Bound on number of hash functions • To bound CPU calculation time • Bloom filters share the same hash functions • Proved to be a convex optimization problem • An optimal solution exists • Solved by IPOPT (Interior Point OPTimizer)
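The objective on these two slides can be sketched numerically. The standard false-positive estimate (1 − e^(−s·n/m))^s is a known formula; `proportional_split` is a naive stand-in of my own for the convex program that BUFFALO actually solves with IPOPT:

```python
import math

def fp_rate(n, m, s):
    """Standard Bloom filter false-positive estimate: (1 - e^(-s*n/m))^s."""
    return (1.0 - math.exp(-s * n / m)) ** s

def overall_fp(sizes, counts, s):
    """Objective from the slide: sum of per-next-hop false-positive rates."""
    return sum(fp_rate(n, m, s) for m, n in zip(sizes, counts))

def proportional_split(M, counts):
    """Naive allocation: each BF gets memory proportional to its number
    of destinations (satisfies the sum-of-sizes <= M constraint).
    BUFFALO instead solves the convex program exactly with IPOPT."""
    total = sum(counts)
    return [M * n / total for n in counts]
```

Even this crude allocation beats an equal split when the destination counts are skewed, which is the intuition behind "larger Bloom filter for the next hop that has more destinations."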

  11. Minimize False Positives • A FIB with 200K entries and 10 next hops • 8 hash functions

  12. Comparing with Hash Table • Save 65% memory with 0.1% false positives

  13. BUFFALO Challenges

  14. Handle False Positives • Design goals • Should not modify the packet • Never go to slow memory • Ensure timely packet delivery • BUFFALO solution • Exclude incoming interface • Avoid loops in one false positive case • Random selection from matching next hops • Guarantee reachability with multiple false positives
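The forwarding decision described by the two bullets above can be sketched as a single function. The function name is an assumption, and falling back to the full match set when only the incoming interface matches is my own sketch choice, not stated on the slide:

```python
import random

def choose_next_hop(matches, incoming, rng=random):
    """Pick a next hop among Bloom-filter matches:
    exclude the incoming interface (bounds the one-false-positive case
    to at most a 2-hop loop), then choose uniformly at random among the
    rest (the random walk that guarantees reachability under multiple
    false positives)."""
    # Fallback to all matches if exclusion empties the candidate set
    # (sketch assumption).
    candidates = [h for h in matches if h != incoming] or matches
    return rng.choice(candidates)
```

Because only fast-memory state is consulted, this keeps the "never go to slow memory" and "do not modify the packet" goals intact.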

  15. One False Positive • Most common case: one false positive • When there are multiple matching next hops • Avoid sending to the incoming interface • We prove that at most a 2-hop loop arises, with stretch <= l(AB)+l(BA) [Figure: a false positive causes a 2-hop loop between A and B on the shortest path to dst]

  16. Multiple False Positives • Handle multiple false positives • Random selection from matching next hops • Random walk on the shortest path tree plus a few false-positive links • Eventually finds a way to the destination [Figure: shortest path tree for dst with a false-positive link added]

  17. Multiple False Positives (cont.) • Provable stretch bound • The expected stretch with k false positives is bounded • Proved by random walk theory • The stretch bound is actually not bad in practice • False positives are independent • Different switches use different hash functions • k false positives for one packet are rare • The probability of k false positives decreases exponentially with k

  18. Stretch in Campus Network • When fp = 0.001%, 99.9% of the packets have no stretch • When fp = 0.5%, 0.0002% of packets have a stretch 6 times the shortest path length, and 0.0003% have a stretch equal to the shortest path length

  19. BUFFALO Challenges

  20. Handle Routing Dynamics • Counting Bloom filters (CBF) • CBF can handle adding/deleting elements • Take more memory than Bloom filters (BF) • BUFFALO solution: use CBF to assist BF • CBF in slow memory to handle FIB updates • Occasionally re-optimize BF size based on CBF [Figure: a BF is hard to expand, while a CBF is easy to contract to a smaller size (e.g., 4)]

  21. BUFFALO Switch Architecture • Prototype implemented in kernel-level Click

  22. Prototype Evaluation • Environment • 3.0 GHz 64-bit Intel Xeon • 2 MB L2 data cache, used as fast memory size M • Forwarding table • 10 next hops • 200K entries

  23. Prototype Evaluation • Peak forwarding rate • 365 Kpps, 1.9 μs per packet • 10% faster than the hash-based EtherSwitch solution • Performance with FIB updates • 10.7 μs to update a route • 0.47 s to reconstruct BFs based on CBFs • Runs on another core without disrupting packet lookup • Swapping in new BFs is fast

  24. Conclusion • Improves data plane scalability in SPAF networks • Complementary to recent Ethernet advances • Three nice properties of BUFFALO • Small memory requirement • Stretch increases gracefully as the forwarding table grows • Reacts quickly to routing updates

  25. Papers and Future Work • Paper in submission to CoNEXT • www.cs.princeton.edu/~minlanyu/writeup/conext09.pdf • Preliminary work in the WREN workshop • Use Bloom filters to reduce memory for enterprise edge routers • Future work • Reduce transient loops in OSPF using random walks on the shortest path tree

  26. Thanks! • Questions?
