
BUFFALO: Bloom Filter Forwarding Architecture for Large Organizations



  1. BUFFALO: Bloom Filter Forwarding Architecture for Large Organizations Minlan Yu Princeton University minlanyu@cs.princeton.edu Joint work with Alex Fabrikant, Jennifer Rexford

  2. Challenges in Edge Networks • Edge networks • Enterprise and data center networks • Large number of hosts • Tens or even hundreds of thousands of hosts • Dynamic hosts • Host mobility, virtual machine migration • Cost conscious • Equipment costs, network-management costs Need an easy-to-manage and scalable network

  3. Flat Addresses • Self-configuration • Hosts: MAC addresses • Switches: self-learning • Simple host mobility • Flat addresses are location-independent • But, poor scalability and performance • Flooding frames to unknown destinations • Inefficient delivery of frames over spanning tree • Large forwarding tables (one entry per address)

  4. Improving Performance and Scalability • Recent proposals on improving control plane • TRILL (IETF), SEATTLE (Sigcomm’08) • Scaling information dissemination • Distribute topology and host information intelligently rather than flooding • Improving routing performance • Use shortest path routing rather than spanning tree • Scaling with large forwarding tables • This talk: BUFFALO

  5. Large-scale SPAF Networks • SPAF: Shortest Path on Addresses that are Flat • Shortest paths: scalability and performance • Flat addresses: self-configuration and mobility • Data plane scalability challenge • Forwarding table growth (in # of hosts and switches) • Increasing link speed [Figure: example SPAF topology of switches (S) interconnecting hosts (H)]

  6. State of the Art • Hash table in SRAM to store forwarding table • Map MAC addresses to next hop • Hash collisions: extra delay • Overprovision to avoid running out of memory • Perform poorly when out of memory • Difficult and expensive to upgrade memory [Figure: hash table mapping MAC addresses to next hops]

  7. 0 0 0 1 0 0 0 1 0 1 0 1 0 0 0 Bloom Filters • Bloom filters in fast memory (SRAM) • A compact data structure for a set of elements • Calculate s hash functions to store element x • Easy to check membership • Reduce memory at the expense of false positives x Vm-1 V0 h1(x) h2(x) h3(x) hs(x)

  8. BUFFALO: Bloom Filter Forwarding • One Bloom filter (BF) per next hop • Store all addresses forwarded to that next hop [Figure: packet destination queried against the Bloom filters for Nexthop 1 … Nexthop T; a hit selects the next hop]
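
A sketch of this lookup structure, reusing the BloomFilter class above (the next-hop names are made up): one filter per next hop, and a lookup queries all filters and returns every hop that hits.

    class BuffaloFIB:
        def __init__(self, next_hops, m, s):
            # One Bloom filter per next hop
            self.bf = {hop: BloomFilter(m, s) for hop in next_hops}

        def add_route(self, mac, next_hop):
            self.bf[next_hop].add(mac)

        def lookup(self, mac):
            # More than one match means at least one false positive
            # (handled in slides 16-17)
            return [hop for hop, f in self.bf.items() if f.contains(mac)]

    fib = BuffaloFIB(["port1", "port2"], m=1 << 16, s=8)
    fib.add_route("00:11:22:33:44:55", "port1")
    print(fib.lookup("00:11:22:33:44:55"))   # ['port1'], plus any false positives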

  9. BUFFALO Challenges • How to optimize memory usage? • How to handle false positives? • How to handle routing changes?

  10. BUFFALO Challenges • Challenge 1: How to optimize memory usage?

  11. Optimize Memory Usage • Consider a fixed forwarding table • Goal: minimize the overall false-positive rate • Probability that one or more BFs have a false positive • Input: • Fast memory size M • Number of destinations per next hop • The maximum number of hash functions • Output: the size of each Bloom filter • Larger BFs for next hops with more destinations

  12. Constraints and Solution • Constraints • Memory constraint • Sum of all BF sizes ≤ fast memory size M • Bound on the number of hash functions • To bound CPU calculation time • Bloom filters share the same hash functions • Proved to be a convex optimization problem • An optimal solution exists • Solved by IPOPT (Interior Point OPTimizer)
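
A sketch of this optimization, assuming the standard approximation f_i ≈ (1 - e^(-s*n_i/m_i))^s for a filter of m_i bits holding n_i addresses, and substituting SciPy's SLSQP solver for IPOPT (the per-hop address counts below are made up):

    import numpy as np
    from scipy.optimize import minimize

    s = 8                                      # hash functions, shared by all filters
    M = 16e6                                   # fast memory budget in bits (~2 MB)
    n = np.array([50e3, 30e3, 20e3, 100e3])    # addresses per next hop (made up)

    def overall_fp(m):
        # Per-filter false-positive rate, then the probability that
        # one or more filters yield a false positive
        f = (1 - np.exp(-s * n / m)) ** s
        return 1 - np.prod(1 - f)

    x0 = np.full(len(n), M / len(n))           # start from an equal split
    res = minimize(overall_fp, x0, method="SLSQP",
                   bounds=[(1e3, M)] * len(n),
                   constraints=[{"type": "eq", "fun": lambda m: m.sum() - M}])
    print(res.x)   # larger filters for next hops with more destinations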

  13. Minimize False Positives • Forwarding table with 200K entries, 10 next hops • 8 hash functions • Optimization runs in about 50 msec

  14. Comparing with Hash Table • Save 65% memory with 0.1% false positives • More benefits over hash table • Performance degrades gracefully as tables grow • Handle worst-case workloads well

  15. BUFFALO Challenges • Challenge 2: How to handle false positives?

  16. False Positive Detection • Multiple matches in the Bloom filters • One of the matches is correct • The others are caused by false positives [Figure: packet destination matching multiple next-hop Bloom filters (Nexthop 1 … Nexthop T)]

  17. Handle False Positives • Design goals • Should not modify the packet • Never go to slow memory • Ensure timely packet delivery • BUFFALO solution • Exclude incoming interface • Avoid loops in “one false positive” case • Random selection from matching next hops • Guarantee reachability with multiple false positives
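
A sketch of this selection rule (a hypothetical helper, not the paper's code): drop the incoming interface from the matching hops, then choose uniformly at random.

    import random

    def select_next_hop(matches, incoming):
        # Excluding the incoming interface bounds the "one false
        # positive" case to at most a two-hop loop (slide 18)
        candidates = [hop for hop in matches if hop != incoming]
        if not candidates:
            return incoming        # dead end: send the packet back
        # With multiple false positives, random selection makes
        # forwarding a random walk that still reaches the destination
        return random.choice(candidates)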

  18. One False Positive • Most common case: one false positive • When there are multiple matching next hops • Avoid sending to incoming interface • Provably at most a two-hop loop • Stretch ≤ Latency(A→B) + Latency(B→A) [Figure: shortest path through A toward dst, with a false-positive link deflecting the packet to B]

  19. Multiple False Positives • Handle multiple false positives • Random selection from matching next hops • Random walk on the shortest path tree plus a few false-positive links • Eventually finds a way to the destination [Figure: shortest path tree for dst with a few false-positive links added]
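
A toy simulation of that random walk, on a made-up three-switch topology where s3 matches only because of a false positive: the walk may detour but always terminates at dst.

    import random

    # Matching next hops per switch: the true shortest-path next hop
    # toward dst, plus any false-positive matches
    matches = {
        "s1": ["s2", "s3"],    # s3 is a false positive
        "s2": ["dst"],
        "s3": ["s1"],          # dead end: the only way out is back
    }

    def deliver(src):
        hops, node, prev = 0, src, None
        while node != "dst":
            candidates = [h for h in matches[node] if h != prev] or [prev]
            prev, node = node, random.choice(candidates)
            hops += 1
        return hops

    print(deliver("s1"))   # varies per run, but always finite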

  20. Stretch Bound • Provable expected stretch bound • With k false positives, proved to be at most O(k²·3^(k/3)) • Proved by random walk theories • In practice, the stretch is much better than this bound • False positives are independent • Probability of k false positives drops exponentially in k • Tighter bounds in special topologies • For trees, expected stretch is polynomial in k

  21. Stretch in Campus Network • When fp = 0.001%, 99.9% of the packets have no stretch • When fp = 0.5% • 0.0003% of packets have a stretch equal to the shortest path length • 0.0002% of packets have a stretch 6 times the shortest path length

  22. BUFFALO Challenges • Challenge 3: How to handle routing changes?

  23. Problem of Bloom Filters • Routing changes • Add/delete entries in BFs • Problem of Bloom Filters (BF) • Do not allow deleting an element • Counting Bloom Filters (CBF) • Use a counter instead of a bit in the array • CBFs can handle adding/deleting elements • But, require more memory than BFs

  24. Update on Routing Change • Use CBF in slow memory • Assist BF to handle forwarding-table updates • Easy to add/delete a forwarding-table entry [Figure: a route deletion is applied to the CBF in slow memory and then propagated to the BF in fast memory]
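
A sketch of this CBF/BF pairing, extending the BloomFilter class above: the counting filter in slow memory supports deletion, and the fast-memory bit vector is simply "counter > 0".

    class CountingBloomFilter(BloomFilter):
        def add(self, x):
            for p in self._positions(x):
                self.bits[p] += 1      # counters instead of bits

        def delete(self, x):
            for p in self._positions(x):
                self.bits[p] -= 1

        def to_bf(self):
            # Project the counters down to the compact BF in fast memory
            bf = BloomFilter(self.m, self.s)
            bf.bits = [1 if c > 0 else 0 for c in self.bits]
            return bf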

  25. Occasionally Resize BF • Under significant routing changes • # of addresses in BFs changes significantly • Re-optimize BF sizes • Use CBF to assist resizing BF • Large CBF and small BF • Easy to expand BF size by contracting the CBF [Figure: expanding a BF to size 4 is hard; contracting a large CBF to size 4 is easy]
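
A sketch of the contraction trick, assuming hash positions are taken modulo the array size (as in the BloomFilter above) and the new size divides the old one: bit i of the small BF is set iff some counter j with j mod new_size = i is nonzero.

    def contract(cbf_counters, new_size):
        # Fold the counter array: (h mod old) mod new == h mod new
        # whenever new_size divides the old size
        assert len(cbf_counters) % new_size == 0
        bits = [0] * new_size
        for j, c in enumerate(cbf_counters):
            if c > 0:
                bits[j % new_size] = 1
        return bits

    print(contract([0, 2, 0, 0, 1, 0, 0, 3], 4))   # [1, 1, 0, 1]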

  26. BUFFALO Switch Architecture • Prototype implemented in kernel-level Click

  27. Prototype Evaluation • Environment • 3.0 GHz 64-bit Intel Xeon • 2 MB L2 data cache, used as fast memory size M • Forwarding table • 10 next hops • 200K entries

  28. Prototype Evaluation • Peak forwarding rate • 365 Kpps, 1.9 μs per packet • 10% faster than hash-based EtherSwitch • Performance with FIB updates • 10.7 μs to update a route • 0.47 s to reconstruct BFs based on CBFs • On another core without disrupting packet lookup • Swapping in new BFs is fast

  29. Conclusion • BUFFALO • Improves data plane scalability in SPAF networks • To appear in CoNEXT 2009 • Three properties: • Small, bounded memory requirement • Stretch increases gracefully as the forwarding table grows • Fast reaction to routing updates

  30. Ongoing work • More theoretical analysis of stretch • Stretch in weighted graphs • Stretch with ECMP (Equal-Cost Multi-Path) • Analysis of average stretch • Efficient access control in SPAF networks • More complicated than forwarding tables • Scaling access control using OpenFlow • Leverage properties of SPAF networks

  31. Thanks! • Questions?
