BUFFALO: Bloom Filter Forwarding Architecture for Large Organizations

Minlan Yu

Princeton University

[email protected]

Joint work with Alex Fabrikant, Jennifer Rexford

Large-scale SPAF Networks
  • Large layer-2 network on flat addresses
    • Simple for configuration and management
    • Useful for enterprise and data center networks
  • SPAF: Shortest Path on Addresses that are Flat
    • Flat addresses (e.g., MAC addresses)
    • Shortest path routing
      • Link state, distance vector, spanning tree protocol

[Figure: a SPAF network of switches (S) interconnecting hosts (H)]

Challenges
  • Scaling control plane
    • Topology and host information dissemination
    • Route computation
    • Recent practical solutions (TRILL, SEATTLE)
  • Scaling data plane
    • Forwarding table growth (in # of hosts and switches)
    • Increasing link speed
    • Remains a challenge
Existing Data Plane Techniques
  • TCAM (Ternary content-addressable memory)
    • Expensive, power hungry
  • Hash table in SRAM
    • Crash or perform poorly when out of memory
    • Difficult and expensive to upgrade memory
BUFFALO: Bloom Filter Forwarding
  • Bloom filters in fast memory (SRAM)
    • A compact data structure for a set of elements
    • Calculate s hash functions to store element x
    • Easy to check membership
    • Reduce memory at the expense of false positives
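The structure described above can be sketched in a few lines of Python. This is a minimal, illustrative Bloom filter; the salted-SHA-256 hashing and all names are assumptions for the sketch, not the hardware-friendly hash functions a real switch would use:

```python
import hashlib

class BloomFilter:
    """Minimal Bloom filter: an m-bit array and s hash functions."""

    def __init__(self, m, s):
        self.m = m            # number of bits in the array
        self.s = s            # number of hash functions
        self.bits = [0] * m

    def _indices(self, x):
        # Derive s indices by salting one hash; a switch would instead
        # use s independent, hardware-friendly hash functions.
        for i in range(self.s):
            digest = hashlib.sha256(f"{i}:{x}".encode()).hexdigest()
            yield int(digest, 16) % self.m

    def add(self, x):
        # Storing x sets s bits.
        for i in self._indices(x):
            self.bits[i] = 1

    def contains(self, x):
        # Membership check: all s bits set. May return a false
        # positive, but never a false negative.
        return all(self.bits[i] for i in self._indices(x))

bf = BloomFilter(m=1024, s=8)
bf.add("00:1a:2b:3c:4d:5e")              # e.g., a MAC address
print(bf.contains("00:1a:2b:3c:4d:5e"))  # True
```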

[Figure: element x is hashed by h1(x), h2(x), …, hs(x) into positions of the bit array V0 … Vm-1]

BUFFALO: Bloom Filter Forwarding
  • One Bloom filter (BF) per next hop
    • Store all the addresses that are forwarded to that next hop
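The per-next-hop arrangement can be sketched as below, reusing the same salted-hash trick; the function and variable names (`build_fib`, `lookup`, the `routes` layout) are made up for illustration:

```python
import hashlib

def bf_indices(x, m, s):
    # s salted hash positions in an m-bit array (illustrative hashing)
    return [int(hashlib.sha256(f"{i}:{x}".encode()).hexdigest(), 16) % m
            for i in range(s)]

def build_fib(routes, m=4096, s=8):
    # routes: {next hop: [addresses forwarded to it]} -> {next hop: bit array}
    fib = {}
    for nh, addrs in routes.items():
        bits = [0] * m
        for a in addrs:
            for i in bf_indices(a, m, s):
                bits[i] = 1
        fib[nh] = bits
    return fib

def lookup(fib, dst, m=4096, s=8):
    # Query every per-next-hop filter; false positives can make
    # more than one filter match.
    return [nh for nh, bits in fib.items()
            if all(bits[i] for i in bf_indices(dst, m, s))]

fib = build_fib({1: ["h1", "h2"], 2: ["h3"]})
print(lookup(fib, "h3"))  # [2], barring a (very unlikely) false positive
```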

[Figure: an incoming packet queries the Bloom filters for Nexthop 1 … Nexthop T; the filter that hits selects the outgoing interface]

Optimize Memory Usage
  • Goal: Minimize overall false-positive rate
    • The probability that one BF has a false positive
  • Input:
    • Fast memory size M
    • Number of destinations per next hop
    • The maximum number of hash functions
  • Output: the size of each Bloom filter
    • Larger Bloom filter for the next hop that has more destinations
Optimize Memory Usage (cont.)
  • Constraints
    • Memory constraint
      • Sum of all BF sizes <= fast memory size M
    • Bound on number of hash functions
      • To bound CPU calculation time
      • Bloom filters share the same hash functions
  • Proved to be a convex optimization problem
    • An optimal solution exists
    • Solved by IPOPT (Interior Point OPTimizer)
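The effect of sizing can be illustrated numerically with the standard false-positive estimate (1 − e^(−sn/m))^s. The proportional allocation below is only a stand-in for the true convex-optimal solution the slide describes, and all the numbers are toy values:

```python
import math

def fp_rate(n, m, s):
    # Standard estimate for a Bloom filter with n elements, m bits,
    # and s hash functions: (1 - e^(-s*n/m))^s
    return (1 - math.exp(-s * n / m)) ** s

def overall_fp(dest_counts, sizes, s):
    # Overall false-positive rate, approximated as the sum of the
    # per-next-hop rates (a good approximation when each rate is small).
    return sum(fp_rate(n, m, s) for n, m in zip(dest_counts, sizes))

# Toy instance: 3 next hops with skewed destination counts, M bits total.
dest_counts = [150_000, 40_000, 10_000]
M, s = 2_000_000, 8

equal = [M // 3] * 3
proportional = [M * n // sum(dest_counts) for n in dest_counts]

print(overall_fp(dest_counts, equal, s))         # ~0.24
print(overall_fp(dest_counts, proportional, s))  # ~0.025: bigger BFs where there are more destinations
```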
Minimize False Positives
  • A FIB with 200K entries, 10 next hops
  • 8 hash functions
Comparing with Hash Table
  • Saves 65% of memory at a 0.1% false-positive rate


Handle False Positives
  • Design goals
    • Should not modify the packet
    • Never go to slow memory
    • Ensure timely packet delivery
  • BUFFALO solution
    • Exclude incoming interface
      • Avoid loops in one false positive case
    • Random selection from matching next hops
      • Guarantee reachability with multiple false positives
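The two rules above might look like the sketch below; the real data plane operates on switch interfaces rather than Python lists, and `select_next_hop` is a made-up name:

```python
import random

def select_next_hop(matching, incoming):
    # Rule 1: exclude the interface the packet arrived on. This
    # avoids the loop in the common one-false-positive case.
    candidates = [nh for nh in matching if nh != incoming]
    if not candidates:
        # Only the incoming interface matched; send the packet back.
        candidates = list(matching)
    # Rule 2: random selection among the remaining matches. With
    # multiple false positives the packet random-walks and still
    # reaches the destination.
    return random.choice(candidates)

print(select_next_hop([1, 2, 3], incoming=2))  # 1 or 3, never 2
```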
One False Positive
  • Most common case: one false positive
    • When there are multiple matching next hops
    • Avoid sending to incoming interface
  • We prove that the worst case is a single 2-hop loop, with stretch <= l(A,B) + l(B,A)

[Figure: a false positive at switch A sends the packet to neighbor B; B returns it to A, which then forwards it along the shortest path to dst]

Multiple False Positives
  • Handle multiple false positives
    • Random selection from matching next hops
    • Random walk on shortest path tree plus a few false positive links
    • Eventually finds a path to the destination

[Figure: the shortest-path tree for dst plus a few false-positive links; the packet performs a random walk over this graph]

Multiple False Positives (cont.)
  • Provable stretch bound
    • Expected stretch with k false positives is provably bounded
    • Proved using random-walk theory
  • Stretch bound actually not bad
    • False positives are independent
      • Different switches use different hash functions
    • k false positives for one packet are rare
    • Probability of k false positives decreases exponentially with k
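Since the slide models per-BF false positives as independent, the chance of k simultaneous false positives is binomial; the quick check below illustrates the decay (the values of `T` and `f` are made-up examples):

```python
from math import comb

def prob_k_false_positives(T, f, k):
    # T independent Bloom filters, each giving a false positive with
    # rate f: P[exactly k of them match spuriously] is binomial.
    return comb(T, k) * f**k * (1 - f)**(T - k)

T, f = 9, 0.001   # e.g., 9 "wrong" next-hop filters, 0.1% per-BF rate
for k in range(4):
    # Each extra false positive multiplies the probability by
    # roughly f, so the tail decays geometrically in k.
    print(k, prob_k_false_positives(T, f, k))
```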
Stretch in Campus Network

  • When fp = 0.001%, 99.9% of packets have no stretch
  • When fp = 0.5%, 0.0002% of packets have a stretch 6x the shortest-path length, and 0.0003% have a stretch equal to the shortest-path length

Handle Routing Dynamics
  • Counting Bloom filters (CBF)
    • CBF can handle adding/deleting elements
    • Take more memory than Bloom filters (BF)
  • BUFFALO solution: use CBF to assist BF
    • CBF in slow memory to handle FIB updates
    • Occasionally re-optimize BF size based on CBF
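A sketch of the CBF-assisting-BF arrangement. The fold-by-modulo contraction shown here assumes the new BF size divides the CBF size; names and sizes are illustrative, not the paper's implementation:

```python
import hashlib

def indices(x, m, s):
    # s salted hash positions in an m-slot array (illustrative hashing)
    return [int(hashlib.sha256(f"{i}:{x}".encode()).hexdigest(), 16) % m
            for i in range(s)]

class CountingBF:
    """Counting Bloom filter (slow memory): counters allow add and delete."""

    def __init__(self, m, s):
        self.m, self.s = m, s
        self.counts = [0] * m

    def add(self, x):
        for i in indices(x, self.m, self.s):
            self.counts[i] += 1

    def delete(self, x):
        for i in indices(x, self.m, self.s):
            self.counts[i] -= 1

    def contract(self, m_small):
        # Fold the CBF into a smaller plain BF: bit i is set if any
        # counter j with j % m_small == i is nonzero. Queries agree
        # because (h % m) % m_small == h % m_small when m_small divides m.
        assert self.m % m_small == 0
        bits = [0] * m_small
        for j, c in enumerate(self.counts):
            if c > 0:
                bits[j % m_small] = 1
        return bits

cbf = CountingBF(m=64, s=4)
cbf.add("h1")
cbf.add("h2")
cbf.delete("h2")        # FIB update applied to the CBF in slow memory
bf = cbf.contract(16)   # rebuild the fast-memory BF at a new, smaller size
print(all(bf[i] for i in indices("h1", 16, 4)))  # True: h1 still matches
```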

[Figure: a BF is hard to expand, but a CBF is easy to contract, e.g., to size 4]

BUFFALO Switch Architecture
  • Prototype implemented in kernel-level Click
Prototype Evaluation
  • Environment
    • 3.0 GHz 64-bit Intel Xeon
    • 2 MB L2 data cache, used as fast memory size M
  • Forwarding table
    • 10 next hops
    • 200K entries
Prototype Evaluation (cont.)
  • Peak forwarding rate
    • 365 Kpps, 1.9 μs per packet
    • 10% faster than hash-based solution EtherSwitch
  • Performance with FIB updates
    • 10.7 μs to update a route
    • 0.47 s to reconstruct BFs based on CBFs
      • On another core without disrupting packet lookup
      • Swapping in the new BFs is fast
Conclusion
  • Improve data plane scalability in SPAF
    • Complementary to recent Ethernet advances
  • Three nice properties of BUFFALO
    • Small memory requirement
    • Stretch grows gracefully as the forwarding table grows
    • React quickly to routing updates
Papers and Future Work
  • Paper in submission to CoNEXT
    • www.cs.princeton.edu/~minlanyu/writeup/conext09.pdf
  • Preliminary work in WREN Workshop
    • Use Bloom filters to reduce memory for enterprise edge routers
  • Future work
    • Reduce transient loops in OSPF using random walks on the shortest-path tree
Thanks!
  • Questions?