
Packet Level Algorithms



Presentation Transcript


  1. Packet Level Algorithms Michael Mitzenmacher

  2. Goals of the Talk • Consider algorithms/data structures for measurement/monitoring schemes at the router level. • Focus on packets, flows. • Emphasis on my recent work, future plans. • “Applied theory”. • Less on experiments, more on design/analysis of data structures for applications. • Hash-based schemes • Bloom filters and variants.

  3. Vision • Three-pronged research agenda. • Low: Efficient hardware implementations of relevant algorithms and data structures. • Medium: New, improved data structures and algorithms for old and new applications. • High: Distributed infrastructure supporting monitoring and measurement schemes.

  4. Background / Building Blocks • Multiple-choice hashing • Bloom filters

  5. Multiple Choices: d-left Hashing • Split hash table into d equal subtables. • To insert, choose a bucket uniformly for each subtable. • Place item in a cell in the least loaded bucket, breaking ties to the left.
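
A minimal sketch of d-left insertion as described above (class and parameter names are illustrative, not from the talk; the per-subtable hash functions are simulated with Python's built-in hash plus salts):

```python
import random

class DLeftHashTable:
    def __init__(self, d=4, buckets_per_subtable=1024, seed=0):
        self.d = d
        self.m = buckets_per_subtable
        rng = random.Random(seed)
        self.salts = [rng.getrandbits(64) for _ in range(d)]
        self.subtables = [[[] for _ in range(self.m)] for _ in range(d)]

    def _bucket(self, item, i):
        # One (simulated) independent hash function per subtable.
        return hash((self.salts[i], item)) % self.m

    def insert(self, item):
        # One candidate bucket per subtable; pick the least loaded, ties to the left.
        loads = [(len(self.subtables[i][self._bucket(item, i)]), i) for i in range(self.d)]
        _, i = min(loads)   # min on (load, index) breaks ties toward the leftmost subtable
        self.subtables[i][self._bucket(item, i)].append(item)

    def lookup(self, item):
        return any(item in self.subtables[i][self._bucket(item, i)] for i in range(self.d))
```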

  6. Properties of d-left Hashing • Analyzable using both combinatorial methods and differential equations. • Maximum load very small: O(log log n). • Differential equations give very, very accurate performance estimates. • Maximum load is extremely close to average load for small values of d.

  7. Example of d-left hashing • Consider 3-left performance, at average load 4 and at average load 6.4. (Load distribution tables omitted.)

  8. Example of d-left hashing • Consider 4-left performance with average load of 6, using differential equations, comparing insertions only with alternating insertions/deletions in the steady state. (Table omitted.)

  9. Review: Bloom Filters • Given a set S = {x1, x2, x3, …, xn} on a universe U, want to answer queries of the form: is y in S? • Bloom filter provides an answer in • “Constant” time (time to hash). • Small amount of space. • But with some probability of being wrong. • Alternative to hashing with interesting tradeoffs.

  10. Bloom Filters • Start with an m-bit array B, filled with 0s. • Hash each item xj in S k times; if Hi(xj) = a, set B[a] = 1. • To check if y is in S, check B at Hi(y) for each i: all k values must be 1. • Possible to have a false positive: all k values are 1, but y is not in S. • Parameters: n items, m = cn bits, k hash functions. (Bit-array illustrations omitted.)
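
A minimal sketch of the filter as described on this slide (the k hash functions are simulated by salting SHA-256; names are illustrative):

```python
import hashlib

class BloomFilter:
    def __init__(self, m, k):
        self.m, self.k = m, k
        self.bits = bytearray(m)   # one byte per bit, for clarity rather than compactness

    def _positions(self, item):
        # k hash functions H_1..H_k, simulated by salting a single strong hash.
        for i in range(self.k):
            digest = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(digest, "big") % self.m

    def add(self, item):
        for a in self._positions(item):
            self.bits[a] = 1

    def __contains__(self, item):
        # All k positions must be 1; false positives possible, false negatives not.
        return all(self.bits[a] for a in self._positions(item))
```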

  11. False Positive Probability • Pr(specific bit of filter is 0) is (1 - 1/m)^(kn) ≈ e^(-kn/m). • If r is the fraction of 0 bits in the filter, then the false positive probability is (1 - r)^k ≈ (1 - e^(-kn/m))^k. • Approximations valid as r is concentrated around E[r]. • Martingale argument suffices. • Find optimal at k = (ln 2)(m/n) by calculus. • So optimal false positive probability is about (0.6185)^(m/n).

  12. Example • m/n = 8 bits per item. • Optimal k = 8 ln 2 ≈ 5.55, giving a false positive probability of roughly 0.02. (Plot of false positive probability vs. number of hash functions omitted.)
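
A quick check of these numbers, using the standard approximation from the previous slide:

```python
from math import exp, log

c = 8                          # m/n: bits per item
k_opt = log(2) * c             # optimal number of hash functions: 8 ln 2 ≈ 5.55
fpp = (1 - exp(-k_opt / c)) ** k_opt   # = 2**(-k_opt), i.e. about (0.6185)**c
print(round(k_opt, 3), fpp, 0.6185 ** c)   # ~5.545, ~0.0214, ~0.0214
```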

  13. Handling Deletions • Bloom filters can handle insertions, but not deletions. • If deleting xi means resetting its 1s to 0s, then deleting xi will also “delete” any xj that shares one of those bits. (Illustration omitted.)

  14. Counting Bloom Filters • Start with an array of m counters, filled with 0s. • Hash each item xj in S k times; if Hi(xj) = a, add 1 to B[a]. • To delete xj, decrement the corresponding counters. • Can obtain a corresponding Bloom filter by reducing counters to 0/1. (Counter-array illustrations omitted.)
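
A minimal sketch of a counting Bloom filter per this slide (illustrative names; Python's built-in hash stands in for the k hash functions):

```python
class CountingBloomFilter:
    def __init__(self, m, k):
        self.m, self.k = m, k
        self.counts = [0] * m

    def _positions(self, item):
        return [hash((i, item)) % self.m for i in range(self.k)]

    def add(self, item):
        for a in self._positions(item):
            self.counts[a] += 1

    def delete(self, item):
        # Assumes the item was actually inserted; deleting a non-member corrupts the filter.
        for a in self._positions(item):
            self.counts[a] -= 1

    def __contains__(self, item):
        return all(self.counts[a] > 0 for a in self._positions(item))

    def to_bloom_filter_bits(self):
        # Reduce counters to 0/1 to recover an ordinary Bloom filter.
        return bytearray(1 if c > 0 else 0 for c in self.counts)
```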

  15. Counting Bloom Filters: Overflow • Must choose counters large enough to avoid overflow. • Poisson approximation suggests 4 bits/counter suffice. • Average counter load using k = (ln 2)(m/n) hash functions is ln 2. • Probability a counter has load at least 16 is at most (e ln 2 / 16)^16 ≈ 1.37 × 10^-15. • Failsafes possible. • We assume 4 bits/counter for comparisons.
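
A short calculation reproducing the overflow estimate (computing the exact Poisson tail rather than a closed-form bound):

```python
from math import exp, factorial, log

lam = log(2)   # expected counter value with k = (ln 2) m/n hash functions
p_overflow = 1 - sum(exp(-lam) * lam**j / factorial(j) for j in range(16))
print(p_overflow)   # roughly 7e-17 per counter, so 4-bit counters essentially never overflow
```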

  16. Bloomier Filters • Instead of set membership, keep an r-bit function value for each set element. • Correct value should be given for each set element. • Non-set elements should return NULL with high probability. • Mutable version: function values can change. • But the underlying set cannot. • First suggested in a paper by Chazelle, Kilian, Rubinfeld, and Tal.

  17. From Low to High • Low • Hash Tables for Hardware • New Bloom Filter/Counting Bloom Filter Constructions (Hardware Friendly) • Medium • Approximate Concurrent State Machines • Distance-Sensitive Bloom Filters • High • A Distributed Hashing Infrastructure

  18. Low Level : Better Hash Tables for Hardware • Joint work with Adam Kirsch. • Simple Summaries for Hashing with Choices. • The Power of One Move: Hashing Schemes for Hardware.

  19. Perfect Hashing Approach • Each element (Element 1 through Element 5) is mapped by a perfect hash function to its own cell, and each cell stores only that element's fingerprint. (Diagram omitted.)

  20. Near-Perfect Hash Functions • Perfect hash functions are challenging. • Require all the data up front – no insertions or deletions. • Hard to find efficiently in hardware. • In [BM96], we note that d-left hashing can give near-perfect hash functions. • Useful even with insertions, deletions. • Some loss in space efficiency.

  21. Near-Perfect Hash Functions via d-left Hashing • Maximum load equals 1 • Requires significant space to avoid all collisions, or some small fraction of spillovers. • Maximum load greater than 1 • Multiple buckets must be checked, and multiple cells in a bucket must be checked. • Not perfect in space usage. • In practice, 75% space usage is very easy. • In theory, can do even better.

  22. Hash Table Design : Example • Desired goals: • At most 1 item per bucket. • Minimize space. • And minimize number of hash functions. • Small amount of spillover possible. • We model as a constant fraction, e.g. 0.2%. • Can be placed in a content-addressable memory (CAM) if small enough.

  23. Basic d-left Scheme • For hash table holding up to n elements, with max load 1 per bucket, use 4 choices and 2n cells. • Spillover of approximately 0.002n elements into CAM.

  24. Improvements from Skew • For hash table holding up to n elements, with max load 1 per bucket, use 4 choices and 1.8n cells. • Subtable sizes 0.79n, 0.51n, 0.32n, 0.18n. • Spillover still approximately 0.002n elements into CAM. • Subtable sizes optimized using differential equations, black-box optimization.

  25. Summaries to Avoid Lookups • In hardware, d choices of location can be examined by parallelization. • Look at d memory banks in parallel. • But there’s still a cost: pin count. • Can we keep track of which hash function to use for each item, using a small summary? • Yes: use a Bloom-filter-like structure to track it. • Skew impacts summary performance; more skew is better. • Uses a small amount of on-chip memory. • Avoids multiple look-ups. • Special case of a Bloomier filter.
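
To make the idea concrete, here is one very simple on-chip summary. It is not the construction from the paper, just an illustration: a tiny Bloom filter per subtable records which items were placed there, so a lookup usually reads only one off-chip bank. Names and parameters are assumptions.

```python
class ChoiceSummary:
    """Toy on-chip summary: one small Bloom filter per subtable records which items
    were placed in that subtable, so a lookup usually touches a single off-chip bank."""

    def __init__(self, d, m, k=3):
        self.d, self.m, self.k = d, m, k
        self.bits = [bytearray(m) for _ in range(d)]

    def _positions(self, item):
        return [hash((j, item)) % self.m for j in range(self.k)]

    def record(self, item, subtable):
        # Called when the off-chip table places `item` in subtable `subtable`.
        for a in self._positions(item):
            self.bits[subtable][a] = 1

    def candidate_subtables(self, item):
        # Ideally exactly one subtable; false positives occasionally force extra reads.
        return [i for i in range(self.d)
                if all(self.bits[i][a] for a in self._positions(item))]
```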

  26. Hash Tables with Moves • Cuckoo Hashing (Pagh, Rodler) • Hashed items need not stay in their initial place. • With multiple choices, can move item to another choice, without affecting lookups. • As long as hash values can be recomputed. • When inserting, if all spots are filled, new item kicks out an old item, which looks for another spot, and might kick out another item, and so on.
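
A minimal sketch of cuckoo hashing with two choices and one item per cell (illustrative, not tuned; real implementations rehash or spill over when an insertion fails):

```python
import random

class CuckooHashTable:
    """Minimal cuckoo hashing sketch: 2 choices, 1 item per cell."""

    def __init__(self, size, max_kicks=500, seed=0):
        self.size, self.max_kicks = size, max_kicks
        self.rng = random.Random(seed)
        self.salts = [self.rng.getrandbits(64) for _ in range(2)]
        self.cells = [None] * size

    def _positions(self, item):
        return [hash((salt, item)) % self.size for salt in self.salts]

    def lookup(self, item):
        return any(self.cells[a] == item for a in self._positions(item))

    def insert(self, item):
        for _ in range(self.max_kicks):
            positions = self._positions(item)
            for a in positions:
                if self.cells[a] is None:
                    self.cells[a] = item
                    return True
            # Both choices occupied: evict one occupant and try to re-place it.
            a = self.rng.choice(positions)
            item, self.cells[a] = self.cells[a], item
        return False   # insertion failed; in practice rehash or spill to a CAM
```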

  27. Benefits and Problems of Moves • Benefit: much better space utilization. • Multiple choices, multiple items per bucket, can achieve 90+% with no spillover. • Drawback: complexity. • Moves required can grow like log n. • Constant on average. • Bounded maximum time per operation important in many settings. • Moves expensive. • Table usually in slow memory.

  28. Question : Power of One Move • How much leverage do we get by just allowing one move? • One move likely to be possible in practice. • Simple for hardware. • Analysis possible via differential equations. • Cuckoo hard to analyze. • Downside : some spillover into CAM.

  29. Comparison, Insertions Only • 4 schemes • No moves. • Conservative: Place item if possible. If not, try to move the earliest item that has not already replaced another item to make room. Otherwise spill over. • Second chance: Read all possible locations, and for each location with an item, check if it can be placed in the next subtable. Place the new item as early as possible, moving at most 1 item by 1 level. • Second chance, with 2 per bucket. • Target of 0.2% spillover. • Balanced (all subtables the same) and skewed compared. • All done by differential equation analysis (and simulations match).

  30. Results of Moves : Insertions Only

  31. Conclusions, Moves • Even one move saves significant space. • More aggressive schemes, considering all possible single moves, save even more. (Harder to analyze, more hardware resources.) • Importance of allowing small amounts of spillover in practical settings.

  32. From Low to High • Low • Hash Tables for Hardware • New Bloom Filter/Counting Bloom Filter Constructions (Hardware Friendly) • Medium • Approximate Concurrent State Machines • Distance-Sensitive Bloom Filters • High • A Distributed Hashing Infrastructure

  33. Low-Medium: New Bloom Filters / Counting Bloom Filters • Joint work with Flavio Bonomi, Rina Panigrahy, Sumeet Singh, George Varghese.

  34. A New Approach to Bloom Filters • Folklore Bloom filter construction. • Recall: Given a set S = {x1, x2, x3, …, xn} on a universe U, want to answer membership queries. • Method: Find an n-cell perfect hash function for S. • Maps the set of n elements to n cells in a 1-1 manner. • Then keep a ⌈log2(1/ε)⌉-bit fingerprint of the item in each cell. Lookups have false positive probability < ε. • Advantage: each bit/item reduces false positives by a factor of 1/2, vs. 2^(-ln 2) ≈ 0.6185 for a standard Bloom filter. • Negatives: • Perfect hash functions non-trivial to find. • Cannot handle on-line insertions.
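
A sketch of the folklore construction, assuming we are simply handed a perfect hash function phf mapping S one-to-one onto {0, …, n-1}; finding phf is the hard part and is not shown here. Function names and the fingerprint length f_bits are illustrative.

```python
import hashlib

def _fingerprint(x, f_bits):
    h = int(hashlib.sha256(str(x).encode()).hexdigest(), 16)
    return h & ((1 << f_bits) - 1)

def build_fingerprint_table(S, phf, f_bits):
    # phf must map the n elements of S one-to-one onto {0, ..., n-1}.
    table = [None] * len(S)
    for x in S:
        table[phf(x)] = _fingerprint(x, f_bits)
    return table

def query(table, phf, f_bits, y):
    # For y not in S, phf(y) points at some arbitrary cell; the stored fingerprint
    # matches with probability about 2**(-f_bits) -- the false positive probability.
    cell = phf(y) % len(table)
    return table[cell] == _fingerprint(y, f_bits)
```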

  35. Near-Perfect Hash Functions • In [BM96], we note that d-left hashing can give near-perfect hash functions. • Useful even with deletions. • Main differences • Multiple buckets must be checked, and multiple cells in a bucket must be checked. • Not perfect in space usage. • In practice, 75% space usage is very easy. • In theory, can do even better.

  36. First Design: Just d-left Hashing • For a Bloom filter with n elements, use a 3-left hash table with average load 4, 60 bits per bucket divided into 6 fixed-size fingerprints of 10 bits. • Overflow rare, can be ignored. • False positive rate of roughly 3 · 4 · 2^-10 ≈ 0.012 (three buckets checked, average load 4, 10-bit fingerprints). • Vs. 0.000744 for a standard Bloom filter. • Problem: Too much empty, wasted space. • Other parametrizations similarly impractical. • Need to avoid wasting space.

  37. Just Hashing: Picture • A bucket with six fixed 10-bit fingerprint slots, four holding fingerprints and two empty. (Diagram omitted.)

  38. Key: Dynamic Bit Reassignment • Use 64-bit buckets: 4 bit counter, 60 bits divided equally among actual fingerprints. • Fingerprint size depends on bucket load. • False positive rate of 0.0008937 • Vs. 0.0004587 for a standard Bloom filter. • DBR: Within a factor of 2. • And would be better for larger buckets. • But 64 bits is a nice bucket size for hardware. • Can we remove the cost of the counter?

  39. DBR: Picture • A 64-bit bucket: a 4-bit count plus 60 bits divided evenly among the fingerprints currently stored, so fingerprint length shrinks as the load grows. (Diagram omitted.)

  40. Semi-Sorting • Fingerprints in bucket can be in any order. • Semi-sorting: keep sorted by first bit. • Use counter to track #fingerprints and #fingerprints starting with 0. • First bit can then be erased, implicitly given by counter info. • Can extend to first two bits (or more) but added complexity.
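
A sketch of semi-sorting for one bucket, under the assumption that fingerprints are stored as integers of f_bits bits (function names are illustrative):

```python
def semi_sort_encode(fingerprints, f_bits):
    # Order within a bucket is irrelevant, so group fingerprints by their first bit,
    # remember how many start with 0, and store only the remaining f_bits - 1 bits each.
    zeros = sorted(fp for fp in fingerprints if (fp >> (f_bits - 1)) == 0)
    ones = sorted(fp for fp in fingerprints if (fp >> (f_bits - 1)) == 1)
    mask = (1 << (f_bits - 1)) - 1
    stored = [fp & mask for fp in zeros + ones]       # leading bit dropped: 1 bit saved each
    return len(fingerprints), len(zeros), stored      # (count, #starting-with-0, payload)

def semi_sort_decode(count, zero_count, stored, f_bits):
    assert count == len(stored)
    return [((0 if i < zero_count else 1) << (f_bits - 1)) | fp
            for i, fp in enumerate(stored)]
```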

  41. DBR + Semi-sorting: Picture • The same bucket, with the count now recording both the number of fingerprints and the number whose (dropped) first bit is 0, e.g. “4, 2”. (Diagram omitted.)

  42. DBR + Semi-Sorting Results • Using 64-bit buckets, 4 bit counter. • Semi-sorting on loads 4 and 5. • Counter only handles up to load 6. • False positive rate of 0.0004477 • Vs. 0.0004587 for a standard Bloom filter. • This is the tradeoff point. • Using 128-bit buckets, 8 bit counter, 3-left hash table with average load 6.4. • Semi-sorting all loads: fpr of 0.00004529 • 2 bit semi-sorting for loads 6/7: fpr of 0.00002425 • Vs. 0.00006713 for a standard Bloom filter.

  43. Additional Issues • Further possible improvements • Group buckets to form super-buckets that share bits. • Conjecture: Most further improvements are not worth it in terms of implementation cost. • Moving items for better balance? • Underloaded case. • New structure maintains good performance.

  44. Improvements to Counting Bloom Filter • Similar ideas can be used to develop an improved Counting Bloom Filter structure. • Same idea: use fingerprints and a d-left hash table. • Counting Bloom Filters waste lots of space. • Lots of bits to record counts of 0. • Our structure beats standard CBFs easily, by factors of 2 or more in space. • Even without dynamic bit reassignment.

  45. Deletion Problem • Suppose x and y have the same fingerprint z. Insert x: z is stored in one of x’s buckets. Insert y: z is stored in one of y’s buckets, which may also be one of x’s choices. Delete x: two of x’s candidate buckets now hold z, so which copy should be removed? (Diagram omitted.)

  46. Deletion Problem • When you delete, if you see the same fingerprint at two of the location choices, you don’t know which is the right one. • Take both out: false negatives. • Take neither out: false positives/eventual overflow.

  47. Handling the Deletion Problem • Want to make sure the fingerprint for an element cannot appear in two locations. • Solution: make sure it can’t happen. • Trick: use (pseudo)random permutations instead of hash functions.

  48. Two Stages • Suppose we have d subtables, each with 2^b buckets, and we want f-bit fingerprints. • Stage 1: Hash element x into b+f bits using a “strong” hash function H(x). • Stage 2: Apply d permutations taking {0, …, 2^(b+f) - 1} to {0, …, 2^(b+f) - 1}. • Bucket Bi and fingerprint Fi for the ith subtable are given by the ith permutation. • Also, Bi and Fi completely determine H(x).
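
A sketch of the two-stage idea. The parameter values and the particular permutations are assumptions: here each permutation is a toy two-round unbalanced Feistel network, which is invertible, so bucket and fingerprint together determine H(x).

```python
import hashlib

B_BITS, F_BITS = 10, 14   # assumed example parameters: 2**10 buckets, 14-bit fingerprints

def H(x):
    # Stage 1: "strong" hash of x into b + f bits.
    return int(hashlib.sha256(str(x).encode()).hexdigest(), 16) % (1 << (B_BITS + F_BITS))

def _round(value, key):
    return int(hashlib.sha256(f"{key}:{value}".encode()).hexdigest(), 16)

def permute(v, i):
    # Stage 2: the ith permutation of {0, ..., 2**(b+f) - 1}, built from a toy two-round
    # unbalanced Feistel network; each round is invertible, so this is a bijection.
    hi, lo = v >> F_BITS, v & ((1 << F_BITS) - 1)
    hi ^= _round(lo, (i, 0)) & ((1 << B_BITS) - 1)
    lo ^= _round(hi, (i, 1)) & ((1 << F_BITS) - 1)
    return (hi << F_BITS) | lo

def bucket_and_fingerprint(x, i):
    # Because permute(., i) is invertible, (B_i, F_i) together determine H(x).
    v = permute(H(x), i)
    return v >> F_BITS, v & ((1 << F_BITS) - 1)
```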

  49. Handling the Deletion Problem • Lemma: if x and y yield the same fingerprint in the same bucket, then H(x) = H(y). • Proof: because of the permutation setup, fingerprint and bucket determine H(x). • Each cell has a small counter to handle the case where two elements have the same hash, H(x) = H(y). • Note they would then match in every bucket/fingerprint. • 2-bit counters generally suffice. • Deletion problem avoided. • Can’t have two copies of x’s fingerprint in the table at the same time; duplicates are handled by the counter.

  50. A Problem for Analysis • Permutations mean this is no longer “pure” d-left hashing. • Dependence among the choices. • Standard analysis no longer applies. • Some justification: • Balanced Allocation on Graphs (Kenthapadi and Panigrahy, SODA 2006). • Differential equations. • Justified experimentally.
