Hashing and Packet Level Algorithms

Hashing and Packet Level Algorithms

Michael Mitzenmacher


Goals of the Talk

  • Consider algorithms/data structures for measurement/monitoring schemes at the router level.

    • Focus on packets, flows.

  • Emphasis on my recent work, future plans.

    • “Applied theory”.

      • Less on experiments, more on design/analysis of data structures for applications.

    • Hash-based schemes

      • Bloom filters and variants.


Vision

  • Three-pronged research agenda:

  • Low: Efficient hardware implementations of relevant algorithms and data structures.

  • Medium: New, improved data structures and algorithms for old and new applications.

  • High: Distributed infrastructure supporting monitoring and measurement schemes.


Background / Building Blocks

  • Multiple-choice hashing

  • Bloom filters


Multiple Choices: d-left Hashing

  • Split hash table into d equal subtables.

  • To insert, choose a bucket uniformly for each subtable.

  • Place item in a cell in the least loaded bucket, breaking ties to the left.
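A minimal sketch of this insertion rule in Python (the class name, table sizes, and use of Python's built-in hash are illustrative assumptions, not from the talk):

```python
# Sketch of d-left hashing: d subtables, one bucket choice per subtable,
# item goes to the least-loaded choice, ties broken toward the left.
class DLeftHashTable:
    def __init__(self, d=4, buckets_per_table=1024):
        self.d = d
        self.b = buckets_per_table
        self.tables = [[[] for _ in range(self.b)] for _ in range(d)]

    def _bucket(self, item, i):
        # One hash per subtable; hash() stands in for a real hash function.
        return hash((i, item)) % self.b

    def insert(self, item):
        # min() over (load, subtable index) pairs breaks ties toward the left.
        _, i = min((len(self.tables[i][self._bucket(item, i)]), i) for i in range(self.d))
        self.tables[i][self._bucket(item, i)].append(item)

    def lookup(self, item):
        return any(item in self.tables[i][self._bucket(item, i)] for i in range(self.d))
```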


Properties of d-left Hashing

  • Analyzable using both combinatorial methods and differential equations.

    • Maximum load very small: O(log log n).

    • Differential equations give very, very accurate performance estimates.

  • Maximum load is extremely close to average load for small values of d.


Example of d-left hashing

  • Consider 3-left performance.

[Tables: bucket-load distributions for 3-left hashing at average load 4 and average load 6.4]


Example of d-left hashing

  • Consider 4-left performance with average load of 6, using differential equations.

[Table: load distributions for insertions only vs. alternating insertions/deletions in steady state]


Review: Bloom Filters

  • Given a set S = {x1, x2, x3, …, xn} on a universe U, want to answer queries of the form: Is y in S?

  • Bloom filter provides an answer in

    • “Constant” time (time to hash).

    • Small amount of space.

    • But with some probability of being wrong.

  • Alternative to hashing with interesting tradeoffs.


Bloom Filters

[Figure: an m-bit array B, initially all 0s; after inserting the items of S, the bits at the hashed positions are set to 1]

Start with an m-bit array B, filled with 0s.

Hash each item x_j in S k times. If H_i(x_j) = a, set B[a] = 1.

To check if y is in S, check B at H_i(y) for each i. All k values must be 1.

Possible to have a false positive: all k values are 1, but y is not in S.

n items, m = cn bits, k hash functions
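A minimal Python sketch of the construction just described; the double-hashing trick below stands in for k independent hash functions and is an implementation assumption, not part of the slides:

```python
import hashlib

class BloomFilter:
    def __init__(self, m, k):
        self.m, self.k = m, k
        self.bits = bytearray(m)  # one byte per bit, for simplicity

    def _positions(self, item):
        # Derive k indices from one SHA-256 digest via h1 + i*h2 (double hashing).
        d = hashlib.sha256(str(item).encode()).digest()
        h1 = int.from_bytes(d[:8], "big")
        h2 = int.from_bytes(d[8:16], "big") | 1
        return [(h1 + i * h2) % self.m for i in range(self.k)]

    def insert(self, item):
        for p in self._positions(item):
            self.bits[p] = 1

    def query(self, item):
        # May return a false positive, never a false negative.
        return all(self.bits[p] for p in self._positions(item))
```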


False Positive Probability

  • Pr(specific bit of filter is 0) is (1 - 1/m)^(kn) ≈ e^(-kn/m).

  • If r is the fraction of 0 bits in the filter, then the false positive probability is (1 - r)^k ≈ (1 - e^(-kn/m))^k = (1 - e^(-k/c))^k.

  • Approximations valid as r is concentrated around E[r].

    • Martingale argument suffices.

  • Find optimal at k = (ln 2) m/n by calculus.

    • So optimal false positive probability is about (0.6185)^(m/n).

n items, m = cn bits, k hash functions


Example

m/n = c = 8

Opt k = 8 ln 2 ≈ 5.55; rounding to k = 5 or 6 gives a false positive probability of roughly 0.022, close to the optimum of about (0.6185)^8 ≈ 0.021.

n items, m = cn bits, k hash functions
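A quick numeric check of these formulas (values are approximate):

```python
import math

c = 8                                        # bits per item, m/n
print(math.log(2) * c)                       # optimal k, ~5.55
for k in (5, 6):
    print(k, (1 - math.exp(-k / c)) ** k)    # ~0.0217 and ~0.0216
print(0.6185 ** c)                           # ~0.0214, the optimal rate
```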


Handling Deletions

  • Bloom filters can handle insertions, but not deletions.

  • If deleting x_i means resetting 1s to 0s, then deleting x_i will “delete” x_j.

[Figure: x_i and x_j hash to overlapping bit positions; resetting the shared 1s to 0 to delete x_i would also “delete” x_j]


Counting Bloom Filters

[Figure: an array of counters, initially all 0; insertions increment and deletions decrement the counters at the hashed positions]

Start with an array of m counters, filled with 0s.

Hash each item x_j in S k times. If H_i(x_j) = a, add 1 to B[a].

To delete x_j, decrement the corresponding counters.

Can obtain a corresponding Bloom filter by reducing to 0/1.
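A matching counting-Bloom-filter sketch (counters are plain Python ints here; a hardware version would use 4-bit counters, as discussed on the next slide):

```python
import hashlib

class CountingBloomFilter:
    def __init__(self, m, k):
        self.m, self.k = m, k
        self.counts = [0] * m

    def _positions(self, item):
        d = hashlib.sha256(str(item).encode()).digest()
        h1 = int.from_bytes(d[:8], "big")
        h2 = int.from_bytes(d[8:16], "big") | 1
        return [(h1 + i * h2) % self.m for i in range(self.k)]

    def insert(self, item):
        for p in self._positions(item):
            self.counts[p] += 1

    def delete(self, item):
        # Only delete items actually inserted, or counters become inconsistent.
        for p in self._positions(item):
            self.counts[p] -= 1

    def query(self, item):
        return all(self.counts[p] > 0 for p in self._positions(item))
```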


Counting Bloom Filters: Overflow

  • Must choose counters large enough to avoid overflow.

  • Poisson approximation suggests 4 bits/counter.

    • Average load using k = (ln 2)m/n counters is ln 2.

  • Probability a counter has load at least 16: Pr(Poisson(ln 2) ≥ 16) ≈ e^(-ln 2) (ln 2)^16 / 16! ≈ 6.8 × 10^(-17).

  • Failsafes possible.

  • We assume 4 bits/counter for comparisons.
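The Poisson tail above can be checked directly (the cutoff of 16 corresponds to a 4-bit counter overflowing):

```python
import math

lam = math.log(2)   # average counter load
tail = sum(math.exp(-lam) * lam**j / math.factorial(j) for j in range(16, 40))
print(tail)         # ~6.8e-17 per counter
```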


Bloomier Filters

  • Instead of set membership, keep an r-bit function value for each set element.

    • Correct value should be given for each set element.

    • Non-set elements should return NULL with high probability.

  • Mutable version: function values can change.

    • But underlying set can not.

  • First suggested in paper by Chazelle, Kilian, Rubinfeld, Tal.


From Low to High

  • Low

    • Hash Tables for Hardware

    • New Bloom Filter/Counting Bloom Filter Constructions (Hardware Friendly)

  • Medium

    • Approximate Concurrent State Machines

  • High

    • A Distributed Hashing Infrastructure

    • Why do Weak Hash Functions Work So Well?


Low Level : Better Hash Tables for Hardware

  • Joint work with Adam Kirsch.

    • Simple Summaries for Hashing with Choices.

    • The Power of One Move: Hashing Schemes for Hardware.


Perfect Hashing Approach

[Figure: a perfect hash function maps Elements 1 through 5 to distinct cells, each cell storing that element's fingerprint]


Near-Perfect Hash Functions

  • Perfect hash functions are challenging.

    • Require all the data up front – no insertions or deletions.

    • Hard to find efficiently in hardware.

  • In [BM96], we note that d-left hashing can give near-perfect hash functions.

    • Useful even with insertions, deletions.

    • Some loss in space efficiency.


Near-Perfect Hash Functions via d-left Hashing

  • Maximum load equals 1

    • Requires significant space to avoid all collisions, or some small fraction of spillovers.

  • Maximum load greater than 1

    • Multiple buckets must be checked, and multiple cells in a bucket must be checked.

    • Not perfect in space usage.

      • In practice, 75% space usage is very easy.

      • In theory, can do even better.


Hash Table Design : Example

  • Desired goals:

    • At most 1 item per bucket.

    • Minimize space.

      • And minimize number of hash functions.

    • Small amount of spillover possible.

      • We model as a constant fraction, e.g. 0.2%.

      • Can be placed in a content-addressable memory (CAM) if small enough.


Basic d-left Scheme

  • For hash table holding up to n elements, with max load 1 per bucket, use 4 choices and 2n cells.

    • Spillover of approximately 0.002n elements into CAM.



Improvements from Skew

  • For hash table holding up to n elements, with max load 1 per bucket, use 4 choices and 1.8n cells.

    • Subtable sizes 0.79n, 0.51n, 0.32n, 0.18n.

    • Spillover still approximately 0.002n elements into CAM.

    • Subtable sizes optimized using differential equations, black-box optimization.


Summaries to Avoid Lookups

  • In hardware, d choices of location can be done by parallelization.

    • Look at d memory banks in parallel.

  • But there’s still a cost: pin count.

  • Can we keep track of which hash function to use for each item, using a small summary?

    • Yes: use a Bloom-filter like structure to track.

      • Skew impacts summary performance; more skew better.

    • Uses small amount of on-chip memory.

    • Avoids multiple look-ups.

    • Special case of a Bloomier filter.


Hash Tables with Moves

  • Cuckoo Hashing (Pagh, Rodler)

    • Hashed items need not stay in their initial place.

    • With multiple choices, can move item to another choice, without affecting lookups.

      • As long as hash values can be recomputed.

    • When inserting, if all spots are filled, new item kicks out an old item, which looks for another spot, and might kick out another item, and so on.
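For concreteness, a minimal sketch of cuckoo hashing with two choices and one item per bucket (the eviction limit and the fallback on failure are illustrative assumptions):

```python
import random

MAX_KICKS = 500

class CuckooTable:
    def __init__(self, size=1024):
        self.size = size
        self.slots = [[None] * size, [None] * size]   # two subtables

    def _pos(self, item, i):
        return hash((i, item)) % self.size

    def insert(self, item):
        for _ in range(MAX_KICKS):
            for i in (0, 1):
                p = self._pos(item, i)
                if self.slots[i][p] is None:
                    self.slots[i][p] = item
                    return True
            # Both choices occupied: evict one occupant and re-insert it.
            i = random.choice((0, 1))
            p = self._pos(item, i)
            item, self.slots[i][p] = self.slots[i][p], item
        return False   # give up; a real design would rehash or spill to a stash/CAM

    def lookup(self, item):
        return any(self.slots[i][self._pos(item, i)] == item for i in (0, 1))
```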


Benefits and Problems of Moves

  • Benefit: much better space utilization.

    • Multiple choices, multiple items per bucket, can achieve 90+% with no spillover.

  • Drawback: complexity.

    • Moves required can grow like log n.

      • Constant on average.

    • Bounded maximum time per operation important in many settings.

    • Moves expensive.

      • Table usually in slow memory.


Question : Power of One Move

  • How much leverage do we get by just allowing one move?

    • One move likely to be possible in practice.

    • Simple for hardware.

    • Analysis possible via differential equations.

      • Cuckoo hard to analyze.

    • Downside : some spillover into CAM.


Comparison, Insertions Only

  • 4 schemes

    • No moves

    • Conservative : Place item if possible. If not, try to move earliest item that has not already replaced another item to make room. Otherwise spill over.

    • Second chance : Read all possible locations, and for each location with an item, check if it can be placed in the next subtable. Place new item as early as possible, moving up to 1 item left 1 level.

    • Second chance, with 2 per bucket.

  • Target of 0.2% spillover.

  • Balanced (all subtables the same) and skewed compared.

  • All done by differential equation analysis (and simulations match).



Conclusions, Moves

  • Even one move saves significant space.

    • More aggressive schemes, considering all possible single moves, save even more. (Harder to analyze, more hardware resources.)

  • Importance of allowing small amounts of spillover in practical settings.


Future Work

  • This analysis was for insertions only.

  • Lots more space required in case of deletions.

    • Different behavior in steady state.

  • More moves may be required.

  • Examining possible implementations.

    • With Adam Kirsch, to appear in Allerton.


From Low to High

  • Low

    • Hash Tables for Hardware

    • New Bloom Filter/Counting Bloom Filter Constructions (Hardware Friendly)

  • Medium

    • Approximate Concurrent State Machines

  • High

    • A Distributed Hashing Infrastructure

    • Why do Weak Hash Functions Work So Well?


Low- Medium: New Bloom Filters / Counting Bloom Filters

  • Joint work with Flavio Bonomi, Rina Panigrahy, Sushil Singh, George Varghese.


A New Approach to Bloom Filters

  • Folklore Bloom filter construction.

    • Recall: Given a set S = {x1,x2,x3,…xn} on a universe U, want to answer membership queries.

    • Method: Find an n-cell perfect hash function for S.

      • Maps set of n elements to n cells in a 1-1 manner.

    • Then keep a log2(1/ε)-bit fingerprint of the item in each cell. Lookups have false positive probability < ε.

    • Advantage: each bit/item reduces false positives by a factor of 1/2, vs ln 2 for a standard Bloom filter.

  • Negatives:

    • Perfect hash functions non-trivial to find.

    • Cannot handle on-line insertions.


Near-Perfect Hash Functions

  • In [BM96], we note that d-left hashing can give near-perfect hash functions.

    • Useful even with deletions.

  • Main differences

    • Multiple buckets must be checked, and multiple cells in a bucket must be checked.

    • Not perfect in space usage.

      • In practice, 75% space usage is very easy.

      • In theory, can do even better.


First Design : Just d-left Hashing

  • For a Bloom filter with n elements, use a 3-left hash table with average load 4, 60 bits per bucket divided into 6 fixed-size fingerprints of 10 bits.

    • Overflow rare, can be ignored.

  • False positive rate of roughly 3 × 4 × 2^(-10) ≈ 0.0117 (three buckets checked, average load 4, 10-bit fingerprints).

    • Vs. 0.000744 for a standard Bloom filter.

  • Problem:Too much empty, wasted space.

    • Other parametrizations similarly impractical.

    • Need to avoid wasting space.
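A rough sketch of this first design, to make the structure concrete (how the bucket indices and 10-bit fingerprint are carved out of one digest is an assumption for illustration, not the paper's exact scheme):

```python
import hashlib

D, FP_BITS, SLOTS = 3, 10, 6   # 3-left, 10-bit fingerprints, 6 fixed slots per bucket

class DLeftFingerprintFilter:
    def __init__(self, buckets_per_table):
        self.b = buckets_per_table
        self.tables = [[[] for _ in range(self.b)] for _ in range(D)]

    def _hashes(self, item):
        d = hashlib.sha256(str(item).encode()).digest()
        fp = int.from_bytes(d[:2], "big") % (1 << FP_BITS)
        buckets = [int.from_bytes(d[2 + 2*i:4 + 2*i], "big") % self.b for i in range(D)]
        return fp, buckets

    def insert(self, item):
        fp, buckets = self._hashes(item)
        _, i = min((len(self.tables[i][buckets[i]]), i) for i in range(D))
        if len(self.tables[i][buckets[i]]) < SLOTS:
            self.tables[i][buckets[i]].append(fp)
        # else: bucket overflow, which is rare; a real design would spill to a CAM

    def query(self, item):
        fp, buckets = self._hashes(item)
        # False positive if any chosen bucket holds a matching fingerprint.
        return any(fp in self.tables[i][buckets[i]] for i in range(D))
```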


Just Hashing : Picture

[Figure: a bucket with six fixed-size 10-bit fingerprint slots, four holding fingerprints and two empty]

Key: Dynamic Bit Reassignment

  • Use 64-bit buckets: 4 bit counter, 60 bits divided equally among actual fingerprints.

    • Fingerprint size depends on bucket load.

  • False positive rate of 0.0008937

    • Vs. 0.0004587 for a standard Bloom filter.

  • DBR: Within a factor of 2.

    • And would be better for larger buckets.

    • But 64 bits is a nice bucket size for hardware.

  • Can we remove the cost of the counter?


DBR : Picture

[Figure: a 64-bit bucket with a 4-bit counter (“Count: 4”) and its 60 fingerprint bits divided equally among the stored fingerprints]

Semi-Sorting

  • Fingerprints in bucket can be in any order.

    • Semi-sorting: keep sorted by first bit.

  • Use counter to track #fingerprints and #fingerprints starting with 0.

  • First bit can then be erased, implicitly given by counter info.

  • Can extend to first two bits (or more) but added complexity.
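A tiny illustration of the semi-sorting encoding (hypothetical helper functions, not the paper's code): store the fingerprints sorted by their leading bit, remember how many start with 0, and drop the leading bit from what is actually stored.

```python
def semi_sort_encode(fingerprints, fp_bits):
    top = 1 << (fp_bits - 1)
    zeros = sorted(f for f in fingerprints if not f & top)
    ones = sorted(f for f in fingerprints if f & top)
    stored = [f & (top - 1) for f in zeros + ones]   # leading bit erased
    return len(zeros), stored                        # counter carries the split

def semi_sort_decode(num_zeros, stored, fp_bits):
    top = 1 << (fp_bits - 1)
    return stored[:num_zeros] + [f | top for f in stored[num_zeros:]]

# Example with 4-bit fingerprints 0110, 1011, 0001:
print(semi_sort_encode([0b0110, 0b1011, 0b0001], 4))   # (2, [1, 6, 3])
print(semi_sort_decode(2, [1, 6, 3], 4))               # [1, 6, 11]
```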


DBR + Semi-sorting : Picture

[Figure: the same bucket, with the counter now recording both the number of fingerprints and how many begin with 0 (“Count: 4, 2”)]


DBR + Semi-Sorting Results

  • Using 64-bit buckets, 4 bit counter.

    • Semi-sorting on loads 4 and 5.

    • Counter only handles up to load 6.

    • False positive rate of 0.0004477

      • Vs. 0.0004587 for a standard Bloom filter.

    • This is the tradeoff point.

  • Using 128-bit buckets, 8 bit counter, 3-left hash table with average load 6.4.

    • Semi-sorting all loads: fpr of 0.00004529

    • 2 bit semi-sorting for loads 6/7: fpr of 0.00002425

      • Vs. 0.00006713 for a standard Bloom filter.


Additional Issues

  • Further possible improvements

    • Group buckets to form super-buckets that share bits.

    • Conjecture: Most further improvements are not worth it in terms of implementation cost.

  • Moving items for better balance?

  • Underloaded case.

    • New structure maintains good performance.


Improvements to Counting Bloom Filter

  • Similar ideas can be used to develop an improved Counting Bloom Filter structure.

    • Same idea: use fingerprints and a d-left hash table.

  • Counting Bloom Filters waste lots of space.

    • Lots of bits to record counts of 0.

  • Our structure beats standard CBFs easily, by factors of 2 or more in space.

    • Even without dynamic bit reassignment.


From Low to High

  • Low

    • Hash Tables for Hardware

    • New Bloom Filter/Counting Bloom Filter Constructions (Hardware Friendly)

  • Medium

    • Approximate Concurrent State Machines

  • High

    • A Distributed Hashing Infrastructure

    • Why do Weak Hash Functions Work So Well?


Approximate Concurrent State Machines

  • Joint work with Flavio Bonomi, Rina Panigrahy, Sushil Singh, George Varghese.

  • Extending the Bloomier filter idea to handle dynamic sets and dynamic function values in a practical setting.


Approximate ConcurrentState Machines

  • Model for ACSMs

    • We have underlying state machine, states 1…X.

    • Lots of concurrent flows.

    • Want to track state per flow.

    • Dynamic: Need to insert new flows and delete terminating flows.

    • Can allow some errors.

    • Space, hardware-level simplicity are key.
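A minimal sketch of the ACSM idea under these goals: keep per-flow state keyed by a short fingerprint rather than the full ~100-bit flow ID, accepting a small chance of collisions. The bucket/fingerprint split below is an illustrative assumption; the actual design uses a d-left fingerprint table.

```python
import hashlib

class ApproxStateTable:
    def __init__(self, buckets=1 << 16, fp_bits=12):
        self.buckets, self.fp_bits = buckets, fp_bits
        self.table = [dict() for _ in range(buckets)]   # fingerprint -> state

    def _key(self, flow_id):
        d = hashlib.sha256(flow_id.encode()).digest()
        b = int.from_bytes(d[:4], "big") % self.buckets
        fp = int.from_bytes(d[4:6], "big") % (1 << self.fp_bits)
        return b, fp

    def set_state(self, flow_id, state):
        b, fp = self._key(flow_id)
        self.table[b][fp] = state

    def get_state(self, flow_id):
        # A colliding fingerprint can (rarely) return another flow's state.
        b, fp = self._key(flow_id)
        return self.table[b].get(fp)   # None plays the role of NULL

    def delete(self, flow_id):
        b, fp = self._key(flow_id)
        self.table[b].pop(fp, None)
```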


Motivation: Router State Problem

  • Suppose each flow has a state to be tracked. Applications:

    • Intrusion detection

    • Quality of service

    • Distinguishing P2P traffic

    • Video congestion control

    • Potentially, lots of others!

  • Want to track state for each flow.

    • But compactly; routers have small space.

    • Flow IDs can be ~100 bits. Can’t keep a big lookup table for hundreds of thousands or millions of flows!


Problems to Be Dealt With

  • Keeping state values with small space, small probability of errors.

  • Handling deletions.

  • Graceful reaction to adversarial/erroneous behavior.

    • Invalid transitions.

    • Non-terminating flows.

      • Could fill structure if not eventually removed.

    • Useful to consider data structures in well-behaved systems and ill-behaved systems.


Summary

  • We have an ACSM design.

    • Similar to new Bloom filter design.

    • ACSM design came first!

  • ACSM performance seems reasonable:

    • Sub 1% error rates with reasonable size.


From Low to High

  • Low

    • Hash Tables for Hardware

    • New Bloom Filter/Counting Bloom Filter Constructions (Hardware Friendly)

  • Medium

    • Approximate Concurrent State Machines

  • High

    • A Distributed Hashing Infrastructure

    • Why do Weak Hash Functions Work So Well?


A Distributed Router Infrastructure

  • Recently funded FIND proposal.

  • Looking for ideas/collaborators.


The High-Level Pitch

  • Lots of hash-based schemes being designed for approximate measurement/monitoring tasks.

    • But not built into the system to begin with.

  • Want a flexible router architecture that allows:

    • New methods to be easily added.

    • Distributed cooperation using such schemes.


What We Need

[Diagram: memory (on-chip, off-chip, CAMs), a hashing computation unit plus a unit for other computation, a programming language and computation control system, and a communication architecture providing communication and control]

Lots of Design Questions

  • How much space for various memory levels? How can we dynamically divide memory among multiple competing applications?

  • What hash functions should be included? How open should system be to new hash functions?

  • What programming functionality should be included? What programming language to use?

  • What communication is necessary to achieve distributed monitoring tasks given the architecture?

  • Should security be a consideration? What security approaches are possible?

  • And so on…


Which Hash Functions?

  • Theorists:

    • Want hash functions with analyzable properties.

    • Dislike assuming fully random hash functions.

      • Which we have done!

    • But often what you can prove doesn’t match actual performance.

  • Practitioners:

    • Want easily implementable, fast hash functions.

      • Space and speed important!

    • Want simple analysis.

    • Generally accept simulated behavior.

      • But possible danger!!!


Why Do Weak Hash Functions Work So Well?

  • In reality, assuming perfectly random hash functions seems to be the right thing to do.

    • Easier to analyze.

    • Real systems almost always work that way, even with weak hash functions!

  • Can Theory explain strong performance of weak hash functions?


Recent Work

  • A new explanation:

    • Joint work with Salil Vadhan.

  • Choosing a hash function from a pairwise independent family is enough – if data has sufficient entropy.

    • Randomness of hash function and data “combine”.

    • Behavior matches truly random hash function with high probability.

  • Techniques based on theory of randomness extraction.

    • Leftover Hash Lemma, extensions.


Sample Results and Implications

  • Consider input data of n items as a stream, {X1, X2, …, Xn} of random variables.

    • Let the collision probability of the data be cp = Σ_x Pr(X = x)².

    • Suppose cp is small enough, i.e., the (Rényi) entropy of the data is sufficiently high, close to that of a uniform distribution.

    • Then the hashed data is close to uniform.

  • Implications: for d-left hashing, Bloom filters, linear probing, etc., choosing a hash function from a pairwise independent family should behave like the fully random analysis, if the data has enough entropy.
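For reference, a small sketch of a standard pairwise independent family of the kind the result applies to (the prime and range are illustrative; reducing mod m after mod p is only approximately pairwise independent):

```python
import random

P = (1 << 61) - 1   # a Mersenne prime larger than the key universe

def random_pairwise_hash(m):
    # h(x) = ((a*x + b) mod p) mod m, with a, b chosen at random.
    a = random.randrange(1, P)
    b = random.randrange(P)
    return lambda x: ((a * x + b) % P) % m

h = random_pairwise_hash(1 << 20)
print(h(12345), h(67890))
```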


Conclusions and Future Work

  • Low: Mapping current hashing techniques to hardware is fruitful for practice.

  • Medium: Big boom in hashing-based algorithms/data structures. Trend is likely to continue.

    • Approximate concurrent state machines: Natural progression from set membership to functions (Bloomier filter) to state machines. What is next?

    • Power of d-left hashing variants for near-perfect matchings.

  • High: Wide open. Need to systematize our knowledge for next generation systems.

    • Measurement and monitoring infrastructure built into the system.

