
SILT: A Memory-Efficient, High-Performance Key-Value Store

Presentation Transcript


  1. SILT: A Memory-Efficient, High-Performance Key-Value Store
  Hyeontaek Lim, Bin Fan, David G. Andersen, Michael Kaminsky†
  Carnegie Mellon University, †Intel Labs
  2011-10-24

  2. Key-Value Store
  [Diagram: clients send PUT(key, value), value = GET(key), and DELETE(key) requests to a key-value store cluster]
  • E-commerce (Amazon)
  • Web server acceleration (Memcached)
  • Data deduplication indexes
  • Photo storage (Facebook)

  3. Many projects have examined flash memory-based key-value stores
  • Faster than disk, cheaper than DRAM
  • This talk will introduce SILT, which uses drastically less memory than previous systems while retaining high performance.

  4. Flash Must be Used Carefully
  • Fast, but not THAT fast
  • Space is precious
  • Another long-standing problem: random writes are slow and bad for flash life (wearout)

  5. DRAM Must be Used Efficiently
  • DRAM is used for the index that locates items on flash
  • Example: 1 TB of data to store on flash, with 4 bytes of DRAM per key-value pair (previous state-of-the-art); see the calculation below
  [Chart: index size (GB) vs. key-value pair size (bytes)]
  • 32 B key-value pair (data deduplication) => 125 GB of DRAM!
  • 168 B (tweet) => 24 GB
  • 1 KB (small image) => 4 GB
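  To make the slide's arithmetic concrete, here is the calculation behind those index sizes, using only the slide's figures (1 TB of data, 4 bytes of DRAM per key-value pair):

```python
# Index DRAM needed to index 1 TB of flash at 4 bytes of DRAM per key-value pair.
TOTAL_DATA = 10**12        # 1 TB of key-value data on flash
DRAM_PER_ENTRY = 4         # bytes of DRAM per pair (previous state of the art)

for label, kv_size in [("data deduplication", 32), ("tweet", 168), ("small image", 1024)]:
    entries = TOTAL_DATA // kv_size
    index_gb = entries * DRAM_PER_ENTRY / 10**9
    print(f"{label}: {kv_size} B/pair -> {index_gb:.0f} GB of DRAM for the index")
# data deduplication: 32 B/pair -> 125 GB of DRAM for the index
# tweet: 168 B/pair -> 24 GB of DRAM for the index
# small image: 1024 B/pair -> 4 GB of DRAM for the index
```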

  6. Three Metrics to Minimize
  • Memory overhead = index size per entry (ideally 0: no memory overhead)
  • Read amplification = flash reads per query (limits query throughput; ideally 1: no wasted flash reads)
  • Write amplification = flash writes per entry (limits insert throughput and reduces flash life expectancy; must be small enough for flash to last a few years)
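  For reference, a sketch of how these three metrics could be computed from raw counters. The function and the sample numbers are illustrative only; the sample values are chosen to reproduce SILT's reported 0.7 B/entry, 1.01, and 5.4, and write amplification is measured in bytes here:

```python
# Hypothetical counters gathered from a store; the metric definitions follow the slide.
def metrics(index_bytes, entries, flash_reads, queries, flash_bytes_written, bytes_inserted):
    memory_overhead = index_bytes / entries                      # DRAM index bytes per entry (ideal: 0)
    read_amplification = flash_reads / queries                   # flash reads per query (ideal: 1)
    write_amplification = flash_bytes_written / bytes_inserted   # flash bytes written per byte inserted
    return memory_overhead, read_amplification, write_amplification

# Example with made-up counter values:
print(metrics(index_bytes=7 * 10**8, entries=10**9,
              flash_reads=1_010_000, queries=1_000_000,
              flash_bytes_written=5_400_000, bytes_inserted=1_000_000))
# (0.7, 1.01, 5.4)
```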

  7. Landscape: Where We Were
  [Chart: read amplification vs. memory overhead (bytes/entry) for SkimpyStash, HashCache, BufferHash, FlashStore, and FAWN-DS; the low-memory, low-read-amplification corner is still marked "?"]

  8. Seesaw Game?
  [Diagram: FAWN-DS, FlashStore, HashCache, SkimpyStash, and BufferHash each trade off memory efficiency against high performance]
  How can we improve?

  9. Solution Preview: (1) Three Stores with (2) New Index Data Structures
  • Queries look up stores in sequence (from new to old)
  • Inserts only go to the log
  • Data are moved in the background (this read/write path is sketched in code below)
  [Diagram: in memory, the SILT Log Index (write-friendly), SILT Filter, and SILT Sorted Index (memory-efficient); the data itself lives on flash]
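  A minimal sketch of the multi-store read/write path described on this slide, assuming each store can be stubbed with an in-memory dict (in SILT the data lives on flash and only the compact indexes stay in DRAM); the class and method names are made up for illustration:

```python
# Minimal sketch of SILT's multi-store read/write path. Each store is stubbed
# with a dict; in SILT the values live on flash and only small indexes stay in DRAM.
class MultiStore:
    def __init__(self):
        self.log_store = {}      # newest data, write-friendly
        self.hash_stores = []    # immutable intermediate stores
        self.sorted_store = {}   # oldest data, most memory-efficient

    def put(self, key, value):
        self.log_store[key] = value          # inserts only go to the log

    def get(self, key):
        # Look up stores in sequence, from new to old.
        for store in [self.log_store, *self.hash_stores, self.sorted_store]:
            if key in store:
                return store[key]
        return None

    def background_convert(self):
        # Background work: freeze the log into a hash store; a later merge
        # would fold hash stores into the sorted store.
        if self.log_store:
            self.hash_stores.insert(0, self.log_store)
            self.log_store = {}

s = MultiStore()
s.put(b"k1", b"v1")
s.background_convert()
print(s.get(b"k1"))   # b'v1', found in an older store after conversion
```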

  10. LogStore: No Control over Data Layout
  • Inserted entries are appended to an on-flash log (older → newer); a sketch follows below
  • Index in memory: a naive hashtable needs 48+ B/entry; the SILT Log Index needs 6.5+ B/entry
  • Still need pointers: size ≥ log N bits/entry
  • Memory overhead: 6.5+ bytes/entry; write amplification: 1
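  A sketch of the LogStore idea under simplifying assumptions: entries are appended to a log file and an in-memory index maps each key to its byte offset. The entry format and the plain Python dict index are assumptions for illustration; SILT's real in-memory index is a partial-key cuckoo hash table costing about 6.5 B/entry:

```python
import os

# Sketch of a LogStore: entries are appended to a log file and an in-memory
# index maps each key to its offset. A dict is used here purely for illustration.
class LogStore:
    def __init__(self, path="logstore.dat"):
        self.log = open(path, "ab+")
        self.index = {}                      # key -> offset of the entry in the log

    def put(self, key: bytes, value: bytes):
        self.log.seek(0, os.SEEK_END)
        offset = self.log.tell()
        # Assumed entry format: 4-byte key length, 4-byte value length, key, value.
        self.log.write(len(key).to_bytes(4, "little"))
        self.log.write(len(value).to_bytes(4, "little"))
        self.log.write(key)
        self.log.write(value)
        self.log.flush()
        self.index[key] = offset             # each entry is written to flash exactly once

    def get(self, key: bytes):
        offset = self.index.get(key)
        if offset is None:
            return None
        self.log.seek(offset)
        klen = int.from_bytes(self.log.read(4), "little")
        vlen = int.from_bytes(self.log.read(4), "little")
        self.log.read(klen)                  # skip the stored key
        return self.log.read(vlen)

store = LogStore()
store.put(b"hello", b"world")
print(store.get(b"hello"))                   # b'world'
```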

  11. SortedStore: Space-Optimized Layout
  • Entries live in an on-flash sorted array, indexed by the SILT Sorted Index (0.4 B/entry) in memory
  • Need to perform bulk inserts to amortize the cost of rewriting the sorted array
  • Memory overhead: 0.4 bytes/entry; write amplification: high

  12. Combining SortedStore and LogStore
  [Diagram: the LogStore (SILT Log Index over an on-flash log) is merged into the SortedStore (SILT Sorted Index over an on-flash sorted array); a sketch of the merge follows below]
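  A sketch of the bulk merge, assuming entries are ordered by raw key (SILT orders them by key hash): newer entries win on conflicts, and the entire sorted array is rewritten, which is where the SortedStore's high write amplification comes from:

```python
# Sketch of merging a frozen newer store into the on-flash sorted array.
# Sorting by the raw key here is a simplification; SILT sorts by key hash.
def merge(sorted_entries, frozen_entries):
    """sorted_entries: list of (key, value) sorted by key (the old SortedStore).
    frozen_entries: dict of newer entries (a frozen LogStore/HashStore)."""
    new_sorted = []
    newer = sorted(frozen_entries.items())
    i = j = 0
    while i < len(sorted_entries) and j < len(newer):
        if sorted_entries[i][0] < newer[j][0]:
            new_sorted.append(sorted_entries[i]); i += 1
        elif sorted_entries[i][0] > newer[j][0]:
            new_sorted.append(newer[j]); j += 1
        else:                          # same key: the newer store wins
            new_sorted.append(newer[j]); i += 1; j += 1
    new_sorted.extend(sorted_entries[i:])
    new_sorted.extend(newer[j:])
    return new_sorted                  # this whole array is rewritten to flash

old = [(b"a", b"1"), (b"c", b"3")]
print(merge(old, {b"b": b"2", b"c": b"30"}))
# [(b'a', b'1'), (b'b', b'2'), (b'c', b'30')]
```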

  13. Achieving both Low Memory Overhead and Low Write Amplification
  • SortedStore: low memory overhead, but high write amplification
  • LogStore: high memory overhead, but low write amplification
  • Combining SortedStore and LogStore, we can now achieve both simultaneously:
  • Write amplification = 5.4, enough for a 3-year flash life (see the back-of-the-envelope calculation below)
  • Memory overhead = 1.3 B/entry
  • With “HashStores”, memory overhead drops to 0.7 B/entry! (see paper)
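  A back-of-the-envelope relation between write amplification and flash lifetime. The device capacity, erase-cycle endurance, and insert rate below are assumptions chosen purely for illustration (so the result lands near the slide's 3-year figure), not numbers from the talk:

```python
# Back-of-the-envelope flash lifetime given a write amplification. All device
# parameters below are illustrative assumptions, not figures from the talk.
def flash_lifetime_years(capacity_bytes, erase_cycles, insert_rate_bytes_per_sec, write_amp):
    total_writable = capacity_bytes * erase_cycles            # total bytes the flash can absorb
    flash_write_rate = insert_rate_bytes_per_sec * write_amp  # bytes actually hitting flash per second
    return total_writable / flash_write_rate / (365 * 24 * 3600)

years = flash_lifetime_years(capacity_bytes=256 * 10**9, erase_cycles=10_000,
                             insert_rate_bytes_per_sec=5 * 10**6, write_amp=5.4)
print(f"{years:.1f} years")   # ~3.0 years with these assumed parameters
```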

  14. SILT’s Design (Recap)
  [Diagram: the LogStore (SILT Log Index over an on-flash log) is converted into a HashStore (SILT Filter over on-flash hashtables), which is then merged into the SortedStore (SILT Sorted Index over an on-flash sorted array)]
  • Memory overhead: 0.7 bytes/entry
  • Read amplification: 1.01
  • Write amplification: 5.4

  15. Review of New Index Data Structures in SILT
  • SILT Sorted Index: entropy-coded tries; used for the SortedStore; highly compressed (0.4 B/entry)
  • SILT Filter & Log Index: partial-key cuckoo hashing (sketched below); used for the HashStore & LogStore; compact (2.2 & 6.5 B/entry) and very fast (> 1.8 M lookups/sec)
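  A minimal sketch of the partial-key cuckoo hashing idea: each slot stores only a short tag (partial key) plus an offset, and an occupant's alternate bucket can be computed from its tag alone, so entries can be displaced without re-reading full keys from flash. The tag width, single-slot buckets, and hash choices below are simplifications, not SILT's exact parameters:

```python
import hashlib

# Partial-key cuckoo hashing sketch: 2 candidate buckets, 1 slot per bucket,
# 15-bit tags. The alternate bucket is derived from the tag alone.
NUM_BUCKETS = 1 << 12          # must be a power of two for the XOR trick below
TAG_BITS = 15
MAX_KICKS = 128

def _h(data: bytes) -> int:
    return int.from_bytes(hashlib.sha1(data).digest()[:8], "little")

def bucket_and_tag(key: bytes):
    h = _h(key)
    tag = (h >> 32) & ((1 << TAG_BITS) - 1) or 1   # tag 0 is reserved for "empty"
    return h % NUM_BUCKETS, tag

def alt_bucket(bucket: int, tag: int) -> int:
    # An involution when NUM_BUCKETS is a power of two: applying it twice returns `bucket`.
    return (bucket ^ _h(tag.to_bytes(2, "little"))) % NUM_BUCKETS

class PartialKeyCuckoo:
    def __init__(self):
        self.tags = [0] * NUM_BUCKETS        # 0 means the slot is empty
        self.offsets = [None] * NUM_BUCKETS  # e.g., offsets into the on-flash log

    def insert(self, key: bytes, offset):
        bucket, tag = bucket_and_tag(key)
        for _ in range(MAX_KICKS):
            if self.tags[bucket] == 0:
                self.tags[bucket], self.offsets[bucket] = tag, offset
                return True
            # Kick the current occupant to its alternate bucket (known from its tag alone).
            tag, self.tags[bucket] = self.tags[bucket], tag
            offset, self.offsets[bucket] = self.offsets[bucket], offset
            bucket = alt_bucket(bucket, tag)
        return False                          # table too full; SILT would convert the store

    def lookup(self, key: bytes):
        bucket, tag = bucket_and_tag(key)
        for b in (bucket, alt_bucket(bucket, tag)):
            if self.tags[b] == tag:
                return self.offsets[b]        # a candidate; the full key on flash confirms it
        return None

t = PartialKeyCuckoo()
t.insert(b"hello", 42)
print(t.lookup(b"hello"))                     # 42 (tag match; false positives are possible)
```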

  16. Compression in Entropy-Coded Tries
  [Diagram: a binary trie over hashed keys, with edges labeled 0/1 and leaves colored red or blue]
  • Hashed keys (bits are random)
  • # of red (or blue) leaves ~ Binomial(# of all leaves, 0.5)
  • Entropy coding (Huffman coding and more)
  • (More details of the new indexing schemes are in the paper; a sketch of the trie idea follows below)
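  A sketch of the trie idea behind the SILT Sorted Index: for hashed keys in sorted order, recording how many keys branch left at each node is enough to recover any key's position in the sorted array, and because hashed key bits are random those counts follow Binomial(n, 0.5), which is what the entropy coder compresses. The entropy-coding step itself is omitted here, and the helper names are made up:

```python
import hashlib

# Trie over hashed keys, represented only by per-node "how many keys go left" counts.
def hashed(key: bytes) -> str:
    return format(int.from_bytes(hashlib.sha1(key).digest()[:4], "big"), "032b")

def build_trie(hkeys, depth=0):
    """Nested representation: an int (leaf with 0 or 1 keys) or (#left, left, right)."""
    if len(hkeys) <= 1:
        return len(hkeys)
    left = [k for k in hkeys if k[depth] == "0"]
    right = [k for k in hkeys if k[depth] == "1"]
    return (len(left), build_trie(left, depth + 1), build_trie(right, depth + 1))

def rank(trie, hkey, depth=0):
    """Position of hkey in the sorted array, using only the per-node left counts."""
    if not isinstance(trie, tuple):
        return 0
    n_left, left, right = trie
    if hkey[depth] == "0":
        return rank(left, hkey, depth + 1)
    return n_left + rank(right, hkey, depth + 1)

keys = [b"apple", b"banana", b"cherry", b"date"]
hkeys = sorted(hashed(k) for k in keys)
trie = build_trie(hkeys)
for hk in hkeys:
    print(rank(trie, hk))   # prints 0, 1, 2, 3: each key's index in the sorted array
```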

  17. Landscape: Where We Are
  [Chart: read amplification vs. memory overhead (bytes/entry); SILT now sits in the low-memory, low-read-amplification corner left open by SkimpyStash, HashCache, BufferHash, FlashStore, and FAWN-DS]

  18. Evaluation
  • Various combinations of indexing schemes
  • Background operations (merge/conversion)
  • Query latency

  19. LogStore Alone: Too Much Memory Workload: 90% GET (50-100 M keys) + 10% PUT (50 M keys)

  20. LogStore+SortedStore: Still Much Memory Workload: 90% GET (50-100 M keys) + 10% PUT (50 M keys)

  21. Full SILT: Very Memory Efficient Workload: 90% GET (50-100 M keys) + 10% PUT (50 M keys)

  22. Small Impact from Background Operations
  Workload: 90% GET (100~ M keys) + 10% PUT
  [Chart: query throughput over time stays around 40 K, dipping to 33 K during bursty TRIMs issued by the ext4 FS (“Oops!”)]

  23. Low Query Latency
  Workload: 100% GET (100 M keys)
  [Chart: query latency vs. # of I/O threads; best throughput @ 16 threads; median = 330 μs, 99.9th percentile = 1510 μs]

  24. Conclusion
  • SILT provides a key-value store that is both memory-efficient and high-performance
  • Multi-store approach
  • Entropy-coded tries
  • Partial-key cuckoo hashing
  • Full source code is available: https://github.com/silt/silt

  25. Thanks!
