
Phase Change Memory Aware Data Management and Application


  1. Phase Change Memory Aware Data Management and Application Jiangtao Wang

  2. Outline • Introduction • Integrating PCM into the Memory Hierarchy • PCM for main memory • PCM for auxiliary memory • Conclusion

  3. Phase change memory • An emerging memory technology • Like memory (DRAM): • Byte-addressable with read/write speeds close to DRAM • Lower idle power • Like storage (SSD & HDD): • Non-volatile • High capacity (high density)

  4. Phase change memory • Cons: • Asymmetric read/write latency • Limited write endurance

  5. Phase change memory • [Figure: read and write latency of DRAM, PCM, flash, and HDD on a log scale from 10ns to 10ms; PCM reads are close to DRAM speed, while PCM writes are noticeably slower, falling between DRAM and flash]

  6. Outline • Introduction • Integrating PCM into the Memory Hierarchy • PCM for main memory • PCM for auxiliary memory • Conclusion

  7. Integrating PCM into the Memory Hierarchy • PCM for main memory • Replacing DRAM with PCM to achieve larger main memory capacity • PCM for auxiliary memory • PCM as a write buffer for the HDD/SSD disk: buffering dirty pages to minimize disk write I/Os • PCM as secondary storage: storing log records

  8. PCM for main memory [ISCA’09] [ICCD’11] [DAC’09] [CIDR’11] • [Figure: three candidate organizations, each with CPU, L1/L2 cache, memory controller, phase change memory, and HDD/SSD disk: (a) PCM-only main memory; (b) DRAM as a cache in front of PCM; (c) DRAM as a write buffer in front of PCM]

  9. PCM for main memory: Challenges with PCM • Major disadvantage: writes • Compared to reads, PCM writes incur higher energy consumption, higher latency, and limited endurance • Reducing PCM writes is an important goal of data management on PCM!

  10. PCM for main memory: Optimization of PCM writes [ISCAS’07] [ISCA’09] [MICRO’09] • Optimization: data comparison write • Goal: write only the modified bits rather than the entire cache line • Approach: read-compare-write • [Figure: the old cache line is read from PCM, compared bit by bit with the new data from the CPU cache, and only the changed bits are written back]
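The following is a minimal sketch of the read-compare-write idea (an illustration, not a hardware implementation; all names are made up): the old line is read from PCM, XORed against the new data, and only bytes whose bits actually differ are programmed, with cell wear proportional to the number of flipped bits.

```python
# Sketch of data comparison write: read the old line, compare with the new
# data, and update only the positions where bits really changed.

def data_comparison_write(pcm, addr, new_line):
    old_line = pcm[addr]                  # read: fetch the current line
    changed_bits = 0
    for i in range(len(new_line)):
        diff = old_line[i] ^ new_line[i]  # compare: locate modified bits
        if diff:                          # write only where something changed
            changed_bits += bin(diff).count("1")
            old_line[i] = new_line[i]
    return changed_bits                   # wear ~ number of flipped bits

pcm = {0x00: bytearray([0b00011100] * 8)}     # one simulated 8-byte line
print(data_comparison_write(pcm, 0x00, bytes([0b00011110] * 8)))   # -> 8
```

In real designs this comparison happens inside the memory chip or controller, so unmodified cells are never programmed at all.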

  11. PCM for main memory: PCM-friendly algorithms • Rethinking Database Algorithms for Phase Change Memory (CIDR 2011) • Motivation: choosing PCM-friendly database algorithms and data structures to reduce the number of writes

  12. PCM for main memory: PCM-friendly DB algorithms • Prior design goals for DRAM • Low computation complexity • Good CPU cache performance • Power efficiency (more recently) • New goal for PCM • Minimizing PCM writes • Low wear, energy, and latency • Finer-grained access granularity: bits, words, cache lines • Two core database techniques • B+-Tree Index • Hash Joins

  13. PCM-friendly DB algorithms: B+-Tree Index • B+-Tree • Records at leaf nodes • High fan-out • Suitable for file systems • For PCM • Insertions/deletions into sorted nodes incur a lot of write operations • For a node with K keys and K pointers, an insert costs about 2(K/2)+1 = K+1 writes on average • [Figure: inserting key 3 into a sorted node incurs 11 writes]

  14. PCM-friendly DB algorithms: B+-Tree Index • PCM-friendly B+-Tree variants • Unsorted: all non-leaf and leaf nodes unsorted • Unsorted leaf: sorted non-leaf nodes, unsorted leaf nodes • Unsorted leaf with bitmap: sorted non-leaf nodes, unsorted leaf nodes with a validity bitmap • [Figure: layout of an unsorted leaf node and an unsorted leaf node with bitmap]

  15. PCM-friendly DB algorithms: B+-Tree Index • Unsorted leaf • An insert or delete incurs 3 writes • Unsorted leaf with bitmap • An insert incurs 3 writes; a delete incurs only 1 write • [Figure: deleting key 2 from an unsorted leaf vs. from an unsorted leaf with bitmap]
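As a rough illustration of these write counts, here is a sketch of an unsorted leaf with a validity bitmap (simplified bookkeeping of my own, not the paper's code): an insert touches a key, a pointer, and a bitmap bit, while a delete only clears a bitmap bit, and no entries are ever shifted.

```python
# Sketch of an unsorted leaf node with a validity bitmap, counting simulated
# PCM writes per operation.

class UnsortedLeafWithBitmap:
    def __init__(self, capacity=8):
        self.keys = [None] * capacity
        self.ptrs = [None] * capacity
        self.bitmap = [0] * capacity     # 1 = slot holds a valid entry
        self.writes = 0                  # simulated PCM word writes

    def insert(self, key, ptr):
        slot = self.bitmap.index(0)      # any free slot (no split handling here)
        self.keys[slot] = key; self.writes += 1   # write key
        self.ptrs[slot] = ptr; self.writes += 1   # write pointer
        self.bitmap[slot] = 1; self.writes += 1   # set bitmap bit -> 3 writes

    def delete(self, key):
        for slot in range(len(self.keys)):
            if self.bitmap[slot] and self.keys[slot] == key:
                self.bitmap[slot] = 0; self.writes += 1   # 1 write only
                return

leaf = UnsortedLeafWithBitmap()
leaf.insert(5, "rid5"); leaf.insert(2, "rid2"); leaf.delete(2)
print(leaf.writes)   # 3 + 3 + 1 = 7 simulated writes
```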

  16. Experimental evaluation: B+-Tree Index • Simulation platform • Cycle-accurate x86-64 simulator: PTLSim • Extended the simulator with PCM support • Modeled data comparison write • CPU cache (8MB); B+-Tree with 50 million entries, 75% full, 1GB • Three workloads: • Inserting 500K random keys • Deleting 500K random keys • Searching 500K random keys

  17. Experimental evaluation: B+-Tree Index • Node size 8 cache lines; 50 million entries, 75% full • [Figures: execution time, energy, and total wear] • Unsorted schemes achieve the best performance • For insert-intensive workloads: unsorted leaf • For insert & delete intensive workloads: unsorted leaf with bitmap

  18. PCM-friendly DB algorithms: Hash Joins • Two representative algorithms • Simple hash join • Cache partitioning • Simple hash join: build a hash table on R, then probe it with S • [Figure: build phase over relation R, probe phase over relation S] • Problem: too many cache misses • The hash table that is built and probed exceeds the CPU cache size • Small record size

  19. PCM-friendly DB algorithms: Hash Joins • Cache partitioning: partition R and S into cache-sized pairs (R1/S1, R2/S2, ...), then join each pair • [Figure: a partition phase for R and a partition phase for S, followed by a join phase over each Ri/Si pair] • Problem: too many writes! (the partition phase copies both relations)

  20. PCM-friendly DB algorithms: Hash Joins • Virtual partitioning (the PCM-friendly algorithm) • Partition phase: instead of copying records, each virtual partition R’i / S’i stores only record IDs that point back into R and S • [Figure: virtual partitions R’1..R’4 and the corresponding S’ partitions referencing records in place]

  21. PCM-friendly DB algorithms: Hash Joins • Virtual partitioning, join phase: for each partition pair, build a hash table from the R records referenced by R’i and probe it with the S records referenced by S’i • [Figure: build from R’1, probe with S’1] • Good CPU cache performance • Reduced writes
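A minimal sketch of the virtual partitioning idea, assuming in-memory relations stored as Python lists and hash partitioning on the join key (names and details are illustrative, not the paper's code): the partition phase records only record IDs, so almost nothing is written, and each per-partition hash table is small enough to stay cache-resident during the join phase.

```python
# Sketch of a virtual-partitioning hash join: partitions hold record IDs only.

def virtual_partition(rel, key_of, num_parts):
    """Return lists of record IDs, one list per virtual partition."""
    parts = [[] for _ in range(num_parts)]
    for rid, rec in enumerate(rel):
        parts[hash(key_of(rec)) % num_parts].append(rid)   # store record ID only
    return parts

def virtual_partition_join(R, S, key_r, key_s, num_parts=4):
    r_parts = virtual_partition(R, key_r, num_parts)
    s_parts = virtual_partition(S, key_s, num_parts)
    out = []
    for rp, sp in zip(r_parts, s_parts):
        table = {}                                   # build phase, per partition
        for rid in rp:
            table.setdefault(key_r(R[rid]), []).append(rid)
        for sid in sp:                               # probe phase, per partition
            for rid in table.get(key_s(S[sid]), []):
                out.append((R[rid], S[sid]))
    return out

R = [(1, "a"), (2, "b"), (3, "c")]
S = [(2, "x"), (2, "y"), (3, "z")]
print(virtual_partition_join(R, S, key_r=lambda r: r[0], key_s=lambda s: s[0]))
```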

  22. Experimental evaluation: Hash Joins • Relations R and S are in main memory (PCM) • R (50MB) joins S (100MB), with 2 matches per R record • Varying record size from 20B to 100B • [Figures: execution time, total wear, and PCM energy]

  23. PCM for auxiliary memory [DAC’09] [CIKM’11] [TCDE’10] [VLDB’11] • [Figure: two organizations, each with CPU, L1/L2 cache, memory controller, and DRAM: PCM as a write buffer between DRAM and the SSD/HDD, and PCM used as secondary storage alongside the HDD/SSD disk]

  24. PCM for auxiliary memory • PCM as a write buffer for the HDD/SSD disk • PCMLogging: Reducing Transaction Logging Overhead with PCM (CIKM 2011) • PCM as secondary storage • Accelerating In-Page Logging with Non-Volatile Memory (TCDE 2010) • IPL-P: In-Page Logging with PCRAM (VLDB 2011 demo)

  25. PCM for auxiliary memory • Motivation: buffering dirty pages and transaction logs to minimize disk I/Os • PCM as a write buffer for the HDD/SSD disk • PCMLogging: Reducing Transaction Logging Overhead with PCM (CIKM 2011)

  26. PCM for auxiliary memory • Two schemes • PCMBasic • PCMLogging • PCMBasic: PCM holds a buffer pool for dirty pages and a log pool for write logs • [Figure: DRAM flushes dirty pages and write log records to the buffer pool and log pool on PCM, which are later written to disk] • Cons of PCMBasic: • Data redundancy • Space management on PCM

  27. PCMLogging • Eliminates explicit logs (REDO and UNDO logs) • Integrates implicit logging into the buffered updates themselves (shadow pages) • [Figure: a DRAM page P is flushed to PCM as shadow versions (P1, P2, ...) together with metadata, and later propagated to disk]

  28. PCMLogging • Overview • DRAM • Mapping Table (MT): maps logical pages to physical PCM pages • PCM • Page format • FreePageBitmap • ActiveTxList

  29. PCMLogging • Overview

  30. PCMLogging • PCMLogging operation: two additional data structures in main memory support undo • Transaction Table (TT): records all in-progress transactions and their corresponding dirty pages in DRAM and PCM • Dirty Page Table (DPT): keeps track of the previous version of each PCM page “overwritten” by an in-progress transaction

  31. PCMLogging • Flushing dirty pages to PCM • Add the transaction’s XID to the ActiveTxList before writing its dirty page to PCM • If page P already exists in PCM, do not overwrite it; create an out-of-place copy P’ instead • [Figure: transaction T3 updates page P5]

  32. PCMLogging • Commit • Flush all of the transaction’s dirty pages • Modify metadata (e.g., remove its XID from the ActiveTxList) • Abort • Discard its dirty pages and restore the previous versions • Modify metadata (e.g., remove its XID from the ActiveTxList)

  33. PCMLogging • Tuple-based buffering • In PCM • Buffer slots are managed in units of tuples rather than pages • To manage free space, a slotted directory is used instead of a bitmap • In DRAM • The Mapping Table still tracks dirty pages, but maintains mappings to the buffered tuples of each dirty page • Tuples are merged with the corresponding disk page • on a read/write request • when committed tuples are moved from PCM to the external disk
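Tying slides 27 to 32 together, here is a heavily simplified sketch of the PCMLogging bookkeeping (page-granularity version; the structure names follow the slides, everything else is my own simplification, not the paper's implementation): dirty pages are written out of place to PCM, the ActiveTxList marks which XIDs still have uncommitted pages, and commit/abort only adjust metadata instead of writing separate REDO/UNDO log records.

```python
# Sketch of PCMLogging bookkeeping: flushed pages double as implicit log records.

class PCMLoggingSketch:
    def __init__(self):
        self.pcm = {}                    # physical slot -> (xid, page_id, data)
        self.mapping_table = {}          # DRAM MT: logical page -> PCM slot
        self.active_tx_list = set()      # PCM: XIDs with uncommitted pages
        self.dirty_page_table = {}       # DRAM DPT: (xid, page) -> previous slot
        self.next_slot = 0

    def flush_dirty_page(self, xid, page_id, data):
        self.active_tx_list.add(xid)               # record XID before the write
        slot, self.next_slot = self.next_slot, self.next_slot + 1
        self.pcm[slot] = (xid, page_id, data)      # out-of-place write, no overwrite
        if page_id in self.mapping_table and (xid, page_id) not in self.dirty_page_table:
            self.dirty_page_table[(xid, page_id)] = self.mapping_table[page_id]
        self.mapping_table[page_id] = slot

    def commit(self, xid):
        self.active_tx_list.discard(xid)           # its PCM pages now count as committed
        self.dirty_page_table = {k: v for k, v in self.dirty_page_table.items()
                                 if k[0] != xid}

    def abort(self, xid):
        for (t, page_id), old_slot in list(self.dirty_page_table.items()):
            if t == xid:                           # restore the previous version
                self.mapping_table[page_id] = old_slot
                del self.dirty_page_table[(t, page_id)]
        for page_id, slot in list(self.mapping_table.items()):
            if self.pcm[slot][0] == xid:           # drop pages it newly created
                del self.mapping_table[page_id]
        self.active_tx_list.discard(xid)

db = PCMLoggingSketch()
db.flush_dirty_page(xid=3, page_id="P5", data="v1")
db.commit(3)
```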

  34. Experimental evaluation • Simulator based on DiskSim • TPC-C benchmark • DRAM: 64MB • Tuple-based buffering (PL = PCMLogging)

  35. PCM for auxiliary memory • PCM as secondary storage • Accelerating In-Page Logging with Non-Volatile Memory (TCDE 2010) • IPL-P: In-Page Logging with PCRAM (VLDB 2011 demo) • Motivation • The IPL scheme combined with PCRAM can improve the performance of flash-based database systems by storing frequently written log records in PCRAM • IPL was originally proposed in Design of Flash-Based DBMS: An In-Page Logging Approach (SIGMOD 2007)

  36. In-Page Logging • Introduction • On flash, updating a single record may result in invalidating the whole current page • Sequential logging approaches incur expensive merge operations • IPL co-locates a data page and its log records in the same physical flash block • Design of Flash-Based DBMS: An In-Page Logging Approach (SIGMOD 2007)

  37. In-Page Logging • [Figure: in the database buffer, each in-memory data page (8KB) has an in-memory log sector (512B) that collects its updates; in flash memory, a 128KB physical block holds 15 data pages plus an 8KB log region of 16 sectors (512B each)]

  38. In-Page Logging • [Figure: when the log region of a block fills up, the block is merged: its log records are applied to the data pages and the result is rewritten to a new block]
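A minimal sketch of the in-page logging behavior shown in the two figures above (sizes follow the figure; the class and method names are made up): each update appends a small log record to the block's log region, reads apply pending log records on the fly, and a full log region triggers a merge. With IPL-P, the log region would live in PCRAM and be flushed at 128B granularity rather than in full 512B flash sectors.

```python
# Sketch of in-page logging: a block keeps data pages plus a small log region.

LOG_SECTORS_PER_BLOCK = 16

class IPLBlock:
    def __init__(self, pages):
        self.pages = dict(pages)          # page_id -> dict of field -> value
        self.log_region = []              # appended log records (small writes)

    def update(self, page_id, field, value):
        if len(self.log_region) == LOG_SECTORS_PER_BLOCK:
            self.merge()                  # log region full: merge, then continue
        self.log_region.append((page_id, field, value))   # small log write

    def merge(self):
        for page_id, field, value in self.log_region:     # apply logs in order
            self.pages[page_id][field] = value
        self.log_region = []              # rewritten block starts with empty logs

    def read(self, page_id):
        page = dict(self.pages[page_id])  # stored page plus pending log records
        for pid, field, value in self.log_region:
            if pid == page_id:
                page[field] = value
        return page

blk = IPLBlock({0: {"name": "alice"}, 1: {"name": "bob"}})
blk.update(0, "name", "carol")
print(blk.read(0))   # {'name': 'carol'} without rewriting the whole page
```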

  39. In-Page Logging • Cons (of keeping the log region on flash) • The unit of a log write is a full sector (512B) • Only SLC-type NAND flash supports partial programming • The amount of log records for a page is usually small

  40. In-Page Logging • Pros (of keeping the log region in PCRAM) • Log records can be flushed at a finer granularity • Low latency when flushing log records • PCRAM is faster than flash memory for small reads • Either SLC or MLC flash memory can be used with the IPL policy

  41. Experimental evaluation • Accelerating In-Page Logging with Non-Volatile Memory (TCDE 2010) • A trace-driven simulation • An IPL module implemented in the B+-tree-based Berkeley DB • Inserting/searching a million key-value records • In-memory log sector of 128B or 512B

  42. Experimental evaluation • IPL-P: In-Page Logging with PCRAM (VLDB 2011 demo) • Hardware platform • PCRAM (512MB, write granularity: 128B) • Intel X25-M SSD (USB interface) • Workload • Inserting/searching/updating a million key-value records • B+-tree-based Berkeley DB • Page size: 8KB

  43. Outline • Introduction • Integrating PCM into the Memory Hierarchy • PCM for main memory • PCM for auxiliary memory • Conclusion

  44. Conclusion • PCM is expected to play an important role in the memory hierarchy • It is important to consider PCM’s read/write asymmetry when designing PCM-friendly algorithms • Integrating PCM into a hybrid memory system might be more practical • If PCM is used as main memory, some system software (e.g., main-memory database systems) has to be revised to address PCM-specific challenges

  45. Thank You!
