
Concurrent Cache-Oblivious B-trees Using Transactional Memory

Jim Sukha and Bradley Kuszmaul, MIT CSAIL. June 10, 2006.

Presentation Transcript


  1. Concurrent Cache-Oblivious B-trees Using Transactional Memory. Jim Sukha and Bradley Kuszmaul, MIT CSAIL. June 10, 2006.

  2. Thought Experiment

  Imagine that, one day, you are assigned the following task: "Enclosed is code for a serial, cache-oblivious B-tree. We want a reasonably efficient parallel implementation that works for disk-resident data. Attachment: COB-tree.tar.gz. PS. We want to be able to restore the data to a consistent state after a crash too. PPS. Our deadline is next week. Good luck!"

  3. Concurrent COB-tree? Question: How can one program a concurrent, cache-oblivious B-tree? Approach: We employ transactional memory. What complications does I/O introduce?

  4. Potential Pitfalls Involving I/O Suppose our data structure resides on disk. • We might need to make explicit I/O calls to transfer blocks between memory and disk. But a cache-oblivious algorithm doesn’t know the block size B! • We might need buffer management code if the data doesn’t fit into main memory. • We might need to unroll I/O if we abort a transaction that has already written to disk.

  5. Our Solution: Libxac • We have implemented Libxac, a page-based transactional memory system that operates on disk-resident data. Libxac supports ACID transactions on a memory-mapped file. • Using Libxac, we are able to implement a complex data structure that operates on disk-resident data, e.g. a cache-oblivious B-tree.

  6. Libxac Handles Transaction I/O • We might need to make explicit I/O calls to transfer blocks between memory and disk. Similar to mmap, Libxac provides a function xMmap. Thus, we can operate on disk-resident data without knowing block size. • We might need buffer management code if the data doesn’t fit into main memory. Like mmap, the OS automatically buffers pages in memory. • We might need to unroll I/O if we abort a transaction that has already written to disk. Since Libxac implements multiversion concurrency control, we still have the original version of a page even if a transaction aborts.

  7. Outline • Programming with Libxac • Cache-Oblivious B-trees

  8. Example Program with Libxac

      int main(void) {
          int* x;
          int status = FAILURE;
          xInit("/logs", DURABLE);     /* initialize the runtime; for durable
                                          transactions, logs are stored in the
                                          specified directory */
          x = xMmap("input.db", 4096); /* transactionally map the first page
                                          of the input file */
          while (status != SUCCESS) {
              xbegin();                /* transaction body; the body can be a
                                          complex function (e.g., a
                                          cache-oblivious B-tree insert!) */
              x[0]++;
              status = xend();
          }
          xMunmap(x);                  /* unmap the region */
          xShutdown();                 /* shut down the runtime */
          return 0;
      }

  * Currently Libxac logs the transaction commits, but we haven't implemented the recovery program yet.

  9. Libxac Memory Model

  • Aborted transactions are visible to the programmer; thus, the programmer must explicitly retry the transaction.
  • Control flow always proceeds from xbegin() to xend(). Thus, the transaction body can contain system/library calls.
  • At xend(), all changes to the xMmap'ed region are discarded on FAILURE, or committed on SUCCESS.
  • Aborted transactions always see consistent state. Read-only transactions can always succeed.

  * Libxac supports concurrent transactions on multiple processes, not threads.

  10. Implementation Sketch • Libxac detects memory accesses by using a SIGSEGV handler to catch a memory protection violation on a page that has been mmap’ed. • This mechanism is slow for normal transactions: • Time for mmap, SIGSEGV handler: ~ 10 ms • Efficient if we must perform disk I/O to log transaction commits. • Time to access disk: ~ 10 ms

  11. Is xMmap practical? Experiment on a 4-processor AMD Opteron: 100,000 insertions of elements with random keys into a B-tree, with each insert as a separate transaction. Libxac and BDB both implement group commit; the B-tree and COB-tree both use Libxac. Note that none of the three data structures has been properly tuned. Conclusion: we should be able to achieve good performance.

  12. Outline • Programming with Libxac • Cache-Oblivious B-trees

  13. What is a Cache-Oblivious B-tree? • A cache-oblivious B-tree (e.g., [BDFC00]) is a dynamic dictionary data structure that supports searches, insertions/deletions, and range queries. • A cache-oblivious algorithm/data structure does not know system parameters (e.g., the block size B). • Theorem [FLPR99]: a cache-oblivious algorithm that is optimal for a two-level memory hierarchy is also optimal for a multi-level hierarchy.

  14. Cache-Oblivious B-Tree Example

  [Figure: a static cache-oblivious tree indexing a packed memory array holding the keys 4, 10, 16, 21, 38, 45, 54, 83 with gaps.]

  The COB-tree can be divided into two pieces:
  • A packed memory array (PMA) that stores the data in order, but contains gaps.
  • A static cache-oblivious binary tree that indexes the packed memory array.
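As a concrete (hypothetical) representation, the two pieces might be declared like this in C; the struct and field names are illustrative and do not come from the talk.

```c
#include <stddef.h>

#define EMPTY_SLOT (-1)

/* A cache-oblivious B-tree has two parts: a packed memory array
   holding the keys in order with gaps, and a static binary tree
   (stored in one flat array, laid out cache-obliviously) that
   indexes sections of the PMA. */
typedef struct {
    int   *pma;          /* keys in order; EMPTY_SLOT marks a gap      */
    size_t pma_size;     /* number of slots (occupied + gaps)          */
    int   *static_index; /* static tree nodes, cache-oblivious layout  */
    size_t index_size;   /* number of tree nodes                       */
} cob_tree;
```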

  15. Cache-Oblivious B-Tree Insert

  [Figure: the static tree and packed memory array, before the insert.]

  To insert a key of 37:

  16. Cache-Oblivious B-Tree Insert

  [Figure: the static tree directs the search for 37 to the correct section of the PMA.]

  To insert a key of 37:
  • Find the correct section of the PMA using the static tree.

  17. Cache-Oblivious B-Tree Insert

  [Figure: 37 is inserted into its section of the PMA.]

  To insert a key of 37:
  • Find the correct section of the PMA using the static tree.
  • Insert into the PMA. This step may cause a rebalance of the PMA.

  18. Cache-Oblivious B-Tree Insert

  [Figure: the packed memory array after the rebalance.]

  To insert a key of 37:
  • Find the correct section of the PMA using the static tree.
  • Insert into the PMA. This step possibly requires a rebalance.
  • Fix the static tree.

  19. Cache-Oblivious B-Tree Insert

  [Figure: the static tree after being fixed; its index keys are now 4, 10, 16, 21, 37, 40, 56, 83.]

  To insert a key of 37:
  • Find the correct section of the PMA using the static tree.
  • Insert into the PMA. This step possibly requires a rebalance.
  • Fix the static tree.

  20. Cache-Oblivious B-Tree Insert

  [Figure: the completed insert.]

  Insert is a complex operation. If we wanted to use locks, what is the locking protocol? What is the right (cache-oblivious?) lock granularity?

  21. Conclusions

  A page-based TM system such as Libxac:
  • Is a good match for disk-resident data structures: the per-page overheads of TM are small compared to the cost of I/O.
  • Is easy to program with: Libxac allows us to program a concurrent, disk-resident data structure with ACID properties, as though it were stored in memory.

  22. Semantics of Local Variables

      int main(void) {
          int y = 0, z = 0, a = 0, b = 0;
          int* x;
          int status = FAILURE;
          xInit("/logs", DURABLE);
          x = xMmap("input.db", 4096);
          while (status != SUCCESS) {
              a++;
              xbegin();
              b = x[0];
              y++;
              x[0]++;
              z = x[0] - 1;
              status = xend();
          }
          xMunmap(x);
          xShutdown();
          return 0;
      }

  In this system, Libxac guarantees that after the loop completes, a == y: the value of a is the number of times the transaction was attempted. We always have b == z, because aborted transactions always see consistent state, even if other programs concurrently access the first page of input.db.

  23. TM System Improvements? Possible improvements to Libxac: • Provide more efficient support for non-durable transactions by modifying the OS to track and report the pages accessed? • Integrate Libxac with another TM system to provide concurrency control for both multiple threads and multiple processes?

  24-32. Implementation Sketch (animation)

      int a;
      xbegin();
      a = x[0];
      x[1024] += a + 1;
      xend();

  [Figure sequence: the memory map for x (one page each at x[0], x[1024], x[2048], x[3072]), alongside the input file, the buffer file, and the log file, as the transaction above executes.]

  • At xbegin(), every page of the xMmap'ed region is protected PROT_NONE.
  • Reading x[0] raises a segmentation fault; the handler remaps that page PROT_READ, and the read proceeds.
  • The read of x[1024] (for the +=) likewise faults and upgrades that page to PROT_READ.
  • The write to x[1024] raises a second fault on that page; the page's contents are first copied to the buffer file (preserving the original version), and the page is remapped PROT_READ|PROT_WRITE so the write can proceed.
  • At xend(), the commit is logged on disk, and the committed page's contents are copied back to the input file.

  33. Focus on PMA Rebalance

      insert(tree, key, value) {
          xbegin();
          x = find_location_in_pma(tree->static_index, key);
          insert_into_pma(tree->pma, key, value, x);  /* may trigger
                                                         rebalance(tree->pma) */
          fix_static_index(tree->static_index);
          xend();
      }

  In this talk, we illustrate the problems of transaction I/O by considering a transactional rebalance of the packed memory array.

  34. Rebalance of an In-Memory Array

  [Figure: an array containing the keys 0-6 with gaps, before and after rebalancing.]

      void rebalance(int* x, int n) {
          int i;
          int count = 0;
          // Slide everything left
          for (i = 0; i < n; i++) {
              if (x[i] != EMPTY_SLOT) {
                  x[count] = x[i];
                  count++;
              }
          }
          // Redistribute items from right
          int j = count - 1;
          double spacing = 1.0 * n / count;
          for (i = n - 1; i >= 0; i--) {
              if (floor(j * spacing) == i) {
                  x[i] = x[j];
                  j--;
              } else {
                  x[i] = EMPTY_SLOT;
              }
          }
      }

  35. Rebalance with Explicit I/O

  Why do we want to avoid performing explicit I/O to read/write data blocks?

      void rebalance_with_io(int n) {
          int i;
          int count = 0;
          int* y;
          int* z;
          y = read_block(0);
          z = read_block(0);
          for (i = 0; i < n; i++) {
              if (i % B == 0) {
                  y = read_block(i/B);
              }
              if (y[i%B] != EMPTY_SLOT) {
                  if (count % B == 0) {
                      z = read_block(count/B);
                  }
                  z[count%B] = y[i%B];
                  count++;
              }
          }
          ...
          write_block(…)
      }

  Issues:
  • What if the data does not fit into memory? We must have buffer management code somewhere.
  • A cache-oblivious algorithm does not know the value of B!

  36. Rebalance using Memory Mapping

      void rebalance_with_mmap(int n) {
          int i;
          int count = 0;
          x = mmap("input.db", n*sizeof(int));
          for (i = 0; i < n; i++) {
              if (x[i] != EMPTY_SLOT) {
                  x[count] = x[i];
                  count++;
              }
          }
          ...
          munmap(x, n*sizeof(int));
      }

  • What if the data does not fit into memory? If we use memory mapping, the OS automatically buffers the pages that are accessed; I/O is transparent to the user.
  • What value of B do we choose for a cache-oblivious algorithm? B does not appear in the application code.

  Using mmap, the code looks like the in-memory rebalance. But we still need concurrency control.

  37. Concurrent Rebalance?

  What happens if we want the rebalance to occur concurrently? (The code is the mmap-based rebalance from the previous slide.)

  • If we use locks, what do we choose as the locking granularity?
  • If we use transactions, will the system need to unroll I/O when a transaction aborts, to ensure the data on disk is consistent?

  38. Transactional Memory Mapping

      void rebalance_with_xMmap(int n) {
          int i;
          int count = 0;
          x = xMmap("input.db", n*sizeof(int));
          xbegin();
          for (i = 0; i < n; i++) {
              if (x[i] != EMPTY_SLOT) {
                  x[count] = x[i];
                  count++;
              }
          }
          ...
          xend();
          xMunmap(x, n*sizeof(int));
      }

  Our solution: replace mmap with xMmap, and use transactions for concurrency control. The transaction system maintains multiple versions of a page to avoid needing to unroll I/O. Transactional memory mapping simplifies the code for a concurrent disk-resident data structure.

  39. Cache-Oblivious B-Tree Insert

  [Figure: the packed memory array after adding 37; the affected section's density threshold is 2.5 ≤ n ≤ 7.5, and the enclosing window's is 6 ≤ n ≤ 14.]

  2(a) Add 37 to the packed memory array.

  40. Cache-Oblivious B-Tree Insert

  [Figure: the packed memory array after the rebalance; all sections are back within their density thresholds (2.5 ≤ n ≤ 7.5 and 6 ≤ n ≤ 14).]

  2(a) Add 37 to the packed memory array.
  2(b) Rebalance the PMA.

  41. Dictionary Operations using a B+-tree

  [Figure: a B+-tree with data stored at the leaves.]

  The branching factor of the tree and the size of a block on disk are both Θ(B). For a B+-tree, the data is stored at the leaves. The keys at an interior node represent the maximum key of that node's subtree. But to build the tree, we need to know the value of B…

  42. Cache-Oblivious B-tree [BDFC00]

      Operation                 Cost in Block Transfers
      Search(key)               O(log_B N)
      Insert(key, value)        O(log_B N)**
      Delete(key, value)        O(log_B N)**
      RangeQuery(start, end)    O(log_B N + k/B)*

      *  Bound assumes the range query finds k items with keys between start and end.
      ** Amortized bound.

  It is possible to support dictionary operations cache-obliviously, i.e., with a data structure that does not know the value of B. The cache-oblivious B-tree (COB-tree) achieves the same asymptotic (amortized) bounds as a B+-tree.

  43. Cache-Oblivious B-Tree Overview

  [Figure: a static cache-oblivious binary tree over a packed memory array.]

  The static tree is used as an index into a packed memory array. To perform an insert, insert into the packed memory array and update the static tree. When the packed memory array becomes too full (or too empty), rebuild and grow (or shrink) the entire data structure.

  44. Static Cache-Oblivious Binary Tree

  [Figure: a tree divided into a top subtree and Θ(N^(1/2)) bottom subtrees, each of size Θ(N^(1/2)); each subtree is recursively divided into subtrees of size Θ(N^(1/4)), and so on.]

  Divide the tree into Θ(N^(1/2)) subtrees of size Θ(N^(1/2)). Lay out each subtree contiguously in memory, recursively.

  45. Packed Memory Array

  [Figure: a packed memory array with nested sections of sizes 2^4 through 2^7 and their density thresholds marked.]

  A packed memory array uses a contiguous section of memory to store elements in order, but with gaps. For sections of size 2^k, gaps are spaced to maintain specified density thresholds that become arithmetically stricter as k increases.

  Example density thresholds:
  2^4: density in [4/16, 16/16]
  2^5: density in [5/16, 15/16]
  2^6: density in [6/16, 14/16]
  2^7: density in [7/16, 13/16]

  46. Example Cache-Oblivious B-Tree

  [Figure: the full example: the static cache-oblivious tree indexing the packed memory array, with the density thresholds [4/16, 16/16], [5/16, 15/16], [6/16, 14/16], and [7/16, 13/16] marked on nested sections.]

  47. Cache-Oblivious B-Tree Insert

  [Figure: the static tree and packed memory array, before inserting 37.]

  Insert 37:

  48. Cache-Oblivious B-Tree Insert

  [Figure: the static tree directs the search for 37 to the correct section of the PMA.]

  Insert 37:
  • Find the correct section of the PMA using the static tree.

  49. Cache-Oblivious B-Tree Insert

  [Figure: 37 is inserted into its section of the PMA.]

  Insert 37:
  • Find the correct section of the PMA using the static tree.
  • Insert into the PMA. This step possibly requires a rebalance.
