
The Buffer Tree



  1. The Buffer Tree Lars Arge Presented by Or Ozery

  2. I/O Model • Previously defined: • N = # of elements in input • M = # of elements that fit into memory • B = # of elements per block • Measuring in terms of # of blocks: • n = N / B • m = M / B

  3. I/O Model vs. RAM Model

  4. Online vs. Batched • Online problems: • A single command is given each time. • Must be processed before other commands are given. • Should be performed in a good worst-case time. • For example: searching. • Batched problems: • A stream of commands is given. • Can perform commands in any legal order. • Should be performed in a good amortized time. • For example: sorting.

  5. Motivation • We’ve seen that using an online-efficient data structure (B-tree) for a batched problem (sorting) is inefficient. • We thus would like to design a data structure for efficient use on batched problems, such as: • Sorting • Minimum reporting (priority queue) • Range searching • Interval stabbing

  6. The Main Idea • There are 2 reasons why B-tree sorting is inefficient: • We work element-wise instead of block-wise. • We don’t take advantage of the memory size m. • We can fix both problems by using buffers: • Buffers allow us to accumulate elements into blocks. • Using buffers of size Θ(m), we fully utilize the memory.

  7. The Buffer Tree • (m/4, m)-tree ⇒ branching factor Θ(m). • Elements are stored in leaves, in blocks ⇒ O(n) leaves. • Each internal node has a buffer of size m blocks.

  8. Basic Properties • The height of the tree is O(log_m n). • The number of internal nodes is O(n/m). • From now on define: • Leaf nodes: nodes whose children are leaves. • Internal nodes: nodes that are not leaf nodes. • The buffer tree uses linear space: • Each leaf takes O(1) space ⇒ O(n) space. • Each node takes O(m) space ⇒ O(n) space.
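
As a concrete picture of slides 7–8, here is a minimal in-memory Python sketch of a buffer-tree node. The class and field names are illustrative assumptions (not from Arge's paper), and B and M are toy values standing in for the parameters of slide 2.

```python
# Minimal in-memory sketch of a buffer-tree node (illustrative names only).
# B = elements per block, M = elements that fit in memory, m = M / B blocks.
B = 4            # toy block size
M = 16           # toy memory size
m = M // B       # memory size in blocks

class BufferTreeNode:
    def __init__(self, is_leaf_node=False):
        self.keys = []            # Theta(m) separator keys used for routing
        self.children = []        # child nodes
        self.buffer = []          # pending, time-stamped operations (up to M elements)
        self.is_leaf_node = is_leaf_node
        self.leaf_blocks = []     # leaf nodes only: sorted blocks of B elements each

    def buffer_full(self):
        # The buffer-emptying process of slides 9-11 triggers at M elements.
        return len(self.buffer) >= M
```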

  9. Processing Commands • We wait until we have a block of commands, then we insert it into the buffer of the root. • Because we process commands lazily, we need to time-stamp them. • When the buffer of the root gets full, we empty it using a buffer-emptying process (BEP): • We distribute its elements to the buffers one level down. • If any of the child buffers gets full, we continue recursively.

  10. Internal Node BEP • Sort the elements in the buffer while cancelling matching insert/delete pairs. • Scan through the sorted buffer and distribute the elements to the appropriate buffers one level down. • If any of the child buffers is now full, run the appropriate BEP recursively. • An internal node BEP takes O(x + m), where x is the number of blocks in the buffer.
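
A hedged sketch of the sort/cancel/distribute logic just described, reusing the `BufferTreeNode` sketch above and assuming each buffered operation is a tuple `(key, timestamp, kind)` with `kind` either `'insert'` or `'delete'`. It illustrates the control flow only, not the I/O-level block handling.

```python
import bisect

def empty_internal_buffer(node):
    # 1. Sort by key, then by timestamp, so an insert and a later delete of
    #    the same key become adjacent and can cancel each other out.
    ops = sorted(node.buffer, key=lambda op: (op[0], op[1]))
    node.buffer = []

    survivors = []
    for op in ops:
        if (survivors and survivors[-1][0] == op[0]
                and survivors[-1][2] == 'insert' and op[2] == 'delete'):
            survivors.pop()                 # matching insert/delete pair cancels
        else:
            survivors.append(op)

    # 2. Scan the sorted survivors and push each one level down, choosing
    #    the child buffer by the node's separator keys.
    for op in survivors:
        child = node.children[bisect.bisect_right(node.keys, op[0])]
        child.buffer.append(op)

    # 3. Recurse on any child whose buffer is now full (a leaf-node child
    #    would instead run the leaf-node BEP of slide 11).
    for child in node.children:
        if child.buffer_full() and not child.is_leaf_node:
            empty_internal_buffer(child)
```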

  11. Leaf Node BEP • Sort the elements in the buffer as for internal nodes. • Merge the sorted buffer with the leaves of the node. • If the number of leaves increased: • Place the smallest elements in the leaves of the node. • Repeatedly insert one block of elements and rebalance. • If the number of leaves decreased: • Place the elements in sorted order in the leaves, and append “dummy-blocks” at the end. • Repeatedly delete one dummy block and rebalance.
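
The merge step of the leaf-node BEP can be pictured as follows, using the same `(key, timestamp, kind)` tuples as above. This is a simplification: it ignores timestamps when matching deletes against existing elements, and it leaves the rebalancing of the resulting leaves (slides 12–14) out entirely.

```python
import heapq

def merge_buffer_into_leaves(sorted_leaf_keys, sorted_buffer_ops):
    """Merge the key-sorted buffer with the sorted leaf elements:
    inserts are added, and each delete removes one matching element."""
    inserts = [op[0] for op in sorted_buffer_ops if op[2] == 'insert']
    deletes = {}                                 # key -> number of pending deletes
    for op in sorted_buffer_ops:
        if op[2] == 'delete':
            deletes[op[0]] = deletes.get(op[0], 0) + 1

    merged = []
    for key in heapq.merge(sorted_leaf_keys, inserts):   # single sorted scan
        if deletes.get(key, 0) > 0:
            deletes[key] -= 1                    # element is deleted; skip it
        else:
            merged.append(key)
    return merged                                # new leaf contents, still sorted
```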

  12. Rebalancing - Fission

  13. Rebalancing - Fusion

  14. Rebalancing Cost • Rebalancing starts when inserting/deleting a block. • The leaf node which sparked the rebalancing will not cause rebalancing for the next O(m) inserts/deletes. • Thus the total number of rebalancing operations on leaf nodes is O(n/m). • Each rebalancing operation on a leaf node can span O(log_m n) rebalancing operations. • So there are O((n/m) log_m n) rebalancing operations, each costing O(m) ⇒ rebalancing takes O(n log_m n).

  15. Summing Up • We’ve seen rebalancing takes O(n log_m n). • BEP cost: • The BEP of a full buffer is linear in the number of blocks in the buffer ⇒ each element pays O(1/B) to be pushed one level down the tree. • Because there are O(log_m n) levels in the tree, each element pays O(log_m n / B) ⇒ the BEPs take O(n log_m n) in total. • Therefore, a sequence of N operations on an empty buffer tree takes O(n log_m n).
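
Written out as a formula, the accounting on this slide (with n = N / B as on slide 2) is:

```latex
\underbrace{O(n \log_m n)}_{\text{rebalancing}}
\;+\;
\underbrace{N \cdot O\!\left(\tfrac{1}{B}\,\log_m n\right)}_{\text{buffer emptying}}
\;=\;
O(n \log_m n) \;+\; O\!\left(\tfrac{N}{B}\,\log_m n\right)
\;=\; O(n \log_m n)
```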

  16. Sorting • After inserting all N items into the tree, we need to empty all the buffers. We do this in BFS order. • How much does emptying all buffers cost? • Emptying a (possibly non-full) buffer takes O(m) amortized. • There are O(n/m) buffers ⇒ the total cost is O(n). • Thus sorting using a buffer tree takes O(n log_m n).

  17. Priority Queue • We can easily transform our buffer tree into a PQ by adding support for a delete-min operation: • The smallest element is found on the path from the root to the leftmost leaf. • Therefore a delete-min operation will empty all the buffers on the above path, in O(m log_m n). • To make up for the above cost, we delete the M/4 smallest elements and keep them in memory. • This way we can answer the next M/4 delete-mins for free. • Thus our PQ supports N operations in O(n log_m n).
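
An in-memory analogue of this delete-min batching is sketched below; a plain Python list stands in for the buffer tree itself, and the class name and the handling of inserts that undercut the cached minimum are my assumptions, not the paper's code.

```python
import heapq

class BatchedPQ:
    """Delete-min fetches the M/4 smallest elements at once, so the expensive
    refill (in the real structure: emptying the buffers on the leftmost
    root-to-leaf path) happens only once per M/4 deletions."""
    def __init__(self, M=16):
        self.M = M
        self.external = []      # stand-in for elements stored in the tree
        self.cache = []         # the <= M/4 smallest elements, kept in memory

    def insert(self, x):
        # Invariant: every cached element is <= every external element, so an
        # insert smaller than the cached maximum must join the cache.
        if self.cache and x < max(self.cache):
            heapq.heappush(self.cache, x)
        else:
            self.external.append(x)

    def delete_min(self):
        if not self.cache:
            # Simulates the O(m log_m n) path emptying: pull out the
            # M/4 smallest elements and keep them in memory.
            self.external.sort()
            k = max(1, self.M // 4)
            self.cache, self.external = self.external[:k], self.external[k:]
            heapq.heapify(self.cache)
        return heapq.heappop(self.cache)
```

Because the refill is paid only once per M/4 delete-mins, the amortized cost per operation matches the O((1/B) log_m n) per-element charge of slide 15.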

  18. Time-Forward Processing • The problem: • We are given a topologically ordered DAG. • For each vertex v there is a function f_v which depends on all f_u where u is a predecessor of v. • The goal is to compute f_v for all v.

  19. TFP Using Our PQ • For each vertex v (in topological order): • Extract the d⁻(v) smallest elements from the PQ, where d⁻(v) is the in-degree of v. • Use the extracted elements to compute f_v. • For each edge (v, u), insert f_v into the PQ with priority u. • The above works in O(n log_m n).
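
A sketch of this loop with an ordinary binary heap standing in for the buffer-tree priority queue; function and parameter names are illustrative, and `local_fn(v, inputs)` is assumed to be the rule that computes f_v from the values sent by v's predecessors.

```python
import heapq
from collections import defaultdict
from itertools import count

def time_forward(vertices, edges, local_fn):
    """`vertices` must be listed in topological order; `edges` is a list of
    (v, u) pairs meaning v is a predecessor of u."""
    succ = defaultdict(list)
    for v, u in edges:
        succ[v].append(u)
    position = {v: i for i, v in enumerate(vertices)}   # priority = topological rank

    pq, tie, f = [], count(), {}
    for v in vertices:
        # Extract exactly the values addressed to v (one per incoming edge).
        inputs = []
        while pq and pq[0][0] == position[v]:
            inputs.append(heapq.heappop(pq)[2])
        f[v] = local_fn(v, inputs)
        # Forward f_v to every successor u, keyed by u's topological rank.
        for u in succ[v]:
            heapq.heappush(pq, (position[u], next(tie), f[v]))
    return f
```

For example, `time_forward(['a', 'b', 'c'], [('a', 'c'), ('b', 'c')], lambda v, xs: sum(xs) + 1)` returns `{'a': 1, 'b': 1, 'c': 3}`.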

  20. Buffered Range Tree • We want to extend our tree to support range queries: • Given an interval [x1, x2], report all elements of the tree that are contained in it. • How will we distribute the query elements when emptying a buffer? • As long as the interval is contained in a sub-tree, send the query element to the root buffer of that sub-tree. • Otherwise, we split the query into its 2 query elements, and report the elements in the relevant sub-trees.

  21. Time Order Representation • We say that a list of elements is in time order representation (TOR) if it’s of the form D-S-I, where: • D is a sorted list of delete elements. • S is a sorted list of query elements. • I is a sorted list of insert elements. • Lemma 1: • A non-full buffer can be brought into TOR in O(m + r), where r · B is the number of elements reported to queries in the process.

  22. Merging of TOR Lists • Lemma 2: • Let S1 and S2 be TOR lists such that all elements of S2 are older than the elements of S1. S1 and S2 can be merged into a TOR list in O(s1 + s2 + r), where s1 and s2 are the sizes in blocks of S1 and S2, and r · B is the number of elements reported to queries in the process.

  23. Proof of Lemma 2 • Let S_j = d_j - s_j - i_j (its delete, query, and insert parts). [Figure: the sub-lists arranged along the time axis.]

  24. Full Sub-Tree Reporting • Lemma 3: • All buffers of a sub-tree with x leaves can be emptied and collected into a TOR list in O(x + r). • Proof: • For each level, prepare a TOR list of its elements. • Merge the TOR lists of all levels. [Figure: the sub-tree after step 1 and after step 2.]

  25. Internal Node BEP • Compute the TOR of the buffer. • Scan the delete elements and distribute them. • Scan the range-search elements and determine which sub-trees should have their elements reported. • For each such sub-tree: ◦ Remove its delete elements from step 2 and store them in a temporary place. ◦ Collect the elements of the sub-tree into a TOR list. ◦ Merge this TOR with the TOR of the removed delete elements. ◦ Distribute the insert and delete elements to the leaf buffers. ◦ Merge a copy of the leaves with the TOR. ◦ Remove the range-search elements from the TOR. ◦ Report the resulting elements to the queries that asked for them. • Distribute the range-search elements. • Distribute the insert elements (if a sub-tree was emptied, to its leaf buffers). • If any child buffer got full, apply the BEP recursively.

  26. Leaf Node BEP • Construct the TOR of the elements in the buffer. • Merge the TOR with the leaves. • Remove all range search elements and continue the BEP as in the normal buffer tree.

  27. Analysis • The main difference from the normal buffer tree is the action of reporting all elements of a sub-tree. • By Lemma 3, this action has a linear cost. • We can thus split this cost between the delete elements and the query elements, as each element gets either deleted or reported. • Thus, a series of N operations on our buffered range tree costs O(n log_m n + r).

  28. Orthogonal Line Intersection • The problem: • Given N line segments parallel to the axes, report all intersections of orthogonal segments.

  29. OLI Using Our Range Tree • Sort the segments, once by their top y coordinate and once by their bottom y coordinate. • Merge the 2 sorted lists of segments: • When encountering the top coordinate of a vertical segment, insert its x coordinate into the tree. • When encountering the bottom coordinate of a vertical segment, delete its x coordinate from the tree. • When encountering a horizontal segment, insert a range query for the interval spanned by its endpoints. • The above takes an optimal O(n log_m n + r).
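
For intuition, here is an internal-memory analogue of this sweep, with a sorted Python list (via `bisect`) standing in for the buffered range tree. The input formats and the tie-breaking at equal y are my assumptions.

```python
import bisect

def orthogonal_intersections(vertical, horizontal):
    """`vertical` holds (x, y_bottom, y_top); `horizontal` holds (y, x_left, x_right)."""
    events = []
    for i, (x, yb, yt) in enumerate(vertical):
        events.append((yt, 0, ('insert', x, i)))     # top endpoint: insert x
        events.append((yb, 2, ('delete', x, i)))     # bottom endpoint: delete x
    for j, (y, xl, xr) in enumerate(horizontal):
        events.append((y, 1, ('query', xl, xr, j)))  # range query [xl, xr]

    # Sweep from top to bottom; at equal y handle insert, then query, then
    # delete, so segments that merely touch are still reported.
    events.sort(key=lambda e: (-e[0], e[1]))

    active, out = [], []                             # sorted (x, id) of active verticals
    for _, _, ev in events:
        if ev[0] == 'insert':
            bisect.insort(active, (ev[1], ev[2]))
        elif ev[0] == 'delete':
            active.remove((ev[1], ev[2]))
        else:
            _, xl, xr, j = ev
            lo = bisect.bisect_left(active, (xl, float('-inf')))
            hi = bisect.bisect_right(active, (xr, float('inf')))
            for _x, i in active[lo:hi]:
                out.append((i, j))                   # vertical i meets horizontal j
    return out
```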

  30. Buffered Segment Tree • We swap the roles of points and intervals: • We insert and delete intervals from the tree. • We use points as queries, to report all intervals stabbed by the query point. • We assume the intervals have (distinct) endpoints from a fixed given set E of size N. • The elements in the leaves will be the points of E. • We build our tree bottom-up in O(n).

  31. Buffered Segment Tree • Define: slabs, multi-slabs, short/long segments. • The x-ranges of a node’s children partition its range into consecutive slabs; a multi-slab is a contiguous range of slabs. • A segment is long at a node if it completely spans at least one slab, and short otherwise. [Figure: segments A–F over the slabs of a node.]
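
A small sketch of these definitions; the boundary convention is my assumption (in the tree, the boundaries come from the endpoints of E that separate a node's children).

```python
import bisect

def classify_segment(boundaries, segment):
    """`boundaries` are the slab boundaries b_0 < b_1 < ... < b_k under one
    node (slab i is [b_i, b_{i+1}]); `segment` = (x1, x2) with x1 <= x2.
    Returns 'long' plus the spanned multi-slab, or 'short'."""
    x1, x2 = segment
    first = bisect.bisect_left(boundaries, x1)       # first boundary >= x1
    last = bisect.bisect_right(boundaries, x2) - 1   # last boundary <= x2
    if last - first >= 1:                            # at least one full slab spanned
        return 'long', (first, last - 1)             # spans slabs first .. last-1
    return 'short', None

# e.g. with boundaries [0, 10, 20, 30]:
#   classify_segment([0, 10, 20, 30], (5, 25))  -> ('long', (1, 1))   spans slab [10, 20]
#   classify_segment([0, 10, 20, 30], (12, 18)) -> ('short', None)    inside a single slab
```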

  32. Internal Node BEP • Repeatedly load m/2 blocks of elements into memory, and perform the following: • For every multi-slab list insert the relevant long segments. • For every multi-slab list that is stabbed by a point, report intervals and remove expired ones. • Distribute segments and queries. • If there’s a full child buffer, apply BEP recursively. • The above costs O(m + x + r) = O(x + r) amortized.

  33. Analysis • Because the tree structure is static, there is no rebalancing, and also no emptying of non-full buffers. • Therefore the only cost is the emptying of full buffers, which is linear. • Thus a series of N operations on our segment tree takes O(n log_m n + r). • A write (flush) operation takes O(n log_m n). • Therefore we have the desired O(n log_m n + r).

  34. Batched Range Searching • The problem: • Given N points and N axis parallel rectangles in the plane, report all points inside each rectangle.

  35. BRS Using Our Segment Tree • Sort the points and rectangles by y coordinate (for a rectangle, the y coordinate of its top edge). • Scan the sorted list: • For each rectangle, insert the interval that corresponds to its horizontal side, with a delete time matching its bottom y coordinate. • For each point, insert a stabbing query. • Flush the tree (empty all buffers). • The above takes an optimal O(n log_m n + r).
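
An internal-memory analogue of this sweep, with a brute-force set of active intervals standing in for the buffered segment tree; the input formats are my assumptions.

```python
def batched_range_search(points, rects):
    """`points` are (x, y) pairs; `rects` are (x1, x2, y_bottom, y_top) tuples."""
    events = []
    for r, (x1, x2, yb, yt) in enumerate(rects):
        events.append((yt, 0, ('insert', r)))   # top edge: interval becomes active
        events.append((yb, 2, ('delete', r)))   # bottom edge: interval expires
    for p, (x, y) in enumerate(points):
        events.append((y, 1, ('query', x, p)))  # stabbing query at x
    events.sort(key=lambda e: (-e[0], e[1]))    # sweep by decreasing y

    active, out = set(), []
    for _, _, ev in events:
        if ev[0] == 'insert':
            active.add(ev[1])
        elif ev[0] == 'delete':
            active.discard(ev[1])
        else:
            _, x, p = ev
            for r in active:                    # brute-force interval stabbing
                x1, x2, _, _ = rects[r]
                if x1 <= x <= x2:
                    out.append((p, r))          # point p lies in rectangle r
    return out
```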

  36. Pairwise Rectangle Intersection • The problem: • Given N axis parallel rectangles in the plane, report all intersecting pairs.

  37. PRI Using Our Segment Tree • 2 rectangles in the plane intersect ⇔ one of the following holds: • They have intersecting edges. • One contains the other ⇒ one contains the other’s midpoint, so it is enough to test midpoint containment. • We have shown an O(n log_m n + r) solution for both (1) and (2). • Therefore we have an optimal O(n log_m n + r) solution for the PRI problem.
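
For a single pair, the characterization itself can be checked directly as below (the solution in the slides of course batches these tests through the range and segment trees); the rectangle format (x1, x2, y1, y2) is my assumption.

```python
def rectangles_intersect(a, b):
    """Two axis-parallel rectangles (x1, x2, y1, y2) with x1 <= x2, y1 <= y2
    intersect iff an edge of one crosses an edge of the other, or one
    rectangle contains the other's midpoint."""
    def edges_cross(r, s):
        # Does a vertical edge of r cross a horizontal edge of s?
        rx1, rx2, ry1, ry2 = r
        sx1, sx2, sy1, sy2 = s
        return any(sx1 <= x <= sx2 and ry1 <= y <= ry2
                   for x in (rx1, rx2) for y in (sy1, sy2))

    def contains_midpoint(r, s):
        rx1, rx2, ry1, ry2 = r
        sx1, sx2, sy1, sy2 = s
        mx, my = (sx1 + sx2) / 2, (sy1 + sy2) / 2
        return rx1 <= mx <= rx2 and ry1 <= my <= ry2

    return (edges_cross(a, b) or edges_cross(b, a)
            or contains_midpoint(a, b) or contains_midpoint(b, a))
```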
