
Multi-dimensional Range Query Processing on the GPU


Presentation Transcript


  1. Multi-dimensional Range Query Processing on the GPU Beomseok Nam Data Intensive Computing Lab School of Electrical and Computer Engineering Ulsan National Institute of Science and Technology, Korea

  2. Multi-dimensional Indexing • One of the core technologies in GIS, scientific databases, computer graphics, etc. • A common access pattern into scientific datasets • Multidimensional range query • Retrieves data that overlaps a given range of values • Ex) SELECT temperature FROM dataset WHERE latitude BETWEEN 20 AND 30 AND longitude BETWEEN 50 AND 60 • Multidimensional indexing trees • KD-Trees, KDB-Trees, R-Trees, R*-Trees • Bitmap indexes • Multi-dimensional indexing is one of the things that do not parallelize well
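As a minimal illustration of the range query above, here is a hypothetical in-memory sketch in Python (the record fields and the `range_query` helper are invented for this example; a real system would evaluate such a filter through an index rather than a list comprehension):

```python
# Sketch of a 2D range query: keep every record whose coordinates
# fall inside the query box (both dimensions are closed intervals).
def range_query(records, box):
    """records: dicts with 'lat'/'lon'; box: ((lat_lo, lat_hi), (lon_lo, lon_hi))."""
    (lat_lo, lat_hi), (lon_lo, lon_hi) = box
    return [r for r in records
            if lat_lo <= r["lat"] <= lat_hi and lon_lo <= r["lon"] <= lon_hi]

data = [
    {"lat": 25, "lon": 55, "temperature": 31.0},   # inside the box below
    {"lat": 10, "lon": 55, "temperature": 28.5},   # latitude out of range
    {"lat": 22, "lon": 70, "temperature": 33.2},   # longitude out of range
]
# Mirrors: WHERE latitude BETWEEN 20 AND 30 AND longitude BETWEEN 50 AND 60
hits = range_query(data, ((20, 30), (50, 60)))
```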

  3. Multi-dimensional Indexing Trees: R-Tree • Proposed by Antonin Guttman (1984) • Data stored and indexed via nested MBRs (Minimum Bounding Rectangles) • Resembles a height-balanced B+-tree

  4. Multi-dimensional Indexing Trees: R-Tree • Proposed by A. Guttman • Data stored and indexed via nested MBRs (Minimum Bounding Rectangles) • Resembles a height-balanced B+-tree [Figure: an example structure of an R-tree. Source: http://en.wikipedia.org/wiki/Image:R-tree.jpg]
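The two MBR primitives an R-tree relies on — merging child rectangles into a parent rectangle and testing a rectangle against the query — can be sketched as follows (function names are illustrative, not from the talk):

```python
def mbr_union(a, b):
    """Union of two MBRs, each (xmin, ymin, xmax, ymax): how a parent
    node's rectangle minimally bounds its children."""
    return (min(a[0], b[0]), min(a[1], b[1]),
            max(a[2], b[2]), max(a[3], b[3]))

def mbr_overlaps(a, b):
    """True iff two MBRs intersect: the test used when descending the tree
    to decide which subtrees may contain query results."""
    return a[0] <= b[2] and b[0] <= a[2] and a[1] <= b[3] and b[1] <= a[3]
```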

  5. Motivation • GPGPU has emerged as a new HPC parallel computing paradigm • Scientific data analysis applications are major applications in the HPC market • A common access pattern into scientific datasets is the multi-dimensional range query • Q: How do we parallelize multi-dimensional range queries on the GPU?

  6. MPES (Massively Parallel Exhaustive Scan) • This is how the GPGPU is currently utilized • Achieves the maximum utilization of the GPU • Simple, BUT ALL of the dataset must be accessed [Figure: the total dataset is divided evenly among threads thread[0] … thread[K-1]]
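A sequential Python sketch of the MPES idea: the dataset is split into one contiguous chunk per thread and every chunk is scanned exhaustively (here the "threads" are simulated by a loop; the `mpes_scan` name and signature are hypothetical):

```python
def mpes_scan(points, box, num_threads=4):
    """Exhaustive scan: every point is visited.  points: tuples; box: one
    (lo, hi) interval per dimension."""
    chunk = -(-len(points) // num_threads)  # ceil division: chunk per thread
    hits = []
    for t in range(num_threads):            # each iteration plays one thread
        for p in points[t * chunk:(t + 1) * chunk]:
            if all(lo <= c <= hi for c, (lo, hi) in zip(p, box)):
                hits.append(p)
    return hits
```

The appeal is perfect load balance and no divergence; the cost is that every element is touched regardless of selectivity.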

  7. Parallel R-Tree Search • Basic idea • Compare a given query range with multiple MBRs of child nodes in parallel [Figure: each SP of an SMP compares one MBR of node E's children with query Qi (the ith query); nodes A–G reside in global memory]

  8. Recursive Search on the GPU simply does not work • Inherently spatial indexing structures such as R-Trees or KDB-Trees are not well suited to the CUDA environment • Irregular search paths and recursion make it hard to maximize the utilization of the GPU • The 48 KB shared memory will overflow when the tree height is > 5

  9. MPTS (Massively Parallel 3-Phase Scan) • Leftmost search • Choose the leftmost child node, no matter how many child nodes overlap • Rightmost search • Choose the rightmost child node, no matter how many child nodes overlap • Parallel scanning • Between the two leaf nodes, perform a massively parallel scan to filter out non-overlapping data elements [Figure: subtrees outside the two search frontiers are pruned out]
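The three phases can be sketched over an array of leaves sorted by some 1D ordering: find the leftmost and rightmost overlapping leaves, then scan and filter everything in between (a sequential stand-in for the massively parallel scan; `mpts_search` and its data layout are hypothetical):

```python
def mpts_search(leaves, box):
    """leaves: 2D points sorted by a 1D ordering; box: ((xlo, xhi), (ylo, yhi))."""
    def inside(p):
        return box[0][0] <= p[0] <= box[0][1] and box[1][0] <= p[1] <= box[1][1]
    # Phase 1: leftmost search -- first leaf overlapping the query.
    left = next((i for i, p in enumerate(leaves) if inside(p)), None)
    if left is None:
        return []
    # Phase 2: rightmost search -- last leaf overlapping the query.
    right = max(i for i, p in enumerate(leaves) if inside(p))
    # Phase 3: scan everything between the two frontiers, filtering out
    # non-overlapping elements (done by many threads on the GPU).
    return [p for p in leaves[left:right + 1] if inside(p)]
```

Everything outside `[left, right]` is pruned without being scanned; the tighter the two frontiers, the less phase 3 has to filter.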

  10. MPTS improvement using Hilbert Curve • Hilbert curve: a continuous fractal space-filling curve • Maps multi-dimensional points onto a 1D curve • Recursively defined curve • The Hilbert curve of order n is constructed from four copies of the Hilbert curve of order n-1, properly oriented and connected • Spatial locality preserving method • Nearby points in 2D are also close on the 1D curve [Figure: first-, second-, and third-order Hilbert curves. Image source: Wikipedia]
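A standard way to compute the Hilbert index of a 2D point is the classic iterative rotate-and-accumulate algorithm, shown here in Python for illustration (the slides do not prescribe an implementation):

```python
def xy2d(n, x, y):
    """Map a 2D cell (x, y) on an n-by-n grid (n a power of two) to its
    distance d along the Hilbert curve."""
    d = 0
    s = n // 2
    while s > 0:
        rx = 1 if (x & s) > 0 else 0
        ry = 1 if (y & s) > 0 else 0
        d += s * s * ((3 * rx) ^ ry)   # which quadrant, in curve order
        if ry == 0:                     # rotate the quadrant so recursion
            if rx == 1:                 # sees a consistently oriented curve
                x = s - 1 - x
                y = s - 1 - y
            x, y = y, x
        s //= 2
    return d
```

Sorting points by `xy2d` clusters spatially nearby points at nearby array positions, which is exactly what MPTS exploits.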

  11. MPTS improvement using Hilbert Curve • The Hilbert curve is well known for its spatial clustering property • Sort the data along the Hilbert curve • Clusters similar data nearby • The gap between the leftmost leaf node and the rightmost leaf node is reduced • The number of visited nodes decreases


  13. Drawback of MPTS • MPTS reduces the number of leaf nodes to be accessed, but it still accesses a large number of leaf nodes that do not contain the requested data • Hence we designed a variant of R-trees that works on the GPU without the stack problem and does not access leaf nodes that lack the requested data • MPHR-Trees (Massively Parallel Hilbert R-Trees)

  14. MPHR-tree (Massively Parallel Hilbert R-Tree) Bottom-up construction on the GPU 1. Sort data using the Hilbert curve index

  15. MPHR-tree (Massively Parallel Hilbert R-tree) Bottom-up construction on the GPU 2. Build R-trees in a bottom-up fashion, storing the maximum Hilbert value along with each MBR


  17. MPHR-tree (Massively Parallel Hilbert R-tree) Bottom-up construction on the GPU • Basic idea • Parallel reduction to generate the MBR of a parent node and to obtain its maximum Hilbert value [Figure: level-n nodes R3–R12 with Hilbert values 6, 26, 44, 47, 67, 96, 105, 130, 159 are reduced in parallel by threads thread[0] … thread[K-1] on SMP0–SMP2 into level-(n+1) nodes R1 (max 44) and R2 (max 96), building the tree bottom-up]
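The bottom-up construction can be sketched sequentially: sort by Hilbert key, then repeatedly reduce groups of `fanout` children into a parent holding the merged MBR and the maximum Hilbert value (the `build_mphr` helper and node layout are assumptions for illustration; on the GPU each group reduction is done by a thread block):

```python
def build_mphr(points, fanout=4):
    """points: (hilbert_key, (x, y)) pairs.  Returns the tree as a list of
    levels, leaves first; each node is (mbr, max_hilbert) with
    mbr = (xmin, ymin, xmax, ymax)."""
    points = sorted(points)                         # 1. sort by Hilbert key
    level = [((x, y, x, y), h) for h, (x, y) in points]
    levels = [level]
    while len(level) > 1:                           # 2. reduce level by level
        parents = []
        for i in range(0, len(level), fanout):
            group = level[i:i + fanout]
            mbr = (min(n[0][0] for n in group), min(n[0][1] for n in group),
                   max(n[0][2] for n in group), max(n[0][3] for n in group))
            # parent keeps the max Hilbert value of its subtree
            parents.append((mbr, max(n[1] for n in group)))
        level = parents
        levels.append(level)
    return levels

# Hilbert keys borrowed from the slide's figure; the 2D points are made up.
levels = build_mphr([(6, (0, 0)), (26, (1, 1)), (44, (2, 0)),
                     (47, (3, 3)), (67, (4, 1)), (96, (5, 5))], fanout=3)
```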

  18. MPHR-tree (Massively Parallel Hilbert R-tree) Searching on the GPU • Iterate leftmost search and parallel scan using the Hilbert curve index • leftmostSearch() visits the leftmost search path whose Hilbert index is greater than the given Hilbert index • Keep parallel scanning while there exist overlapping leaf nodes

  lastHilbertIndex = 0;
  while(1){
      leftmostLeaf = leftmostSearch(lastHilbertIndex, QueryMBR);
      if(leftmostLeaf < 0) break;
      lastHilbertIndex = parallelScan(leftmostLeaf);
  }

  [Figure: a two-level tree with internal nodes R1, R2 (Hilbert values 159, 231) over nodes R3–R7 (44, 96, 159, 210, 231) and leaves D1–D14 (6, 26, 44, 47, 67, 96, 105, 130, 159, 182, 200, 210, 224, 231); leftmost search locates a leaf, then the parallel scan proceeds rightward]
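The loop on the slide can be mirrored over a flat, Hilbert-sorted leaf array (a sequential sketch; `mphr_search` and its leaf representation are invented for this example — the real leftmostSearch descends the tree rather than scanning from the front):

```python
def mphr_search(leaves, box):
    """leaves: (hilbert_key, point) pairs sorted by key.  Mirrors the
    slide's loop: leftmostSearch(), then parallelScan(), until no
    overlapping leaf remains past the last scanned Hilbert index."""
    def inside(p):
        return all(lo <= c <= hi for c, (lo, hi) in zip(p, box))
    results, last_key = [], -1
    while True:
        # leftmostSearch: first leaf past last_key that overlaps the query
        start = next((i for i, (h, p) in enumerate(leaves)
                      if h > last_key and inside(p)), None)
        if start is None:
            break                       # leftmostLeaf < 0 on the slide
        # parallelScan: scan rightward while leaves keep overlapping
        i = start
        while i < len(leaves) and inside(leaves[i][1]):
            results.append(leaves[i][1])
            i += 1
        last_key = leaves[i - 1][0]     # resume past the scanned range
    return results

leaves = [(6, (0, 0)), (26, (1, 2)), (44, (1, 1)),
          (67, (5, 5)), (96, (2, 2)), (130, (2, 0))]
found = mphr_search(leaves, ((0, 2), (0, 2)))
```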

  19. MPTS vs MPHR-Tree • Search complexity of the MPHR-Tree: k is the number of leaf nodes that contain the requested data [Figure: side-by-side trees; MPTS prunes only the subtrees outside its two search frontiers, while the MPHR-Tree also skips non-overlapping subtrees in between]

  20. Braided Parallelism vs Data Parallelism • Braided Parallel Indexing • Multiple queries can be processed in parallel • Data Parallel Indexing (Partitioned Indexing) • A single query is processed by all the CUDA SMPs over partitioned R-trees [Figure: braided parallel indexing vs data parallel indexing]
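The two strategies can be contrasted with a toy sequential sketch — braided runs one whole query per worker, partitioned splits the data of a single query across workers (all names here are illustrative):

```python
def box_scan(points, box):
    """Filter points against a per-dimension (lo, hi) box."""
    return [p for p in points
            if all(lo <= c <= hi for c, (lo, hi) in zip(p, box))]

def braided(queries, points):
    # Braided: each simulated worker owns one whole query -> high throughput.
    return [box_scan(points, q) for q in queries]

def partitioned(query, points, num_parts=4):
    # Partitioned: all workers cooperate on one query over a partitioned
    # dataset -> low response time; partial results are concatenated.
    chunk = -(-len(points) // num_parts)
    hits = []
    for t in range(num_parts):
        hits.extend(box_scan(points[t * chunk:(t + 1) * chunk], query))
    return hits

pts = [(1, 1), (5, 5), (2, 3), (0, 9), (2, 2)]
q = ((0, 2), (0, 3))
```

Both paths return the same result set; the trade-off is throughput versus per-query latency, as the evaluation slides below report.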

  21. Performance Evaluation Experimental Setup (MPTS vs MPHR-tree) • CUDA Toolkit 5.0 • Tesla Fermi M2090 GPU card • 16 SMPs • Each SMP has 32 CUDA cores, enabling 512 (16×32) threads to run concurrently • Datasets • 40 million 4D points in uniform, normal, and Zipf distributions

  22. Performance Evaluation MPHR-tree Construction • 12 KB pages (fanout = 256), 128 CUDA blocks × 64 threads per block • It takes only 4 seconds to build an R-tree over 40 million data points, while the CPU takes more than 40 seconds (10× speedup) • Excluding memory transfer time, it takes only 50 msec (800× speedup)

  23. Performance Evaluation MPTS Search vs MPES Search • 12 KB pages (fanout = 256), 128 CUDA blocks × 64 threads per block, selection ratio = 1% • MPTS outperforms MPES and R-trees on a Xeon E5506 (8 cores) • In high dimensions MPTS accesses more memory blocks, but the number of instructions executed by a warp is smaller than with MPES

  24. Performance Evaluation MPHR-tree Search • 12 KB pages (fanout = 256), 128 CUDA blocks × 64 threads per block • The MPHR-tree consistently outperforms the other indexing methods • In terms of throughput, braided MPHR-Trees show an order of magnitude higher performance than multi-core R-trees and MPES • In terms of query response time, partitioned MPHR-trees show an order of magnitude faster performance than multi-core R-trees and MPES

  25. Performance Evaluation MPHR-tree Search • In a cluster environment, MPHR-Trees show an order of magnitude higher throughput than the LBNL FastQuery library • LBNL FastQuery is a parallel bitmap indexing library for multi-core architectures

  26. Summary • Brute-force parallel methods can be refined with more sophisticated parallel algorithms. • We proposed new parallel tree traversal algorithms and showed they significantly outperform the traditional recursive access to hierarchical tree structures.

  27. Q&A • Thank You

  28. MPTS improvement using Sibling Check • When the current node doesn't have any overlapping children, check its sibling nodes! • It is always better to prune out tree nodes at upper levels

  29. CUDA • GPGPU (General-Purpose computing on Graphics Processing Units) • CUDA is a set of development tools for creating applications that execute on the GPU • GPUs allow the creation of a very large number of concurrently executing threads at very low system-resource cost • CUDA also exposes fast shared memory (48 KB) that can be shared between threads • Tesla M2090: 16 × 32 = 512 cores. Image source: Wikipedia

  30. Grids and Blocks of CUDA Threads • A kernel is executed as a grid of thread blocks • All threads share the data memory space • A thread block is a batch of threads that can cooperate with each other by: • Synchronizing their execution, for hazard-free shared memory accesses • Efficiently sharing data through low-latency shared memory • Two threads from two different blocks cannot cooperate [Figure: the host launches Kernel 1 on Grid 1 (blocks (0,0)–(2,1)) and Kernel 2 on Grid 2; Block (1,1) expands into a 5×3 array of threads. Courtesy: NVIDIA]
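The flattening of block and thread indices into a global thread id — the core of the grid/block model above — can be sketched as follows (a Python stand-in for CUDA's `blockIdx.x * blockDim.x + threadIdx.x`; the helper names are hypothetical):

```python
def global_thread_id(block_idx, block_dim, thread_idx):
    # CUDA-style 1D flattening: blockIdx.x * blockDim.x + threadIdx.x
    return block_idx * block_dim + thread_idx

def launch(grid_dim, block_dim):
    """Simulate a 1D kernel launch: enumerate every thread's global id."""
    return [global_thread_id(b, block_dim, t)
            for b in range(grid_dim) for t in range(block_dim)]
```

Each thread typically uses its global id to pick the data element (or chunk) it is responsible for, which is exactly how MPES divides the dataset among threads.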
