
P-Trees: Universal Data Structure for Query Optimization to Data Mining



  1. P-Trees: Universal Data Structure for Query Optimization to Data Mining. Most people have data from which they want information, so most people need DBMSs whether they know it or not. The main component of any DBMS is the query processor, but so far QPs deal with the standard workload only. On the standard-query end, much work remains to deliver standard-workload answers with low response times (D. DeWitt, ACM SIGMOD’02). On the data mining end, we have barely scratched the surface. (But those scratches have made the difference between becoming the biggest corporation in the world and filing for bankruptcy – Walmart vs. KMart.) The workload spectrum runs from simple searching and aggregating (SELECT … FROM … WHERE), through standard querying – complex queries (nested, EXISTS, …), FUZZY queries (e.g., BLAST searches, …), OLAP (rollup, drilldown, slice/dice, …) – to machine learning / data mining: supervised (classification, regression) and unsupervised (clustering, association rule mining). These notes contain NDSU confidential & proprietary material. Patents pending on bSQ, Ptree technology.

  2. Data Mining. Querying: ask specific questions – expect specific answers. We will get back to querying later. Data Mining: “Go into the DATA MOUNTAIN. Come out with gems” (but also, very likely, some fool’s gold). Relevance and interestingness analysis assays the gems (helps pick out the valuable information). A universal model for Association Rule Mining, Classification and Clustering of a data table, R(A1..An), where the Ais are feature attributes assumed numeric (categorical attributes can be coded numeric). • First, order the rows: • Rids or RRNs provide an ordering • arrival ordinal provides an ordering • Peano order of pixels in an image provides an ordering • Raster order does also, but for images, raster order should first be converted to Peano order, since a raster line is not a geometric or a geographic object. More later. • In raster order, pixel-ids (x, y) are sorted by bit position in the order x1x2..xny1y2..yn, while Peano order is x1y1x2y2…xnyn.

  3. Peano Tree (P-tree) Data Structure for Data Mining • Given a data table, R(A1,…, An) • Order it (e.g., arrival order or RRN) • Decompose it into attribute projections (maintain the ordering on each): R → R[Ai], i=1,…,n (Band SeQuential or BSQ projections) • Decompose each attribute projection by bit position into bit-projections: R[Ai] → Rij, j=1,…,mi (bit SeQuential or bSQ projections) (e.g., if each Ai-value datatype is a byte, then mi=8 for all i) • Build a d-dimensional basic P-tree from each bit-projection: {Pij | i=1,…,n and j=1,…,mi}. So R → R[Ai] → Rij → basic P-trees, Pij. How is this last step done?
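To make the decomposition steps concrete, here is a minimal sketch (not the NDSU implementation) of R → R[Ai] → Rij on plain Python lists; the two-band, 8-bit example values are the ones used later on the Spatial Data Formats slides.

```python
# Sketch of R -> R[Ai] -> Rij: each 8-bit attribute column of a row-ordered
# table becomes 8 bit-projections (bSQ files), most-significant bit first.
def bsq_decompose(rows, num_attrs, bits_per_attr=8):
    """Return bit_proj[i][j]: the j-th bit (MSB first) of attribute i, row by row."""
    return [[[(row[i] >> (bits_per_attr - 1 - j)) & 1 for row in rows]
             for j in range(bits_per_attr)]
            for i in range(num_attrs)]

R = [(254, 37), (127, 240), (14, 200), (193, 19)]   # 2 bands, 4 pixels
P = bsq_decompose(R, num_attrs=2)
print(P[0][0])   # highest-order bit of band 1: [1, 0, 0, 1]
```

Each inner list P[i][j] is one bSQ file; the basic P-tree Pij is built from it as the following slides show.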

  4. 1-D CP-tree 1 1 1 1 1 1 0 0 1 1 1 1 1 0 0 0 1 1 1 1 1 1 0 0 1 1 1 1 1 1 1 0 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 0 1 1 1 1 1 1 1 A bit-projection (bSQ file) from a table with 64 rows Construct 1-D CP using recursive l-r-half counts (inodes have bitant counts) 55 31 24 13 11 16 15 6 7 6 5 8 7 4 2 4 4 3 2 4 1 3 4 1 2 1 2 2 0 0 1 2 0 0 1 1 0 1 0
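A minimal sketch of this recursive construction, assuming (as the slide's tree does) that pure halves become childless nodes:

```python
# Sketch: 1-D count P-tree. Each node stores the 1-bit count of its
# interval; pure (all-0 or all-1) intervals are not subdivided further.
def cp_tree(bits):
    count = sum(bits)
    if count in (0, len(bits)) or len(bits) == 1:
        return (count, [])                        # pure (or single-bit) interval
    mid = len(bits) // 2
    return (count, [cp_tree(bits[:mid]), cp_tree(bits[mid:])])

root = cp_tree([1, 1, 1, 1, 1, 1, 0, 0])          # first 8 bits of the bSQ file above
print(root)    # (6, [(4, []), (2, [(2, []), (0, [])])])
```

Run on all 64 bits above, the root count is 55, matching the slide.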

  5. 2-D CP-tree (same bSQ file) 1 1 1 1 1 1 0 0 1 1 1 1 1 0 0 0 1 1 1 1 1 1 0 0 1 1 1 1 1 1 1 0 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 0 1 1 1 1 1 1 1 Construct the 2D tree by removing every other row (inodes have quadrant counts) 55 13 11 16 15 4 2 4 4 3 2 4 1 3 4 4 4 Eliminated a pure quadrant 1 0 1 0 0 1 0 0 0 1 1 0 1 1 0 1 1 1 1 0 Eliminated pure quadrants Eliminated a pure quadrant Eliminated a pure quadrant Eliminated pure quadrants

  6. 4-D CP-tree from the same bSQ file? 1 1 1 1 1 1 0 0 1 1 1 1 1 0 0 0 1 1 1 1 1 1 0 0 1 1 1 1 1 1 1 0 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 0 1 1 1 1 1 1 1 Remove every other row (insufficient number of rows!) Can construct a 4D with 2D leaves: 55 4 2 4 4 4 4 3 4 3 4 4 4 4 2 4 1 0 1 1 1 1 0 1 0 0 1 0 0 1 1 0 1 0 1 1 0

  7. What about 3-D? (same bSQ file) 1 1 1 1 1 1 0 0 1 1 1 1 1 0 0 0 1 1 1 1 1 1 0 0 1 1 1 1 1 1 1 0 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 0 1 1 1 1 1 1 1 Construct the 3D tree from 2D by removing every other row, splitting the other rows (inodes have octant counts) 55 6 6 8 8 8 7 7 5 1 1 1 1 1 0 1 0 1 1 1 1 1 0 1 0 1 1 1 1 1 1 1 0 1 1 1 1 0 0 1 0 1 1 1 1 1 1 0 1

  8. Summary • Given a feature relation, R(A1,…, An) • 1. Order rows (RRN, Rid, ArrivalOrdinal, a Raster Spatial Order, …) • 2. Choose a dimension, d (or combination; d1, d2, ...) • 3. Choose fanout(s) (e.g., d1^n1, d2^n2, …, dr^nr) Basic P-trees can be implemented in many formats: • CountP-Tree (CP) (inode contains quadrant-1-bit-count and child-pointers) • PredicateP-trees (inodes: 1 iff predicate=true thruout quadrant, and child ptrs) • Pure1-Trees (P1), • Pure0-Trees (P0), • PureNot1-Tree (NP1), • PureNot0-Tree (NP0) • ValueP-trees (VP) • TupleP-trees (TP) • HalfPure-trees (HP) • Above are lossless, compressed, DM-Ready • Interval-Ptree • Box-Ptree • How do we datamine heterogeneous datasets? • i.e., R,S,T.. describing the same entity, different keys/attribs • Universal Relation: transform into one relation (union keys?) • Key Fusion: R(K..); S(K’..) Mine them as separate relations but map keys using a tautology. • The two methods are related. The Universal Relation approach usually includes defining a universal key to which all local keys are mapped (using a (possibly fragmented) tautological lookup table).

  9. Spatial Data Formats (e.g., images with natural 2-D structure and coordinates (x,y) – raster ordering) • BAND-1 • 254 127 • (1111 1110) (0111 1111) • 14 193 • (0000 1110) (1100 0001) • BAND-2 • 37 240 • (0010 0101) (1111 0000) • 200 19 • (1100 1000) (0001 0011) BSQ format (2 files) Band 1: 254 127 14 193 Band 2: 37 240 200 19

  10. Spatial Data Formats (Cont.) • BAND-1 • 254 127 • (1111 1110) (0111 1111) • 14 193 • (0000 1110) (1100 0001) • BAND-2 • 37 240 • (0010 0101) (1111 0000) • 200 19 • (1100 1000) (0001 0011) BSQ format (2 files) Band 1: 254 127 14 193 Band 2: 37 240 200 19 BIL format (1 file) 254 127 37 240 14 193 200 19

  11. Spatial Data Formats (Cont.) • BAND-1 • 254 127 • (1111 1110) (0111 1111) • 14 193 • (0000 1110) (1100 0001) • BAND-2 • 37 240 • (0010 0101) (1111 0000) • 200 19 • (1100 1000) (0001 0011) BSQ format (2 files) Band 1: 254 127 14 193 Band 2: 37 240 200 19 BIL format (1 file) 254 127 37 240 14 193 200 19 BIP format (1 file) 254 37 127 240 14 200 193 19

  12. Spatial Data Formats (Cont.) bSQ format (16 files) (related to bit planes in graphics) B11 B12 B13 B14 B15 B16 B17 B18 B21 B22 B23 B24 B25 B26 B27 B28 1 1 1 1 1 1 1 0 0 0 1 0 0 1 0 1 0 1 1 1 1 1 1 1 1 1 1 1 0 0 0 0 0 0 0 0 1 1 1 0 1 1 0 0 1 0 0 0 1 1 0 0 0 0 0 1 0 0 0 1 0 0 1 1 • BAND-1 • 254 127 • (1111 1110) (0111 1111) • 14 193 • (0000 1110) (1100 0001) • BAND-2 • 37 240 • (0010 0101) (1111 0000) • 200 19 • (1100 1000) (0001 0011) BSQ format (2 files) Band 1: 254 127 14 193 Band 2: 37 240 200 19 BIL format (1 file) 254 127 37 240 14 193 200 19 BIP format (1 file) 254 37 127 240 14 200 193 19

  13. Suppose we start with a bit-projection of a raster-ordered spatial file (image)? First, re-order into Peano order. 1 1 1 1 1 1 0 0 1 1 1 1 1 0 0 0 1 1 1 1 1 1 0 0 1 1 1 1 1 1 1 0 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 0 1 1 1 1 1 1 1 Raster-ordered bSQ file; the spatial arrangement shows Peano order. Resulting CP-tree: root count 55; quadrant counts 16 8 15 16; sub-quadrant counts 3 0 4 1 and 4 4 3 4; mixed leaves 1110, 0010, 1101. • Peano or Z-ordering • Pure (Pure-1/Pure-0) quadrant • Root Count • Level • Fan-out • QID (Quadrant ID)
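A sketch of the raster-to-Peano re-ordering, assuming an 8×8 image and the bit interleaving x1y1x2y2…xnyn defined on slide 2:

```python
# Sketch: raster (row-major) -> Peano/Z order via bit interleaving.
def peano_key(x, y, n):
    """Interleave the n bits of x and y (x bit first, MSB first)."""
    key = 0
    for j in range(n - 1, -1, -1):
        key = (key << 2) | (((x >> j) & 1) << 1) | ((y >> j) & 1)
    return key

def raster_to_peano(bits, n=3):                   # 2^3 x 2^3 = 8x8 image
    side = 1 << n
    out = [0] * (side * side)
    for y in range(side):
        for x in range(side):
            out[peano_key(x, y, n)] = bits[y * side + x]
    return out

print(peano_key(7, 1, 3))   # 43 = base-4 digits 2.2.3, the qid on the next slide
```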

  14. Same example. 1 1 1 1 1 1 0 0 1 1 1 1 1 0 0 0 1 1 1 1 1 1 0 0 1 1 1 1 1 1 1 0 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 0 1 1 1 1 1 1 1 Root count 55 (Level-3); children numbered 0 1 2 3 with counts 16 8 15 16 (Level-2); below them counts 3 0 4 1 and 4 4 3 4 (Level-1); mixed leaves at Level-0. QID example: quadrant 2.2.3 is pixel (7, 1) – the qid digits 10.10.11 de-interleave to (111, 001). • Peano or Z-ordering • Pure (Pure-1/Pure-0) quadrant • Root Count • Level • Fan-out • QID (Quadrant ID)

  15. Some Common types of Ptrees, given R(A1..An) Review: given a feature relation, R(A1,…, An) • 1. Order rows (RRN, Rid, ArrivalOrdinal, a Raster Spatial Order, …) • 2. Choose a dimension, d (or combination; d1, d2, ...) and a fanout(s) (e.g., d1^n1, d2^n2, …, dr^nr) Each bSQ file, Rij, generates a BasicPtree, Pij. Each value v ∈ Ai generates a ValuePtree, VPi(v) (1 iff purely v thruout quadrant). Each tuple (v1..vn) in R gens a TuplePtree, TP(v1..vn) (1 iff purely (v1..vn) thruout quadrant). Any row-predicate on R gens a PredicatePtree, <pred>P (1 iff <pred> true thruout quadrant). Any interval, [l,u], in Ai gens an IntervalPtree, [l,u]Pi (1 iff v ∈ [l,u] thruout quadrant). Any box, ⨯[li,ui], in R gens a RectanglePtree, ⨯[li,ui]P (1 iff (v1..vn) ∈ ⨯[li,ui] thruout quadrant). (Each Ptree can be expressed as a CountTree with inode-value=count or a BooleanTree with inode-value=bit.) Basic Ptrees P111, …, P118, P121, …, P128, … P171, …, P178 attribute Value Ptree: P1(001)= P1’11 ^ P1’12 ^ P113 Value in 3-bit prec Tuple Ptree (1 if quad contains only that tuple, else 0) P(001, 010, 111)= P1(001) ^ P2(010) ^ P3(111) = P1’11^ P1’12^P113 ^P1’21 ^P122^ P1’23^ P131^P132^P133 tuple (1,2,7), in 3-bit precision
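A sketch of the value-P-tree identity P1(001) = P1’11 ∧ P1’12 ∧ P113, computed on flat bit vectors (a basic P-tree is just the compressed form of such a column; the sample bits are made up):

```python
# Sketch: value mask for v in nbits precision, as the AND of each bit
# column (where v's bit is 1) or its complement (where v's bit is 0).
def value_mask(bit_cols, v, nbits=3):
    mask = [1] * len(bit_cols[0])
    for j in range(nbits):
        want = (v >> (nbits - 1 - j)) & 1
        mask = [m & (b if want else 1 - b) for m, b in zip(mask, bit_cols[j])]
    return mask

cols = [[0, 1, 0], [0, 1, 1], [1, 1, 0]]          # bits of A1 for 3 rows: values 1, 7, 2
print(value_mask(cols, 0b001))                    # [1, 0, 0]: only row 0 equals 001
```

A tuple mask is then just the AND of the value masks of its component attributes, exactly as in the P(001, 010, 111) expansion above.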

  16. Predicate Ptrees (inode: 1 iff condition=true thruout quadrant) 1 1 1 1 1 1 0 0 1 1 1 1 1 0 0 0 1 1 1 1 1 1 0 0 1 1 1 1 1 1 1 0 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 0 1 1 1 1 1 1 1 CP: .--- 55 ---. / / \ \ 16 8 15 16 // \ \ // \ \ 3 0 4 1 443 4 //|\ //|\ //|\ 1110 0010 1101 P1 (Pure1) .---- 0 ----. / / \ \ 1 0 0 1 // \ \ // \ \ 0 0 1 0 11 0 1 //|\ //|\ //|\ 1110 0010 1101 P0 (Pure0) .---- 0 ----. / / \ \ 0 0 0 0 // \ \ // \ \ 0 1 0 0 00 0 0 //|\ //|\ //|\ 0001 1101 0010 NP0 (NotPure0) .---- 1 ----. / / \ \ 1 1 1 1 // \ \ // \ \ 1 0 1 1 11 1 1 //|\ //|\ //|\ 1110 0010 1101 NP1 (NotPure1) .---- 1 ----. / / \ \ 0 1 1 0 // \ \ // \\ 1 1 0 1 00 10 //|\ //|\ //|\ 0001 1101 0010 MP (Mixed) .---- 1 ----. / / \ \ 0 1 1 0 // \ \ // \ \ 1 0 0 1 00 1 0 Leaves are always 0000, so omitted. Predicate Ptrees can be stored in QidVector (QV) format. mixed quadrant: (qid, ChildTruthVector) NP0-QV QidCV [] 1111 [1] 1011 [1.0] 1110 [1.3] 0010 [2] 1111 [2.2] 1101 NP1-QV QidCV [] 0110 [1] 1101 [1.0] 0001 [1.3] 1101 [2] 0010 [2.2] 0010 MP-QV QidCV [] 0110 [1] 1001 [2] 0010 P1-QV QidCV [] 1001 [1] 0010 [1.0] 1110 [1.3] 0010 [2] 1101 [2.2] 1101 P0-QV QidCV [] 0000 [1] 0100 [1.0] 0001 [1.3] 1101 [2] 0000 [2.2] 0010 HP (HalfPure) .---- 1 ----. / / \ \ 1 1 1 1 // \ \ // \ \ 1 0 1 0 11 1 1 //|\ //|\ //|\ 1110 0010 1101 HPtrees result from the HalfPure1 predicate: 1BitCnt ≥ Pure1Cnt/2. Lossless. 1 means pure1 iff no child ptrs. 0 means pure0 iff no child ptrs. Delete any number of bottom levels = HPtree of coarser granularity. ANDing HPtrees: if any operand is 0, result = 0; if all operands are 1 and none has children, result = 1; else result could be 0 or 1 (depends upon children, but likely = 0). The HPtree of the complement of a bSQ file is the flip of the HPtree. HPtrees have the same leaves as Pure1Trees. HPtree is the “high-order bit” tree of the CPtree. HP-QV QidCV [] 1111 [1] 1010 [1.0] 1110 [1.3] 0010 [2] 1111 [2.2] 1101

  17. The P-tree Algebra (Complement, AND, OR, …) • Complementing a Ptree (Ptree for the flip of the bSQ file) (we use the “prime” notation) • Count-Ptree: formed by purity-complementing each count. • Purity-Ptree (P1, P0, NP0, NP1): formed by bit-flipping leaves only. • HPtree: formed by bit-flipping all (comp = flip) (we use “underscore” for the flip of a tree) P1 = P0’ .---- 0 ---. / / \ \ 1 0 0 1 // \ \ // \ \ 0 0 1 0 11 0 1 //|\ //|\ //|\ 1110 0010 1101 P0 = P1’ .---- 0 ----. / / \ \ 0 0 0 0 // \ \ // \ \ 0 1 0 0 00 0 0 //|\ //|\ //|\ 0001 1101 0010 NP0 = NP1’ .---- 1 ----. / / \ \ 1 1 1 1 // \ \ // \ \ 1 0 1 1 11 1 1 //|\ //|\ //|\ 1110 0010 1101 NP1= NP0’= P1 .---- 1 ----. / / \ \ 0 1 1 0 // \ \ // \\ 1 1 0 1 00 10 //|\ //|\ //|\ 0001 1101 0010 P1V QidPgVc [] 1001 [1] 0010 [1.0] 1110 [1.3] 0010 [2] 1101 [2.2] 1101 P0V QidPgVc [] 0000 [1] 0100 [1.0] 0001 [1.3] 1101 [2] 0000 [2.2] 0010 NP0V QidPgVc [] 1111 [1] 1011 [1.0] 1110 [1.3] 0010 [2] 1111 [2.2] 1101 NP1V QidPgVc [] 0110 [1] 1101 [1.0] 0001 [1.3] 1101 [2] 0010 [2.2] 0010 P1 .---- 1 ---. / / \ \ 0 1 1 0 // \ \ // \ \ 1 1 0 1 00 1 0 //|\ //|\ //|\ 0001 1101 0010 P0 .---- 1 ----. / / \ \ 1 1 1 1 // \ \ // \ \ 1 0 1 1 11 1 1 //|\ //|\ //|\ 1110 0010 1101 NP0=P0 .---- 0 ----. / / \ \ 0 0 0 0 // \ \ // \ \ 0 1 0 0 00 0 0 //|\ //|\ //|\ 0001 1101 0010 NP1=P1 .---- 0 ----. / / \ \ 1 0 0 1 // \ \ // \\ 0 0 1 0 11 01 //|\ //|\ //|\ 1110 0010 1101 P1V QidPgVc [] 0110 [1] 1101 [1.0] 0001 [1.3] 1101 [2] 0010 [2.2] 0010 P0V QidPgVc [] 1111 [1] 1011 [1.0] 1110 [1.3] 1101 [2] 1111 [2.2] 1101 NP0V QidPgVc [] 0000 [1] 0100 [1.0] 0001 [1.3] 1101 [2] 0000 [2.2] 1101 NP1V QidPgVc [] 1001 [1] 0010 [1.0] 0001 [1.3] 0010 [2] 1101 [2.2] 1101

  18. ANDing (for all Truth-trees, just AND bit-wise) 1 1 1 1 1 1 0 0 1 1 1 1 1 0 0 0 1 1 1 1 1 1 0 0 1 1 1 1 1 1 1 0 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 0 1 1 1 1 1 1 1 1 1 1 1 0 0 0 0 1 1 1 1 0 0 0 0 1 1 1 1 0 0 0 0 1 1 1 1 0 0 0 0 1 1 1 1 0 0 0 0 1 1 1 1 0 0 0 0 1 1 0 1 0 0 0 0 0 1 0 0 0 0 0 0 1 1 1 1 0 0 0 0 1 1 1 1 0 0 0 0 1 1 1 1 0 0 0 0 1 1 1 1 0 0 0 0 1 1 1 1 0 0 0 0 1 1 1 1 0 0 0 0 1 1 0 1 0 0 0 0 1 1 0 0 0 0 0 0 Pure1-quad-list method: For each operand, list the qids of the pure1 quad’s in depth-first order. Do one multi-cursor scan across the operand lists , for every pure1 quad common to all operands, install it in the result. 0 100 101 102 12 132 2021220 221 22323 3 AND 020 2122231020 21220 221 223231 P1operand1 0 1 0 0 1 // \ \ // \\ 0 0 1 0 1 1 01 //|\ //|\ //|\ 1110 0010 1101 P0operand1 0 0 0 0 0 // \ \ // \ \ 0 1 0 0 0 0 00 //|\ //|\ //|\ 0001 1101 0010 NP0operand1 1 1 1 1 1 // \ \ // \\ 1 0 1 1 1 1 11 //|\ //|\ //|\ 1110 0010 1101 NP1operand1NP0’ 1 0 1 1 0 // \ \ // \\ 1 1 0 1 0 0 10 //|\ //|\ //|\ 0001 1101 0010 bitwise Depth first traversal using 1^1=1, 1^0=0, 0^0=0. AND P1operand2 0 1 0 0 0 / / \ \ 1 1 1 0 //|\ 0100 P0op2 = P1’op2 0 0 1 0 1 / / \ \ 0 0 0 0 //|\ 1011 NP0operand2 1 1 0 1 0 / / \ \ 1 11 1 //|\ 0100 NP1operand2NP0’ 1 0 1 1 1 / / \ \ 0 0 0 1 //|\ 1011 = P1op1^P1op2 0 1 0 0 0 // | \ 11 0 0 //|\ //|\ 1101 0100 P1op1^P0op2 = P1op1^P1’op20 0 0 0 1 // \ \ //\ \ 0 0 1 0 000 0 //|\ //|\ //|\ 1110 0010 1011 NP0op1^NP0op2 1 1 0 1 0 // | \ 11 1 1 //|\ //|\ 1101 0100 NP0op1^NP0’op2 1 0 1 1 1 // \ \ /// \ 1 0 1 1 000 1 //|\ //|\ //|\ 1110 0010 1011
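A sketch of the Pure1-quad-list AND logic above; the slide does one multi-cursor scan over the sorted lists, while this naive pairwise version only shows the containment rule (a quad is pure-1 in the result iff a pure-1 qid of each operand is a prefix of it):

```python
# Sketch: AND two P-trees given as depth-first lists of pure-1 qids.
# A qid is a tuple of quadrant digits, e.g. (2, 0) for quadrant 2.0.
def and_pure1_lists(l1, l2):
    out = []
    for a in l1:
        for b in l2:
            if a == b[:len(a)]:
                out.append(b)          # a's quad contains b's: b survives
            elif b == a[:len(b)]:
                out.append(a)          # b's quad contains a's: a survives
    return sorted(set(out))

print(and_pure1_lists([(0,), (1, 2)], [(0, 3), (1,)]))   # [(0, 3), (1, 2)]
```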

  19. Association Rule Mining (ARM) • Association Rule Mining on R is a matter of finding all (qualifying) rules of the form A ⇒ C, where A is a subset of tuples called the antecedent and is disjoint from the subset, C, called the consequent.

  20. Precision Ag ARM example • Identifying high / low crop yields (usually a classification problem) • E.g., R(x, y, R, G, B, Y), where R/G/B are red/green/blue reflectance from the pixel or grid cell at (x,y) • Y is the yield at (x,y). • Assume all are 8-bit values. • High Support and Confidence rules are searched for in which the consequent is entirely in the Yield attribute, such as: • [192,255]G ∧ [0,63]R ⇒ [128,255]Y • How to apply rules? • Obtain rules from previous year’s data, then apply rules in the current year after each aerial photo is taken at different stages of plant growth. • By irrigating/adding Nitrate where lower Yield is indicated, overall Yield may be increased. • We note that this problem is more of a classification problem (classify Yield levels) – that is, the rules are classification rules, not association rules.

  21. Image Data Terminology TIFF image Yield Map • Pixel – a point in a space • Band – feature attribute of the pixels • Value – usually one byte (0~255) • Images have different numbers of bands • TM4/5: 7 bands (B, G, R, NIR, MIR, TIR, MIR2) • TM7: 8 bands (B, G, R, NIR, MIR, TIR, MIR2, PC) • TIFF: 3 bands (B, G, R) • Ground data: individual bands (Yield, Moisture, Nitrate, Temp, elevation…) RSI data can be viewed as a collection of pixels. Each has a value for each feature attrib. E.g., the RSI dataset above has 320 rows and 320 cols of pixels (102,400 pixels) and 4 feature attributes (B,G,R,Y). The (B,G,R) feature bands are in the TIFF image and the Y feature is color coded in the Yield Map. • Existing formats • BSQ (Band Sequential) • BIL (Band Interleaved by Line) • BIP (Band Interleaved by Pixel) • New format • bSQ (bit Sequential)

  22. Data Mining in Genomics • There is (will be?) an explosion of gene expression data. • Current emphasis is on extracting meaningful information from huge raw data sets. • A consistent data store and the use of P-trees to facilitate Association Rule Mining as well as Clustering / Classification will facilitate the extraction of information and answers from raw data on demand. • Microarray data is most often represented as a Gene Table, G(Gid, E1, E2, …, En) • where Gid is the gene identifier; E1…En are the various treatments (or conditions or experiments) and the data values are gene expression levels (Excel spreadsheet). • A gene regulatory pathway component can be represented as an association rule, {G1..Gn} ⇒ Gm, where {G1…Gn} is the antecedent & Gm is the consequent. • Currently, data-mining techniques concentrate on the Gene table – specifically, on finding clusters of genes that exhibit similar expression patterns under selected treatments • clustering the gene table

  23. ARM for Microarray Data (Contd.) • An alternate data format exists (called the “Experiment Table”): T(Eid, G1, G2, …, Gn), where Eid is the Experiment (or Treatment or Condition) identifier and G1…Gn are the gene identifiers. • Experiment tables are a convenient form for ARM of gene expression levels. • Goal is to mine for rules among genes by associating treatment table columns. GeneID ExpmtID . G1 G2 G3 G4 E1 …. …. …. …. Gene expression values E2 …. …. …. …. E3 …. …. …. …. E4 …. …. …. …. The form of the Experiment Table with binary values (coding only whether an expression level exceeds or does not exceed a threshold) is identical to Market Basket Data, for which a wealth of Rule Mining techniques have been developed in the last 8-10 years.
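A sketch of the thresholding that turns an Experiment Table of expression levels into Market Basket (Boolean) form; the table values and threshold below are made up:

```python
# Sketch: binarize an Experiment Table. A gene "is in the basket" of an
# experiment iff its expression level exceeds the threshold.
def binarize(expr_table, threshold):
    return {eid: [1 if v > threshold else 0 for v in vals]
            for eid, vals in expr_table.items()}

T = {"E1": [0.2, 3.1, 2.7, 0.0], "E2": [1.9, 0.1, 2.2, 4.5]}   # 4 genes
print(binarize(T, threshold=1.0))
# {'E1': [0, 1, 1, 0], 'E2': [1, 0, 1, 1]}
```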

  24. Gene Table vs. Experiment Table. [Two schematic tables: the Gene Table with rows G1..G4 and columns E1..E4, and its rotation, the Experiment Table, with rows E1..E4 and columns G1..G4.] The Gene Table is usually given as a standard (MS Excel) spreadsheet of gene expression levels coming from microarray experiments. It is a 2-D data cube which can be rotated (to the Experiment Table), rolled up, sliced, diced, drilled down, association rule mined, etc.

  25. A Universal Format? • E.g., one large universal table with 5 dimensions based on the MIAME standard? • E = Experimental design – Hybridisation Procedures • A = Array design • S = Samples • M = Measurements • N = Normalization. A control for data mining across all experiments and genes?

  26. “MIAME HYPERCUBE” (5-D Universal Gene Expression Cube) – gene expression values. Cardinality is high, but compression will be substantial (next slide).

  27. MIAME HYPERCUBE rolled up onto (E,S) Gene E(Experiment) G1 G2 . . . Gn E1A1S1M1N1 . . E1A1S1M1Nn . . . EnAnSnMnNn 1 5 2 0… zeros 1 7 0... The non-zero blocks may occur off the diagonal. The point: a massive but very sparse dataset! 90. 0 8 1 7 6 5... zeros 70. The AD (All Digital) implementation format for distributed P-tree processing is one in which the bit filter (mask) approach is universally adopted. If the dimension is 5 (as in the MIAME HYPERCUBE), the only operation is a 32-bit AND operation – which fits current commodity processors perfectly (32-bit registers!). A hardware drop-in 5-D P-tree AND card for standard PCs, please!!! “Anyone? Anyone?...”

  28. Market Basket ARM example • Identifying purchasing patterns • If a customer buys beer, s/he will buy chips (so shelve the chips near the beer?) • E.g., Boolean relation, R(Tid, Aspirin, Beer, Chips, Dates,..,Zippo) • Tid=transaction id (for a customer going thru checkout). In any field of a tuple there is a 1 if the customer has that product in his/her basket, else 0 (existence, not count). • Support and Confidence: Given itemsets, A and C, • Supp(A) = ratio of the number of transactions supporting A over the total number of transactions. • Supp(A∪C) = ratio of the number of customers buying A∪C over the total customers. • Conf(A⇒C) = ratio of # of customers buying A∪C over # of customers buying A = Supp(A∪C) / Supp(A). Thresholds: • Frequent Itemsets = Support exceeds a min support threshold (minsupp). • Lk denotes the set of frequent k-itemsets (sets with k items in them). • High Confidence Rules = Confidence exceeds a min threshold (minconf).
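The support and confidence definitions in a minimal sketch over the Boolean relation BT (the three-item baskets below are hypothetical):

```python
# Sketch: support and confidence over bit-vector transactions.
def supp(rows, itemset):
    """Fraction of transactions containing every item in itemset."""
    return sum(1 for r in rows if all(r[i] for i in itemset)) / len(rows)

def conf(rows, A, C):
    """Conf(A => C) = Supp(A u C) / Supp(A)."""
    return supp(rows, A | C) / supp(rows, A)

rows = [[1, 1, 0], [1, 1, 1], [0, 1, 1], [1, 0, 1]]
print(supp(rows, {0, 1}))      # 0.5
print(conf(rows, {0}, {1}))    # 0.666...: 2 of the 3 baskets with item 0 have item 1
```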

  29. Lists versus Vectors in MBR • In most MBR, the table (relation) is T(Tid, Itemset) • where Tid is the customer transaction id (during checkout) • Itemset is the set of items the customer purchases (in the Market Basket). • Itemset is a set, and therefore T is non-First_Normal_Form. • Therefore the bit-vector approach is usually taken in MBR: • BT(Tid, item1 item2 … itemn) • Itemset is expressed as a bit-vector, [0100101…1000] • where each item is assigned to a bit position and that bit is 1 if the transaction’s itemset contains that item and 0 otherwise. • The Vector version corresponds to the table model we have been using, with R(A1,…,An); ordering is by Tid and the Ai‘s are the items in an assigned order (the datatype of each is Boolean).

  30. Many-to-Many Relationships (M-M) (List vs. Vector model?) An M-M relationship between entities, E1 and E2, is modeled as a table, R(E1, E2List), where E2List is the list of all E2-occurrences related to the corresponding E1 occurrence. Or it is modeled as the “rotation” R’(E2, E1List). Note that both tables are non-1NF! Non-1NF tables are difficult, so the List model is typically transformed to the Vector model: R(E1, E2,1, E2,2, … , E2,n), where each E2,j value is Boolean (1 iff that E2-occurrence is related to the E1 occurrence). This transformation and the APRIORI work done from 1992 to the present have made MBR a sea-change event. Walmart adopted it early to analyze and manage supply and Kmart did not. This year Walmart became the world's largest company and Kmart filed for bankruptcy protection. Is it effective technology? Gene-to-Experiment and CustomerTrans-to-Item are M-M relationships – quite similar!
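A sketch of the List-to-Vector transformation described above (entity and item names are made up):

```python
# Sketch: R(E1, E2List), a non-1NF table, becomes the Boolean table
# R(E1, E2_1 ... E2_n) with one bit column per E2 occurrence.
def lists_to_vectors(rel, all_e2):
    items = sorted(all_e2)
    return {e1: [1 if e2 in related else 0 for e2 in items]
            for e1, related in rel.items()}

R = {"t1": {"beer", "chips"}, "t2": {"beer"}}
print(lists_to_vectors(R, {"aspirin", "beer", "chips"}))
# {'t1': [0, 1, 1], 't2': [0, 1, 0]}
```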

  31. Association Rule Example Each trans is a list (or bit vector) of items purchased by a customer in a visit. Support Count: 3 2 2 1 1 1; minsupp=50%, minconf=50%. Find the frequent itemsets: the sets of items that have minsupp. A subset of a frequent itemset must also be a frequent itemset: if {A, B} is a frequent itemset, {A} and {B} must be frequent. APRIORI: Iteratively find frequent itemsets with size from 1 to k. Use the frequent itemsets to generate association rules. Ck will denote the candidate frequent k-itemsets; Lk will denote the frequent k-itemsets. Suppose the items in Lk-1 are listed in an order. Step 1: self-joining Lk-1: insert into Ck select p.item1, p.item2, …, p.itemk-1, q.itemk-1 from Lk-1 p, Lk-1 q where p.item1=q.item1, .., p.itemk-2=q.itemk-2, p.itemk-1 < q.itemk-1 Step 2: pruning: • forall itemsets c in Ck do • forall (k-1)-subsets s of c do • if (s is not in Lk-1) delete c from Ck (both steps are sketched in code below).
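Both APRIORI steps in a minimal sketch (itemsets as frozensets of item numbers); with the L2 of the next slide it reproduces L3 = {235}:

```python
# Sketch: apriori-gen = self-join of L(k-1) with itself on the first k-2
# items, then pruning candidates having an infrequent (k-1)-subset.
from itertools import combinations

def apriori_gen(Lk_1):
    prev = {tuple(sorted(s)) for s in Lk_1}
    k = len(next(iter(prev))) + 1
    cands = {frozenset(p) | frozenset(q) for p in prev for q in prev
             if p[:-1] == q[:-1] and p[-1] < q[-1]}          # Step 1: self-join
    return {c for c in cands                                 # Step 2: prune
            if all(frozenset(s) in Lk_1 for s in combinations(c, k - 1))}

L2 = {frozenset(s) for s in [{1, 3}, {2, 3}, {2, 5}, {3, 5}]}
print(apriori_gen(L2))    # {frozenset({2, 3, 5})}
```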

  32. L3 C1 L1 C2 L2 C3 Database D C2 P1 2 //\\ 1010 P2 3 //\\ 0111 P3 3 //\\ 1110 P4 1 //\\ 1000 P5 3 //\\ 0111 Scan D Scan D Scan D Minsup=2 {123} pruned because {12} not frequent. {135} pruned because {15} not frequent. • The P-ARM algorithm assumes a fixed value precision in all bands. • The p-gen function for numeric spatial data differs from apriori-gen by using additional pruning. • The AND_rootcount function is used to calculate itemset counts directly by ANDing the appropriate basic Ptrees instead of scanning the transaction databases. P1^P2^P3 1 //\\ 0010 Build Ptrees: Scan D P1^P2 1 //\\ 0010 P1^P3 ^P5 1 //\\ 0010 P1^P3 2 //\\ 1010 P2^P3 ^P5 2 //\\ 0110 P1^P5 1 //\\ 0010 L2={13,23,25,35} L1={1,2,3,5} L3={235} P2^P3 2 //\\ 0110 P2^P5 3 //\\ 0111 P3^P5 2 //\\ 0110
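A sketch of the AND_rootcount idea on uncompressed bit vectors, using the P1, P2, P3, P5 leaf bits shown on this slide (the real algorithm ANDs the compressed P-trees instead of flat vectors):

```python
# Sketch: support count of an itemset = root count of the AND of its
# item P-trees, here computed on flat bit vectors over 4 transactions.
def and_rootcount(ptrees, itemset):
    acc = [1] * len(next(iter(ptrees.values())))
    for item in itemset:
        acc = [a & b for a, b in zip(acc, ptrees[item])]
    return sum(acc)

P = {1: [1, 0, 1, 0], 2: [0, 1, 1, 1], 3: [1, 1, 1, 0], 5: [0, 1, 1, 1]}
print(and_rootcount(P, {1, 3}))   # 2, matching the slide's P1^P3 count
```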

  33. P-ARM versus Apriori • Compare with Apriori (classical method) and FP-growth (recently proposed). • Find all frequent itemsets, not just those containing Yield, for fairness. • The images are actual aerial TIFF images with synchronized yield maps. Scalability with number of transactions Scalability with support threshold • Identical results • P-ARM is more scalable for lower support thresholds. • The P-ARM algorithm is more scalable to large spatial datasets. • 1320 × 1320 pixel TIFF–Yield dataset (total number of transactions is ~1,700,000). • 2-bit precision • Equi-length partition

  34. P-ARM versus FP-growth 17,424,000 pixels (transactions) Scalability with support threshold Scalability with number of trans • FP-growth = efficient, tree-based frequent pattern mining method (details later) • Identical results. • For a dataset of 100K bytes, FP-growth runs very fast. But for images of large size, P-ARM achieves better performance. • P-ARM achieves better performance in the case of low support threshold.

  35. P-cube and P-table of R(A1,…,An). Given R(A1, A2, A3), form P-trees for R. Form the data cube, P-cube, of all TupleP-trees, PcubeR. Applying Peano ordering to the P-cube cells defines the P-table, PtableR([],[0],[0.0]…[1],[1.0]...). Quadrants are the feature attribute (column) names, listed in depth-first or pre-order. Can form P-trees on PtableR – What are they? – What is the relationship to the Haar wavelet low-pass tree? [Figure: the A1 × A2 × A3 cube of tuple P-tree root counts, rc P(x,y,z), one per cell.]

  36. High Confidence Rules • Application areas on spatial data • Forest fires • Big ticket item buyer identification. • Gene function determination • Identification of agricultural pest infestations • Traditional algorithms are not suitable • Too many frequent itemsets in the case of low support threshold • P-tree → P-cube • Establish a very low minsupp, though • To eliminate rules that result from noise and outliers • Eliminate redundant rules

  37. Confident Rule Mining Algorithm • Build the set of confident rules, C (initially empty), as follows: • Start with 1-bit values, 2 bands; • then 1-bit values and 3 bands; … • then 2-bit values and 2 bands; • then 2-bit values and 3 bands; … • . . . • At each stage defined above, do the following: • Find all confident rules by rolling up the T-cube along each potential consequent set using summation. • Compare these sums with the support threshold to isolate rule support sets with the minimum support. • Compare the normalized T-cube values (divide by the rolled-up sum) with the minimum confidence level to isolate the confident rules. • Place any new confident rule in C, but only if non-redundant.

  38. Example • Assume minimum confidence threshold 80% and minimum support threshold 10%. • Start with 1-bit values and 2 bands, B1 and B2. [Figure: 2×2 T-cube with cell counts 25, 15, 5, 19 for bit-value pairs (1,0), (1,1), (2,0), (2,1); rolled-up sums 40, 24, 30, 34 with 80%-confidence cutoffs 32, 19.2, 24, 27.2.] Confident rule found, C: B1={0} => B2={0}, c = 83.3%.

  39. Methods to Improve Apriori’s Efficiency • Hash-based itemset counting: A k-itemset whose corresponding hashing bucket count is below the threshold cannot be frequent • Transaction reduction: A transaction that does not contain any frequent k-itemset is useless in subsequent scans • Partitioning: Any itemset that is potentially frequent in DB must be frequent in at least one of the partitions of DB • Sampling: mining on a subset of given data, lower support threshold + a method to determine the completeness • Dynamic itemset counting: add new candidate itemsets only when all of their subsets are estimated to be frequent • The core of the Apriori algorithm: • Use frequent (k – 1)-itemsets to generate candidate frequent k-itemsets • Use database scan and pattern matching to collect counts for the candidate itemsets • The bottleneck of Apriori: candidate generation 1. Huge candidate sets: • 10^4 frequent 1-itemsets will generate 10^7 candidate 2-itemsets • To discover a frequent pattern of size 100, e.g., {a1…a100}, we need to generate 2^100 ≈ 10^30 candidates. • 2. Multiple scans of database: (needs (n+1) scans, where n = length of the longest pattern)

  40. SMILEY Distributed P-Tree Architecture Synchronized dataset to be data mined (at some URL) (RSI, Genomic dataset, MBR dataset, …) R(A1,…,An) USE DATA MINING RESULTS DADI (Drag And Drop Interface) DVI (Data Visualization Interface) SMILEY, BisonBlast, BisonArray (ARM, Classification, Clustering Algorithm implementations) DCI (Data Capture Interface) DMI (Data Mining Interface) Basic P-Trees striped (distributed) across Beowulf cluster (MidAS) (Parameters: Qid striping level(s), fanout, implementation model…)

  41. BSM — A Bit Level Decomposition Storage Model – a model of query optimization of all types • Vertical partitioning has been studied within the context of both centralized database systems as well as distributed ones. It is a good strategy when small numbers of columns are retrieved by most queries. The decomposition of a relation also permits a number of transactions to execute concurrently. Copeland et al. presented an attribute-level decomposition storage model (DSM) [CK85], storing each column of a relational table in a separate binary table. The DSM showed comparable performance. • Beyond attribute-level decomposition, Wong et al. further took advantage of encoding attribute values using a small number of bits to reduce the storage space [WLO+85]. In this paper, we will decompose attributes of relational tables to the bit-position level, utilize SPJ query optimization strategies on them, store the query results in one relational table, and finally data mine using our very good P-tree methods. • Our method offers these advantages: • (1) By vertical partitioning, we read only what we need. This method makes hardware caching work really well and greatly increases the effectiveness of the I/O device. • (2) We encode attribute values into bit vector format, which makes compression easy to do. • (3) SPJ queries can be formulated as Boolean expressions, which facilitates fast implementation on hardware. • (4) Our model is fit not only for query processing but for data mining as well. • [CK85] G. Copeland, S. Khoshafian. A Decomposition Storage Model. Proc. ACM Int. Conf. on Management of Data (SIGMOD’85), pp.268-279, Austin, TX, May 1985. • [WLO+85] H. K. T. Wong, H.-F. Liu, F. Olken, D. Rotem, and L. Wong. Bit Transposed Files. Proc. Int. Conf. on Very Large Data Bases (VLDB’85), pp.448-457, Stockholm, Sweden, 1985.

  42. SPJ Query Optimization Strategies - One-table Selections • There are two categories of queries in one-table selections: Equality Queries and Range Queries. Most techniques [WLO+85, OQ97, CI98] used to optimize them employ encoding schemes – equality encoding and range encoding. Chan and Ioannidis [CI99] defined a more general query format called the interval query. An interval query on attribute A is a query of the form “x≤A≤y” or “NOT (x≤A≤y)”. It can be an equality query or a range query when x or y satisfies different kinds of conditions. • We defined interval P-trees in previous work [DKR+02], which are equivalent to the bit vectors of the corresponding intervals. So for each restriction in the form above, we have one corresponding interval P-tree. The ANDing result of all the corresponding interval P-trees represents all the rows that satisfy the conjunction of all the restrictions in the WHERE clause. • [CI98] C.Y. Chan and Y. Ioannidis. Bitmap Index Design and Evaluation. Proc. ACM Intl. Conf. on Management of Data (SIGMOD’98), pp.355-366, Seattle, WA, June 1998. • [CI99] C.Y. Chan and Y.E. Ioannidis. An Efficient Bitmap Encoding Scheme for Selection Queries. Proc. ACM Intl. Conf. on Management of Data (SIGMOD’99), pp.216-226, Philadelphia, PA, 1999. • [DKR+02] Q. Ding, M. Khan, A. Roy, and W. Perrizo. The P-tree algebra. Proc. ACM Symposium Applied Computing (SAC 2002), pp.426-431, Madrid, Spain, 2002. • [OQ97] P. O’Neill and D. Quass. Improved Query Performance with Variant Indexes. Proc. ACM Int. Conf. on Management of Data (SIGMOD’97), pp.38-49, Tucson, AZ, May 1997.
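A sketch of the interval predicate x ≤ A ≤ y as a Boolean column computed from the attribute's bSQ bit columns; the interval P-tree of [DKR+02] stores this same column in compressed tree form (the sample bits are made up):

```python
# Sketch: Boolean mask for an interval query lo <= A <= hi, from the
# attribute's bit columns (MSB first).
def interval_mask(bit_cols, lo, hi):
    nbits, n = len(bit_cols), len(bit_cols[0])
    vals = [sum(bit_cols[j][r] << (nbits - 1 - j) for j in range(nbits))
            for r in range(n)]
    return [1 if lo <= v <= hi else 0 for v in vals]

cols = [[1, 0, 0, 1], [1, 1, 0, 1]]        # 2-bit attribute: row values 3, 1, 0, 3
print(interval_mask(cols, 1, 2))           # [0, 1, 0, 0]
```

ANDing several such interval masks (one per restriction) gives the rows satisfying the conjunction in the WHERE clause, as described above.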

  43. Select-Project-StarJoin (SPSJ) Queries A Select-Project-StarJoin query is an SPJ query in which there is one multiway join along with selections and projections; typically there is a central fact relation to which several dimension relations are joined. The dimension relations can be viewed as points on a star centered on the fact relation. For example, given the Student (S), Course (C), and Enrollment (E) database shown below (note a bit encoding is shown in reduced font italics for certain attributes), take the SPSJ query, SELECT S.s,S.name,C.name FROM S,C,E WHERE S.s=E.s AND C.c=E.c AND S.gen=M AND E.grade=A AND C.term=S S|s____|name_|gen| C|c____|name|st|term| E|s____|c____|grade | |0 000|CLAY |M 0| |0 000|BI |ND|F 0| |0 000|1 001|B 10| |1 001|THAIS|M 0| |1 001|DB |ND|S 1| |0 000|0 000|A 11| |2 010|GOOD |F 1| |2 010|DM |NJ|S 1| |3 011|1 001|A 11| |3 011|BAID |F 1| |3 011|DS |ND|F 0| |3 011|3 011|D 00| |4 100|PERRY|M 0| |4 100|SE |NJ|S 1| |1 001|3 011|D 00| |5 101|JOAN |F 1| |5 101|AI |ND|F 0| |1 001|0 000|B 10| |2 010|2 010|B 10| |2 010|3 011|A 11| |4 100|4 100|B 10| |5 101|5 101|B 10| The bSQ attributes are stored as follows (note, e.g., bit-1 of S.s has been labeled Ss1, etc.). Ss1 Ss2 Ss3 Sg Cc1 Cc2 Cc3 Ct Es1 Es2 Es3 Ec1 Ec2 Ec3 Eg1 Eg2 0011 0000 0101 0001 0011 0000 0101 0110 0000 0000 0011 0000 0010 1010 1101 0100 00 11 01 11 00 11 01 10 0000 1111 1100 0000 0111 1101 1011 1001 11 00 01 11 00 01 11 00 BSQ attributes stored as single attribute files: S.name C.name C.st |CLAY | |BI | |ND| |THAIS| |DB | |ND| |GOOD | |DM | |NJ| |BAID | |DS | |ND| |PERRY| |SE | |NJ| |JOAN | |AI | |ND|

  44. For character string attributes, LZW or some other run-length compression could be used to further reduce storage requirements. The compression scheme should be chosen so that any range of offset entries can be uncompressed independently of the rest. Each of these BSQ files would require only a few pages of storage, allowing the entire BSQ file to be brought into memory whenever any portion of it is needed, thus eliminating the need for indexes and paging. A bit mask is formed for each selection as follows. The bit mask for S.gen=M is just the complement of S.g (since M has been coded as 0), therefore mS=Sg'. Similarly, mC=Ct and mE=Eg1 AND Eg2. mS mC mE 1110 0110 0100 00 10 1001 00 Logically ANDing mE into the E.s and E.c attributes reduces E.s and E.c as follows. Es1 Es2 Es3 Ec1 Ec2 Ec3 0 0 0 0 0 0 0 0 1 1 1 0 0 0 0 1 1 1 The S.s and C.c attributes would need to be reduced only when S.s and C.c are not already surrogate attributes. In E, each tuple is compared with the participation masks, mS and mC, to eliminate non-participating (E.s, E.c) pairs. The (E.s, E.c) pairs in binary are (000, 000), (011, 001) and (010, 011), or in decimal, (0,0), (3,1) and (2,3). The masks mS and mC reveal that S.s=0,1,4 and C.c=1,2,4 are the participating values. Therefore (3,1) and (2,3) are non-participating pairs and can be eliminated. Therefore there is but one participating (E.s, E.c) pair, namely (0, 0). Therefore, to answer the query, only the S.name value at offset 0 and the C.name value at offset 0 need to be retrieved. The output is (0, CLAY, BI). To review, once the basic P-trees for the join and selection attributes have been processed to remove all non-participants, only the participating BSQ values need to be accessed. The basic P-tree files for the join and selection attributes would typically be striped across a cluster of nodes so that the AND operations could be done very quickly in a parallel fashion. Our implementation on a 16-node cluster of 266 MHz Pentium computers shows that any multiway AND operation can be done in a few milliseconds.
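The three selection masks of this example in a minimal sketch on flat bit vectors (the bits are transcribed from the tables on the previous slide):

```python
# Sketch: selection masks for S.gen=M, C.term=S, E.grade=A.
Sg  = [0, 0, 1, 1, 0, 1]                   # S.gen (M coded 0)
Ct  = [0, 1, 1, 0, 1, 0]                   # C.term (S coded 1)
Eg1 = [1, 1, 1, 0, 0, 1, 1, 1, 1, 1]       # E.grade high bit
Eg2 = [0, 1, 1, 0, 0, 0, 0, 1, 0, 0]       # E.grade low bit

mS = [1 - b for b in Sg]                   # complement: the male students
mC = Ct                                    # term = S
mE = [a & b for a, b in zip(Eg1, Eg2)]     # grade = A (binary 11)
print(mS, mC, mE)   # participants: S.s = 0,1,4; C.c = 1,2,4; E rows 1,2,7
```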

  45. Select-Project-Join (SPJ) Queries We deal with an example in which more than one join is required and there is more than one join attribute (a bushy query tree). We organize our query trees using the "constellation" model, in which one of the fact files is considered central and the others are points in a star around that central file. Each secondary star-point fact file can be the center of a "sub-star". We apply the selection masks first. Then we perform semi-joins from the boundary toward the central fact file. Finally we perform semi-joins back out again. The result is the full elimination of all non-participants. The following is an example of such a bushy query. Those details that are identical to the above are not repeated here. SELECT S.n,C.n,R.capacity FROM S,C,E,O,R WHERE S.s=E.s & C.c=O.c & O.o=E.o & O.r=R.r & S.gen=M & C.cred=2 & E.grade=A & R.capacity=10; In this query, O is taken as the central fact relation of the following database. S___________C___________ E_________________ O_______________ R_____________ |s |n|gen| |c |n|cred| |s |o |grade| |o |c |r | |r |capacity| |0 000|A|M 0| |0 00|B|1 01| |0 000|1 001|2 10| |0 000|0 00|0 01| |0 00|30 11| |1 001|T|M 0| |1 01|D|3 11| |0 000|0 000|3 11| |1 001|0 00|1 01| |1 01|20 10| |2 010|S|F 1| |2 10|M|3 11| |3 011|1 001|3 11| |2 010|1 01|0 00| |2 10|30 11| |3 011|B|F 1| |3 11|S|2 10| |3 011|3 011|0 00| |3 011|1 01|1 01| |3 11|10 01| |4 100|C|M 0| |1 001|3 011|0 00| |4 100|2 10|0 00| |5 101|J|F 1| |1 001|0 000|2 10| |5 101|2 10|2 10| Sn |2 010|2 010|2 10| |6 110|2 10|3 11| A |2 010|3 011|3 11| |7 111|3 11|2 10| T |4 100|4 100|2 10| S |5 101|5 101|2 10| Ss1 Ss2 Ss3 Sgen B 0011 0000 0101 0001 C Egrade1 Egrade2 Cn 00 11 01 11 J 1101 0100 Cc1 Cc2 Ccred1 Ccred2 B 1011 1001 00 01 01 11 D Es1 Es2 Es3 Eo1 Eo2 Eo3 11 00 11 01 11 10M 0000 0000 0011 0000 0010 1010 S 0000 1111 1100 0000 0111 1101 Rr1 Rr2 Rcap1 Rcap2 11 00 01 11 00 01 00 01 11 10 11 01 10 11 Oo1 Oo2 Oo3 Oc1 Oc2 Or1 Or2 0011 0000 0101 0011 0000 0001 1100 0011 1111 0101 0011 1101 0011 0110

  46. 1. Apply selection masks: mE mR mC 0100 00 00 1001 10 01 00 Es1 Es2 Es3 Eo1 Eo2 Eo3 0000 0000 0011 0000 0010 1010 0000 1111 1100 0000 0111 1101 11 00 01 11 00 01 Rc1 Rc2 Cr1 Cr2 11 10 01 11 10 11 11 10 Oo1 Oo2 Oo3 Oc1 Oc2 Or1 Or2 0011 0000 0101 0011 0000 0001 1100 0011 1111 0101 0011 1101 0011 0110 2. Results in the following: Es1 Es2 Es3 Eo1 Eo2 Eo3 0 0 0 0 0 0 0 0 1 1 1 0 0 0 0 1 1 1 Rc1 Rc2 Cr1 Cr2 1 1 1 0 Oo1 Oo2 Oo3 Oc1 Oc2 Or1 Or2 0011 0000 0101 0011 0000 0001 1100 0011 1111 0101 0011 1101 0011 0110 3. Apply join-attribute selection-masks externally to further reduce the P-trees: mS s=0,1,4 are the participants. mC c=3 is the only participant. mR r=2 is the only participant. 1110 00 00 00 01 10 Produces: Es1 Es2 Es3 Eo1 Eo2 Eo3 Rc1 Rc2 Cr1 Cr2 0 0 0 0 0 0 0 0 1 1 1 0 0 0 0 1 1 1 1 1 1 0 Oo1 Oo2 Oo3 Oc1 Oc2 Or1 Or2 0011 0000 0101 1 0 0011 1111 0101 1 1 1 0 4. Completing the elimination of newly discovered non-participants internally in each file results in: Es1 Es2 Es3 Eo1 Eo2 Eo3 Rc1 Rc2 Cr1 Cr2 Oo1 Oo2 Oo3 Oc1 Oc2 Or1 Or2 0 0 0 0 0 0 1 0 1 1 1 0 1 1 1 1 1 1 0 5. Thus, s has to be 0 (000), o has to be 0 (000), c has to be 3 (11) and r has to be 2 (10). But o also has to be 7 (111). Since o cannot be both 0 and 7, there are no participants.

  47. DISTINCT Keyword, GROUP BY Clause, ORDER BY Clause, HAVING Clause and Aggregate Operations • Duplicate elimination after a projection (SQL DISTINCT keyword) is one of the most expensive operations in query optimisation. In general, it is as expensive as the join operation. However, in our approach, it can automatically be done while forming the output tuples (since that is done in order). While forming all output records for a particular value of the ORDER BY attribute, duplicates can be easily eliminated without the need for an expensive algorithm. • The ORDER BY and GROUP BY clauses are very commonly used in queries and can require a sorting of the output relation. However, in our approach, if the central relation is chosen to be the one with the sort attribute and the surrogation is according to the attribute order (typically the case – always the case for numeric attributes), then the final output records can be put together and aggregated in the requested order without a separate sort step, at no additional cost. Aggregation operators such as COUNT, SUM, AVG, MAX, and MIN can be implemented without additional cost during the output formation step, and any HAVING decision can be made as output records are being composed, as well. • If the Count aggregate is requested by itself, we note that P-trees automatically provide the full counts for any predicate with just one multiway AND operation.

  48. The following example illustrates these points. SELECT DISTINCT C.c, R.capacity FROM S,C,E,O,R WHERE S.s=E.s AND C.c=O.c AND O.o=E.o AND O.r=R.r AND C.cred>1 AND (E.grade='B' OR E.grade='A') AND R.capacity>10 ORDER BY C.c; S___________C___________ E_________________ O_______________ R_____________ |s |n|gen| |c |n|cred| |s |o |grade| |o |c |r | |r |capacity| |0 000|A|M 0| |0 00|B|1 01| |0 000|1 001|2 10| |0 000|0 00|0 01| |0 00|30 11| |1 001|T|M 0| |1 01|D|3 11| |0 000|0 000|3 11| |1 001|0 00|1 01| |1 01|20 10| |2 010|S|F 1| |2 10|M|3 11| |3 011|1 001|3 11| |2 010|1 01|0 00| |2 10|30 11| |3 011|B|F 1| |3 11|S|2 10| |3 011|3 011|0 00| |3 011|1 01|1 01| |3 11|10 01| |4 100|C|M 0| |1 001|3 011|0 00| |4 100|2 10|0 00| |5 101|J|F 1| |1 001|0 000|2 10| |5 101|2 10|2 10| Sn |2 010|2 010|2 10| |6 110|2 10|3 11| A |2 010|3 011|3 11| |7 111|3 11|2 10| T |4 100|4 100|2 10| S |5 101|5 101|2 10| Ss1 Ss2 Ss3 Sgen B 0011 0000 0101 0001 C Egrade1 Egrade2 Cn 00 11 01 11 J 1101 0100 Cc1 Cc2 Ccred1 Ccred2 B 1011 1001 00 01 01 11 D Es1 Es2 Es3 Eo1 Eo2 Eo3 11 00 11 01 11 10M 0000 0000 0011 0000 0010 1010 S 0000 1111 1100 0000 0111 1101 Rr1 Rr2 Rcap1 Rcap2 11 00 01 11 00 01 00 01 11 10 11 01 10 11 Oo1 Oo2 Oo3 Oc1 Oc2 Or1 Or2 0011 0000 0101 0011 0000 0001 1100 0011 1111 0101 0011 1101 0011 0110 • Apply selection masks: mE =Egrade1 mR =Rcap1 mC =Ccred1 1101 11 01 1011 10 11 11

  49. results in, Es1 Es2 Es3 Eo1 Eo2 Eo3 Rr1 Rr2 Cc1 Cc2 00 0 00 0 00 1 00 0 00 0 10 0 00 01 0 1 0 00 1 11 1 00 0 00 0 11 1 01 1 0 11 01 11 00 01 11 00 01 Semijoin (toward center), E→O (on o=0,1,2,3,4,5), R→O (on r=0,1,2), C→O (on c=1,2,3), reduces Oo1 Oo2 Oo3 Oc1 Oc2 Or1 Or2 0011 0000 0101 0011 0000 0001 1100 0011 1111 0101 0011 1101 0011 0110 to Oo1 Oo2 Oo3 Oc1 Oc2 Or1 Or2 11 00 01 11 00 01 00 00 11 01 00 11 00 01 Thus, the participants are c=1,2; r=0,1,2; o=2,3,4,5. Semijoining back again produces the following. Cc1 Cc2 Rr1 Rr2 Es1 Es2 Es3 Eo1 Eo2 Eo3 0 1 00 01 00 11 00 00 11 01 1 0 1 0 11 00 01 11 00 01 Thus, the S participants are s=2,4,5. Ss1 Ss2 Ss3 11 00 01 0 1 0 • Output tuples are determined from the participating O.c P-trees. RC(PO.c(2)) = RC(Oc1 ^ Oc2’) = 2. Since the 1-bits are in positions 4 and 5, the two O-tuples have O.o surrogate values 4 and 5. The r-values at positions 4 and 5 of O.r are 0 and 2. Thus, we retrieve the R.capacity values at offsets 0 and 2. However, both of these R.capacity values are 30. Thus, this duplication is discovered without sorting or additional processing. The only output is (2,30). Similarly, RC(PO.c(1)) = RC(Oc1’ ^ Oc2) = 2. Finally, note that if the ORDER BY clause is over an attribute which is not in the relation O (e.g., over student number, s), then we center the query tree (or wheel) on a fact file that contains the ORDER BY attribute (e.g., on E in this case). If the ORDER BY attribute is not in any fact file (in a dimension file only), then the final query tree can be re-arranged to center on the dimension file containing that attribute. • Since output ordering and duplicate elimination are traditionally very expensive sub-operations of SPJ query processing, the fact that our BDM model and the P-tree data structure provide a fast and efficient way to accomplish these operations is a very favorable aspect of the approach.

  50. Combining Data Mining and Query Processing • Many data mining requests involve pre-selection, pre-join, and pre-projection of a database to isolate the specific data subset to which the data mining algorithm is to be applied. For example, in the above database, one might be interested in all Association Rules of a given support threshold and confidence threshold across all the relations of the database. The brute force way to do this is to first join all relations into one universal relation and then to mine that gigantic relation. This is not a feasible solution in most cases due to the size of the resulting universal relation. Furthermore, often some selection on that universal relation is desirable prior to the mining step. • Our approach accommodates combinations of querying and data mining without necessitating the creation of a massive universal relation as an intermediate step. Essentially, the full vertical partitioning and P-trees provide a selection and join path which can be combined with the data mining algorithm to produce the desired solution without extensive processing and massive space requirements. The collection of P-trees and BSQ files constitutes a lossless, compressed version of the universal relation. Therefore the above techniques, when combined with the required data mining algorithm, can produce the combination result very efficiently and directly.
