
Mining Multiple-level Association Rules in Large Databases


Presentation Transcript


  1. Mining Multiple-level Association Rules in Large Databases IEEE Transactions on Knowledge and Data Engineering, 1999. Authors: Jiawei Han, Simon Fraser University, British Columbia (now: University of Illinois Urbana-Champaign, CS Dept); Yongjian Fu, University of Missouri-Rolla, Missouri (now: Simon Fraser University, British Columbia). Story: Ramesh Kumar. Adapted from screenplay by Zhenyu Lu (2007)

  2. OUTLINE • Introduction • Multiple-Level Association Rules • A Method For Mining M-L Association Rules • Performance Study • Mining Cross-Level Associations • Interesting Association Rules • Conclusions and Future Work • Exam Questions

  3. INTRODUCTION • Why Multiple-Level (ML) Association Rules • Pre-requisites for M-L Data Mining (MLDM *) • Possible Approaches and Rationale • Assumptions • How this differs from previous research *MLDM=Multiple-Level Data Mining

  4. WHY MLDM? • Rule A: 70% of customers who bought diapers also bought beer • Rule B: 45% of customers who bought cloth diapers also bought dark beer • Rule C: 35% of customers who bought Pampers also bought Samuel Adams beer.

  5. WHY MLDM? • What are the conceptual differences between the three rules? • Rule A applies at a generic higher level of abstraction (product) • Rule B applies at a more specific level of abstraction (category) • Rule C applies at the lowest level of abstraction (brand). This process is called drilling down.

  6. WHY MLDM? • More specific information is found at lower levels • Hence it is essential to mine at the different levels of the concept hierarchy • Different levels of association rules enable different strategies • Helps to factor out uninteresting or coincidental rules

  7. PRE-REQUISITES FOR MLDM • Data representation at different levels of abstraction • Level 1: {DIAPERS, BEER} • Level 2: {CLOTH, DISPOSABLE} {REGULAR, LITE} • Level 3: {‘BUMKINS’, ’KUSHIES’, ’PAMPERS’, ’HUGGIES’} {‘BUDWEISER’, ‘MILLER LITE’, ‘SAMUEL ADAMS’, ‘HEINEKEN’} • Efficient methods for ML rule mining (the focus of this paper)
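One rough sketch (not from the paper): such a hierarchy can be held in a nested mapping and walked level by level. The assignment of brands to level-2 categories below is assumed for illustration.

```python
# Illustrative three-level hierarchy as a nested dict; the brand-to-category
# assignments are assumed, not taken from the paper.
taxonomy = {
    "DIAPERS": {
        "CLOTH": ["BUMKINS", "KUSHIES"],
        "DISPOSABLE": ["PAMPERS", "HUGGIES"],
    },
    "BEER": {
        "REGULAR": ["BUDWEISER", "SAMUEL ADAMS", "HEINEKEN"],
        "LITE": ["MILLER LITE"],
    },
}

# Enumerate the (level-1, level-2, level-3) paths through the hierarchy.
for product, categories in taxonomy.items():
    for category, brands in categories.items():
        for brand in brands:
            print(product, "->", category, "->", brand)
```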

  8. HIERARCHY TYPES • Generalization/Specialization (is-a relationships) • Generalization/Specialization with multiple inheritance • Whole-Part hierarchies (is-part-of; has-part)

  9. GENERALIZATION-SPECIALIZATION (example tree): Vehicle → {2-Wheels, 4-Wheels}; 2-Wheels → {Bicycle, Motorcycle}; 4-Wheels → {Sedan, SUV}

  10. GENERALIZATION-SPECIALIZATION WITH MULTIPLE INHERITANCE (example tree): Vehicle → {Recreational, Commuting}; Recreational → {Bicycle, Snowmobile}; Commuting → {Car, Bicycle}; Bicycle inherits from both parents

  11. WHOLE-PART HIERARCHIES (example tree): Computer has-part {Hard Drive, Motherboard}; Hard Drive has-part {RW Head, Platter}; Motherboard has-part {RAM, CPU}

  12. FOCUS OF THE PAPER Efficient mining of multiple-level association rules

  13. DIFFERENT METHODS • Apply the single-level Apriori algorithm to each of the multiple levels under the same miniconf and minisup. • Potential problems? • Higher levels of abstraction have higher support, and lower levels have lower support • Is there a single optimal minisup for all levels? • Too high a minisup => too few frequent itemsets at the lower levels • Too low a minisup => far too many uninteresting rules

  14. POSSIBLE SOLUTIONS • Use a different minisup at different levels • Possibly also a different miniconf at different levels • Progressively decrease minisup as we go down the tree to lower levels • This paper studies a progressive deepening method developed as an extension of the Apriori algorithm

  15. MAIN ASSUMPTION • Explore only the descendants of frequent items at any level. • In other words, if an item is non-frequent at one level, its descendants no longer figure in further analysis. ARE THERE ANY PROBLEMS THAT CAN ARISE BECAUSE OF THIS ASSUMPTION?

  16. MAIN ASSUMPTION: Explore only descendants of frequent items at any level. What is a potential problem with this approach? • It can eliminate potentially interesting rules for itemsets at one level whose ancestors are not frequent at higher levels • If so, this can be addressed by 2 workarounds • Use 2 minisup values at higher levels: one for filtering infrequent items, the other for passing down frequent items to lower levels; the latter is called the level passage threshold (lph) • The lph may be adjusted by the user to allow the descendants of sub-frequent items to be examined, as in the sketch below
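A minimal sketch of the two-threshold workaround, assuming absolute support counts; the function name and sample data are illustrative, not from the paper.

```python
# Split one level's items by two thresholds: minisup for reporting frequent
# items, and the lower level passage threshold (lph) for deciding whose
# descendants are still examined at the next level.
def split_items(support_counts, minisup, lph):
    frequent = {i for i, c in support_counts.items() if c >= minisup}
    passed_down = {i for i, c in support_counts.items() if c >= lph}
    return frequent, passed_down

freq, passed = split_items({"1**": 5, "2**": 5, "3**": 2}, minisup=4, lph=2)
print(freq)    # {'1**', '2**'}: reported as frequent at this level
print(passed)  # also includes the sub-frequent '3**', passed to the next level
```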

  17. DIFFERENCES FROM PREVIOUS RESEARCH • Other studies use same minisup across different levels of the hierarchy • This study…. • Uses different minisup values at different levels of the hierarchy • Analyzes different optimization techniques • Studies the use of interestingness measures

  18. MULTIPLE LEVEL ASSOCIATION RULES • Definitions • Example Taxonomy • Working of the algorithm

  19. DEFINITIONS-1 Assume the database contains: • An item dataset containing item descriptions: {<Ai>, <description>} • A transaction dataset T containing a set of transactions {TID, {Ap, …, Aq}}, where TID is a transaction identifier (key)

  20. DEFINITION 2.1 A pattern or itemset A is one item Ai or a set of conjunctive items Ai ∧ … ∧ Aj. The support of A in a set S, σ(A/S), is the fraction of transactions in S that contain A. The confidence of a rule A => B in S is φ(A => B) = σ(A ∪ B)/σ(A), i.e. the conditional probability that a transaction contains B given that it contains A. Two thresholds are specified, minisup (σ′) and miniconf (φ′), with different values at different levels.
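A small illustration of Definition 2.1 on toy transactions (the transaction contents are assumed):

```python
transactions = [
    {"milk", "bread"},
    {"milk", "bread", "butter"},
    {"milk"},
    {"bread"},
]

def support(itemset, transactions):
    # sigma(A/S): fraction of transactions in S that contain all of A.
    return sum(1 for t in transactions if itemset <= t) / len(transactions)

def confidence(A, B, transactions):
    # phi(A => B) = sigma(A u B) / sigma(A).
    return support(A | B, transactions) / support(A, transactions)

print(support({"milk", "bread"}, transactions))       # 0.5
print(confidence({"milk"}, {"bread"}, transactions))  # 0.666...
```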

  21. DEFINITION 2.2 • A pattern A is frequent in a set S at level l if the support of A is no less than the corresponding minimum support threshold σ′ • A rule A => B is strong for a set S if: • each ancestor of every item in A and B is frequent at its corresponding level • A ∧ B is frequent at the current level and • the confidence of A => B is no less than the miniconf at that level. This ensures that the patterns examined at the lower levels arise from itemsets that have high support at the higher levels.

  22. HOW DOES THE METHOD WORK? Example: find multiple-level strong association rules for purchase patterns related to category, content and brand • Retrieve the relevant data from TABLE 2 and merge it into a generalized sales_item table, with the relevant bar codes replaced by a bar_code set as in TABLE 3 • Find frequent patterns and strong rules at the highest level. 1-item, 2-item, …, k-item itemsets may be discovered, of the form {bread, vegetable, milk, …} • At the next level the process is repeated, but the itemsets will be more specific, e.g. {2% milk, lettuce, white bread} • Repeat steps 2 to 3 at all levels until no more frequent patterns are found

  23. OUTLINE • Introduction • Multiple-Level Association Rules • An Algorithm For Mining M-L Association Rules • Performance Study • Mining Cross-Level Associations • Presentation Of Interesting Association Rules • Conclusions and Future Work • My Own Conclusions • Exam Questions

  24. ALGORITHM: Taxonomy For This Exercise (tree shown as a figure in the original slide)
  food (root)
  L1: milk, bread
  L2: milk → {chocolate, 2%}; bread → {wheat, white}
  L3 (brands): milk side → {Dairyland, Foremost}; bread side → {Old Mills, Wonder}
  L1, L2 and L3 correspond to the 3 levels of the hierarchy

  25. ALGORITHM: Dataset For This Exercise. TABLE 1: Sales-transaction Table; TABLE 2: sales_item (Description) Relation; TABLE 3: Generalized sales_item Description Table (the tables appear as figures in the original slides)

  26. ALGORITHM: Explanation Of GID. GID = 112 encodes ‘Foremost 2% Milk’: the level-1 digit is item 1 (‘Milk’), the level-2 digit is item 1 (‘2%’), and the level-3 digit is item 2 (‘Foremost’).
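The payoff of this digit-per-level encoding is that generalizing an item to a higher level is just masking the lower digits, which matches the {1**, 11*} notation used in the following steps. A sketch (the function name is assumed):

```python
def gid_prefix(gid, level):
    # Keep the first `level` digits and mask the rest with '*'.
    return gid[:level] + "*" * (len(gid) - level)

print(gid_prefix("112", 1))  # '1**': milk
print(gid_prefix("112", 2))  # '11*': 2% milk
print(gid_prefix("112", 3))  # '112': Foremost 2% milk
```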

  27. ALGORITHM: Encoded Transaction Table T[1] (shown as a figure in the original slide)

  28. ALGORITHM: Step 1. Scan T[1] with level-1 minisup = 4 to find L[1,1], the level-1 frequent 1-itemsets:
  L[1,1]: itemset {1**}, support 5; itemset {2**}, support 5

  29. ALGORITHM: Step 2. Filter T[1], keeping only the items whose level-1 ancestor is in L[1,1], to obtain the filtered table T[2]; then derive L[1,2], the level-1 frequent 2-itemsets.
  Filtered T[2]: T1 {111,121,211,221}; T2 {111,211,222}; T3 {112,122,221}; T4 {111,121}; T5 {111,122,211,221}; T6 {211}
  (L[1,1] as above: {1**} 5; {2**} 5)

  30. ALGORITHM: Step 3. Scan the filtered T[2] with level-2 minisup = 3:
  L[2,1] (frequent 1-itemsets): {11*} 5; {12*} 4; {21*} 4; {22*} 4
  L[2,2] (frequent 2-itemsets): {11*,12*} 4; {11*,21*} 3; {11*,22*} 4; {12*,22*} 3; {21*,22*} 3
  L[2,3] (frequent 3-itemsets): {11*,12*,22*} 3; {11*,21*,22*} 3

  31. ALGORITHM: Level 3 Ops. Scan the filtered T[2] with level-3 minisup = 3:
  L[3,1] (frequent 1-itemsets): {111} 4; {211} 4; {221} 3
  L[3,2] (frequent 2-itemsets): {111,211} 3

  32. This Leads Us To The Following Algorithm…..

  33. Algorithm ML_T2L1 • Purpose: to find the multiple-level frequent itemsets for mining strong association rules in a transaction database • Input • T[1]: a hierarchy-information-encoded transaction table of the form <TID, itemset> • a minisup threshold for each level l, in the form minisup[l] • Output: the multiple-level frequent itemsets
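The following is a minimal Python sketch of the progressive-deepening idea behind ML_T2L1, simplified from the paper: candidate generation is a plain Apriori join, and only one filtered table T[2] is kept, built after level 1. On the walkthrough data from the earlier slides it reproduces L[1,1] through L[3,2].

```python
from itertools import combinations

def mask(gid, level):
    # Generalize a GID to a level prefix: mask('112', 2) == '11*'.
    return gid[:level] + "*" * (len(gid) - level)

def ml_t2l1(T1, minisup, max_level=3):
    """T1: list of transactions (sets of GID strings); minisup: {level: count}.
    Returns {(level, k): {frozenset(items): support}}."""
    L = {}
    T = T1
    for level in range(1, max_level + 1):
        # Count 1-itemsets generalized to this level.
        counts = {}
        for t in T:
            for g in {mask(g, level) for g in t}:
                counts[g] = counts.get(g, 0) + 1
        # Only descendants of frequent items survive (the main assumption).
        if level > 1:
            prev = {next(iter(s)) for s in L[(level - 1, 1)]}
            counts = {g: c for g, c in counts.items()
                      if mask(g, level - 1) in prev}
        Lk = {frozenset([g]): c for g, c in counts.items()
              if c >= minisup[level]}
        if not Lk:
            break
        L[(level, 1)] = Lk
        if level == 1:
            # Build T[2]: drop items whose level-1 ancestor is infrequent,
            # then drop transactions that became empty.
            keep = {next(iter(s)) for s in Lk}
            T = [f for f in ({g for g in t if mask(g, 1) in keep}
                             for t in T1) if f]
        # Apriori-style passes for k >= 2 at this level.
        k = 2
        while Lk:
            items = sorted({i for s in Lk for i in s})
            cands = [frozenset(c) for c in combinations(items, k)
                     if all(frozenset(sub) in Lk
                            for sub in combinations(c, k - 1))]
            counts = {c: sum(1 for t in T
                             if c <= {mask(g, level) for g in t})
                      for c in cands}
            Lk = {c: n for c, n in counts.items() if n >= minisup[level]}
            if Lk:
                L[(level, k)] = Lk
            k += 1
    return L

T1 = [{"111", "121", "211", "221"}, {"111", "211", "222"},
      {"112", "122", "221"}, {"111", "121"},
      {"111", "122", "211", "221"}, {"211"}]
print(ml_t2l1(T1, {1: 4, 2: 3, 3: 3})[(3, 2)])  # {frozenset({'111','211'}): 3}
```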

  34. Algorithm Variations • Algorithm ML_T1LA • Uses only the first encoded transaction table T[1]. • No filtered T[2] is generated during processing • Support for the candidate sets at all levels is computed at the same time, by scanning T[1] once per pass • Pros: only one table and at most k scans (for k-itemsets) • Cons: T[1] may contain many infrequent items, and keeping track of the candidates at all levels requires a large space

  35. Algorithm Variations • Algorithm ML_TML1 • Generates multiple encoded transaction tables T[1], …, T[max_l+1] • A new table T[l+1] is generated at each level l by filtering out infrequent items • T[l+1] is used both to generate the itemsets at level l+1 and to derive T[l+2] • Pros: may save a substantial amount of processing • Cons: can be inefficient if only a few items are filtered out at each level processed

  36. Algorithm Variations • Algorithm ML_T2LA • Uses 2 encoded transaction tables, i.e. T[1] and T[2], as in algorithm ML_T2L1 • Support for the candidate sets at all levels is computed at the same time by scanning T[2] once per pass • Avoids the generation of successive filtered tables T[l+1] at each level • Scans T[1] twice to generate T[2] • Then scans T[2] once for the generation of each frequent k-itemset • Pros: potentially efficient if T[2] contains far fewer items than T[1]. • Cons: ??

  37. Performance Study • Assumptions: • The maximal level in the concept hierarchy is 3 • A randomized transaction generation algorithm is used to generate a set of synthetic databases • I = number of items = 1,000; L = number of potentially frequent itemsets = 2,000; D = total number of transactions = 100,000 • Two data settings are used: DB1 (average frequent itemset length = 4, average transaction size = 10) and DB2 (average frequent itemset length = 6, average transaction size = 20)

  38. Performance Study (performance curves shown as figures). DB1: average frequent itemset length = 4, average transaction size = 10, with minisup[2] = 3% and minisup[3] = 1%. DB2: average frequent itemset length = 6, average transaction size = 20, with minisup[2] = 2% and minisup[3] = 0.75%.

  39. Performance Study (performance curves shown as figures). DB1: average frequent itemset length = 4, average transaction size = 10, with minisup[1] = 55% and minisup[3] = 1%. DB2: average frequent itemset length = 6, average transaction size = 20, with minisup[1] = 60% and minisup[3] = 0.75%.

  40. Performance Study (performance curves shown as figures). Settings: minisup[1] = 60% and minisup[2] = 2% (DB2); minisup[1] = 55% and minisup[2] = 3% (DB1).

  41. Performance Study Conclusions: • Relative performance of the four algorithms is highly relevant to the threshold setting (i.e., the power of a filter at each level). • Parallel derivation of L(l,k) is useful and deriving a transaction table T[2] is usually beneficial. • ML_T1LA is found to be the BEST or the second best algorithm.

  42. Cross-level association (two taxonomy figures in the original slide)
  Before expansion: food → {milk, bread}; milk → {m1, m2}; bread → {b1, b2}; mine rules within a level, like milk => bread and m2 => b1.
  After expansion: the same hierarchy, but also mine cross-level rules like milk => b1.

  43. Cross-level association Two adjustments: • A single minisup is used at all levels • When the frequent k-itemsets are generated, items at all levels are considered; itemsets that contain both an item and one of its ancestors are excluded (see the sketch below)
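A sketch of the second adjustment as a predicate over candidate itemsets, using the GID-prefix notation from the walkthrough (the function names are assumed):

```python
def is_ancestor(anc, item):
    # '1**' generalizes '11*' and '112'; an ancestor always contains '*'.
    return "*" in anc and anc != item and item.startswith(anc.rstrip("*"))

def valid_cross_level(itemset):
    # Reject any candidate containing both an item and one of its ancestors.
    return not any(is_ancestor(a, b) for a in itemset for b in itemset)

print(valid_cross_level({"1**", "211"}))  # True: mixes levels, no ancestor pair
print(valid_cross_level({"1**", "112"}))  # False: '1**' is an ancestor of '112'
```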

  44. Cross-level association Results • Two algorithms were implemented: ML_T2LA-C and ML_T1LA-C • Both algorithms are an order of magnitude slower than their counterparts for minisup ranging from 1% to 5%

  45. Filtering of uninteresting association rules • Redundant Rules • Redundant rule = a rule whose support and confidence can be predicted from a higher-level rule. • Example: • milk => bread (12% support, 85% confidence) • chocolate milk => bread (1% support, 84% confidence) • The second rule is not interesting if about 8% of milk sold is chocolate milk, since its support and confidence then match what the higher-level rule predicts. • Removal of redundant rules: when a rule R passes the minimum confidence test, it is checked against every strong rule R′ of which R is a descendant; if the confidence of R, φ(R), falls within the expected confidence range (allowing a user-defined variation), R is removed. • Can reduce the number of rules by 30% to 60%

  46. Filtering of uninteresting association rules • Unnecessary Rules • Unnecessary rule = a rule that provides little extra information over a previous rule • Example: • 80% of customers who buy milk also buy bread • 80% of customers who buy milk and butter also buy bread; little information is gained by the second rule • Definition 6.2: A rule R, A ∧ C => B, is unnecessary if there is a rule R′, A => B, and φ(R) ∈ [φ(R′) − β, φ(R′) + β], where β is a user-defined constant and A, B and C are itemsets

  47. Filtering of uninteresting association rules (continued) • Removal of unnecessary rules: for each strong rule R: A => B, we test every such rule R′: A − C => B, where C ⊂ A. If the confidence of R, φ(R), is not significantly different from that of R′, φ(R′), then R is not presented to the user (a sketch of this test follows). • Experiments reveal that this reduces the number of rules by 50% to 80%
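A minimal sketch of the Definition 6.2 test; the β value here is an assumed example:

```python
def is_unnecessary(conf_r, conf_r_prime, beta=0.05):
    # R: A ^ C => B is unnecessary if its confidence is within +/- beta
    # of the confidence of the simpler rule R': A => B.
    return abs(conf_r - conf_r_prime) <= beta

# milk => bread at 0.80 vs. milk ^ butter => bread at 0.80: filtered out.
print(is_unnecessary(0.80, 0.80))  # True
print(is_unnecessary(0.95, 0.80))  # False: the longer rule adds information
```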

  48. CONCLUSIONS • Extended association rules from a single level to multiple levels. • A top-down progressive deepening technique was developed for mining multiple-level association rules. • Developed techniques for filtering out uninteresting (redundant and unnecessary) association rules

  49. FUTURE WORK • Develop efficient algorithms for mining multiple-level sequential patterns • Another interesting issue is the study of mining multiple-level correlations in databases

  50. Exam Question 1 • What is a major drawback to multiple-level data mining using the same minisup at all levels of a concept hierarchy? A. Large support exists at higher levels of the hierarchy and smaller support at lower levels. In order to ensure that sufficiently strong association rules are generated at the lower levels, we must lower the minisup, which in turn could result in the generation of many uninteresting rules at the higher levels. Thus we are faced with the problem of determining a single optimal minisup for all levels.
