
Privacy preserving data mining – randomized response and association rule hiding


Presentation Transcript


  1. Privacy preserving data mining – randomized response and association rule hiding Li Xiong CS573 Data Privacy and Anonymity Partial slides credit: W. Du, Syracuse University, Y. Gao, Peking University

  2. Privacy Preserving Data Mining Techniques • Protecting sensitive raw data • Randomization (additive noise) • Geometric perturbation and projection (multiplicative noise) • Randomized response technique • Categorical data perturbation in data collection model • Protecting sensitive knowledge (knowledge hiding)

  3. Data Collection Model: Data cannot be shared directly because of privacy concerns

  4. Background: Randomized Response • Question: “Do you smoke?” The true answer is “Yes” • Flip a biased coin: heads → report the true answer (“Yes”); tails → report the opposite answer (“No”)
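
A minimal sketch (not on the slides; the coin bias theta = 0.7 and the population size are made-up) of this binary randomized-response scheme and of how the collector recovers the true "yes" proportion from the perturbed answers:

```python
import random

def randomize(true_answer: bool, theta: float = 0.7) -> bool:
    """Heads (probability theta): report the true answer; tails: report the opposite."""
    return true_answer if random.random() < theta else not true_answer

def estimate_true_proportion(reported, theta: float = 0.7) -> float:
    """Invert P(reported yes) = theta*p + (1 - theta)*(1 - p) to estimate p."""
    p_reported = sum(reported) / len(reported)
    return (p_reported - (1 - theta)) / (2 * theta - 1)

# Example: 10,000 respondents, 30% of whom truly smoke.
truth = [random.random() < 0.3 for _ in range(10_000)]
reported = [randomize(t) for t in truth]
print(estimate_true_proportion(reported))   # close to 0.3
```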

  5. Decision Tree Mining using Randomized Response • Multiple attributes encoded in bits • Biased coin: heads → report the true answer E (e.g., 110); tails → report the false answer !E (001) • Column distributions can be estimated for learning a decision tree! Using Randomized Response Techniques for Privacy-Preserving Data Mining, Du, 2003

  6. Accuracy of Decision tree built on randomized response

  7. Generalization for Multi-Valued Categorical Data • The true value Si is reported as Si, Si+1, Si+2, or Si+3 with probabilities q1, q2, q3, q4, respectively, as specified by an RR matrix M

  8. A Generalization • RR matrices [Warner 65], [R. Agrawal 05], [S. Agrawal 05] • The RR matrix can be arbitrary • Can we find optimal RR matrices? OptRR: Optimizing Randomized Response Schemes for Privacy-Preserving Data Mining, Huang, 2008
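
A small illustrative sketch of the matrix formulation (the 3-value attribute, the particular matrix M, and the true distribution are made-up, not the optimized matrices from the paper): each true value is perturbed according to its column of M, and the collector estimates the original distribution by inverting M.

```python
import numpy as np

# RR matrix: M[j, i] = P(report value j | true value i); each column sums to 1.
M = np.array([[0.6, 0.2, 0.2],
              [0.2, 0.6, 0.2],
              [0.2, 0.2, 0.6]])

rng = np.random.default_rng(0)
true_vals = rng.choice(3, size=20_000, p=[0.5, 0.3, 0.2])           # hidden data
reported = np.array([rng.choice(3, p=M[:, v]) for v in true_vals])  # perturbed data

# Observed distribution lambda = M @ p, so the estimate is p_hat = M^{-1} @ lambda.
lam = np.bincount(reported, minlength=3) / len(reported)
p_hat = np.linalg.solve(M, lam)
print(p_hat)   # close to [0.5, 0.3, 0.2]
```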

  9. What is an optimal matrix? • Which of the following is better? Privacy: M2 is better. Utility: M1 is better. So, what is an optimal matrix?

  10. Optimal RR Matrix • An RR matrix M is optimal if no other RR matrix’s privacy and utility are both better than M’s (i.e., no other matrix dominates M) • Privacy quantification • Utility quantification • A number of privacy and utility metrics have been proposed • Privacy: how accurately one can estimate individual info • Utility: how accurately we can estimate aggregate info

  11. Metrics • Privacy: accuracy of estimate of individual values • Utility: difference between the original probability and the estimated probability

  12. Optimization Methods • Approach 1: Weighted sum: w1 Privacy + w2 Utility • Approach 2 • Fix Privacy, find M with the optimal Utility. • Fix Utility, find M with the optimal Privacy. • Challenge: Difficult to generate M with a fixed privacy or utility. • Proposed Approach: Multi-Objective Optimization

  13. Optimization Algorithm • Evolutionary Multi-Objective Optimization (EMOO) • The algorithm • Start with a set of initial RR matrices • Repeat the following steps in each iteration • Mating: select two RR matrices from the pool • Crossover: exchange several columns between the two RR matrices • Mutation: change some values in an RR matrix • Meet the privacy bound: filter out resulting matrices that violate it • Evaluate the fitness value of the new RR matrices. Note: the fitness value is defined in terms of the privacy and utility metrics
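
A highly simplified sketch of such an evolutionary loop (the privacy and utility functions, the operators, the population size, and the privacy bound below are placeholders, not the definitions used in the OptRR paper):

```python
import numpy as np

rng = np.random.default_rng(0)
K, POP, GENS = 3, 40, 200   # categories, initial pool size, iterations

def random_rr_matrix():
    """Random column-stochastic RR matrix (each column sums to 1)."""
    m = rng.random((K, K))
    return m / m.sum(axis=0)

def privacy(M):   # placeholder: more off-diagonal mass = harder to infer individuals
    return 1.0 - np.mean(np.diag(M))

def utility(M):   # placeholder: better-conditioned M = more accurate aggregate estimates
    return 1.0 / np.linalg.cond(M)

def crossover(A, B):
    """Exchange a few randomly chosen columns between two RR matrices."""
    child = A.copy()
    cols = rng.choice(K, size=rng.integers(1, K), replace=False)
    child[:, cols] = B[:, cols]
    return child

def mutate(M):
    """Perturb one column and renormalize it."""
    M = M.copy()
    c = rng.integers(K)
    M[:, c] = np.abs(M[:, c] + rng.normal(0, 0.05, K))
    M[:, c] /= M[:, c].sum()
    return M

def non_dominated(pool):
    """Keep only matrices not dominated in both privacy and utility."""
    scores = [(privacy(M), utility(M)) for M in pool]
    return [pool[i] for i, (p, u) in enumerate(scores)
            if not any(p2 >= p and u2 >= u and (p2, u2) != (p, u)
                       for j, (p2, u2) in enumerate(scores) if j != i)]

pool = [random_rr_matrix() for _ in range(POP)]
for _ in range(GENS):
    a, b = rng.choice(len(pool), 2)                # mating
    child = mutate(crossover(pool[a], pool[b]))    # crossover + mutation
    if privacy(child) >= 0.3:                      # filter: meet the privacy bound
        pool = non_dominated(pool + [child])       # fitness = (privacy, utility)
print(f"{len(pool)} non-dominated RR matrices found")
```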

  14. Illustration

  15. Output of Optimization • The optimal set is often plotted in the objective space as a Pareto front [plot: privacy vs. utility objective space with matrices M1–M8; the non-dominated matrices form the front, dominated ones lie toward the “worse” region]

  16. For the first attribute of the Adult data set

  17. Privacy Preserving Data Mining Techniques Protecting sensitive raw data Randomization (additive noise) Geometric perturbation and projection (multiplicative noise) Randomized response technique Protecting sensitive knowledge (knowledge hiding) Frequent itemset and association rule hiding Downgrading classifier effectiveness

  18. Frequent Itemset Mining and Association Rule Mining • Frequent itemset mining: frequent set of items in a transaction data set • Association rules: associations between items

  19. Frequent Itemset Mining and Association Rule Mining • First proposed by Agrawal, Imielinski, and Swami in SIGMOD 1993 • SIGMOD Test of Time Award 2003 “This paper started a field of research. In addition to containing an innovative algorithm, its subject matter brought data mining to the attention of the database community … even led several years ago to an IBM commercial, featuring supermodels, that touted the importance of work such as that contained in this paper. ” • Apriori algorithm in VLDB 1994 • #4 in the top 10 data mining algorithms in ICDM 2006 R. Agrawal, T. Imielinski, and A. Swami. Mining association rules between sets of items in large databases. In SIGMOD ’93. Apriori: Rakesh Agrawal and Ramakrishnan Srikant. Fast Algorithms for Mining Association Rules. In VLDB '94.

  20. Basic Concepts: Frequent Patterns and Association Rules [Venn diagram: customers buying beer, diapers, or both] • Itemset: X = {x1, …, xk} (k-itemset) • Frequent itemset: X with minimum support count • Support count (absolute support): number of transactions containing X • Association rule: A → B with minimum support and confidence • Support: probability that a transaction contains A ∪ B, s = P(A ∪ B) • Confidence: conditional probability that a transaction containing A also contains B, c = P(B | A) • Association rule mining process • Find all frequent patterns (the more costly step) • Generate strong association rules

  21. Illustration of Frequent Itemsets and Association Rules • Frequent itemsets (minimum support count = 3)? • Association rules (minimum support = 50%, minimum confidence = 50%)? • {A:3, B:3, D:4, E:3, AD:3} A → D (60%, 100%), D → A (60%, 75%)
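
Spelled out (the percentages imply a data set of 5 transactions; that size is inferred from the numbers above, not stated on the slide):

$$
\mathrm{supp}(A \Rightarrow D) = \frac{\sigma(A \cup D)}{|DB|} = \frac{3}{5} = 60\%, \qquad
\mathrm{conf}(A \Rightarrow D) = \frac{\sigma(A \cup D)}{\sigma(A)} = \frac{3}{3} = 100\%
$$
$$
\mathrm{supp}(D \Rightarrow A) = \frac{3}{5} = 60\%, \qquad
\mathrm{conf}(D \Rightarrow A) = \frac{\sigma(A \cup D)}{\sigma(D)} = \frac{3}{4} = 75\%
$$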

  22. Association Rule Hiding: What? Why? • Problem: hide sensitive association rules in the data without losing non-sensitive rules • Motivation: confidential rules may have serious adverse effects

  23. Problem Statement • Given • a database D to be released • minimum thresholds MST (support) and MCT (confidence) • a set of association rules R mined from D • a set of sensitive rules Rh ⊆ R to be hidden • Find a new database D’ such that • the rules in Rh cannot be mined from D’ • as many as possible of the rules in R − Rh can still be mined

  24. Solutions • Data modification approaches • Basic idea: data sanitization, D → D’ • Approaches: distortion, blocking • Drawbacks: cannot control the hiding effects intuitively; lots of I/O • Data reconstruction approaches • Basic idea: knowledge sanitization, D → K → D’ • Potential advantages: can easily control the availability of rules and the hiding effects directly, intuitively, and handily

  25. Distortion-Based Techniques [example: sample database vs. distorted database] • In the sample database, rule A→C has: Support(A→C) = 80%, Confidence(A→C) = 100% • In the distorted database, rule A→C now has: Support(A→C) = 40%, Confidence(A→C) = 50%
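
A minimal sketch of the distortion idea (not the specific algorithm on these slides; the example database and threshold are made-up): flip selected 1s to 0s in transactions supporting the sensitive rule until its support drops below the target.

```python
def support(db, itemset):
    """Fraction of transactions containing every item of itemset."""
    return sum(itemset <= t for t in db) / len(db)

def hide_by_distortion(db, antecedent, consequent, mst):
    """Delete the consequent's items from supporting transactions until
    support(antecedent ∪ consequent) falls below mst."""
    db = [set(t) for t in db]
    rule_items = antecedent | consequent
    for t in db:
        if support(db, rule_items) < mst:
            break
        if rule_items <= t:
            t -= consequent          # turn the consequent's 1s into 0s
    return db

# Hypothetical example: hide A -> C with a 50% support target.
db = [{'A', 'C'}, {'A', 'C'}, {'A', 'B', 'C'}, {'A', 'C', 'D'}, {'B', 'D'}]
print(support(db, {'A', 'C'}))                         # 0.8
sanitized = hide_by_distortion(db, {'A'}, {'C'}, 0.5)
print(support(sanitized, {'A', 'C'}))                  # 0.4
```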

  26. Side Effects

  27. Distortion-based Techniques • Challenges/Goals: • To minimize the undesirable Side Effects that the hiding process causes to non-sensitive rules. • To minimize the number of 1’s that must be deleted in the database. • Algorithms must be linear in time as the database increases in size.

  28. Sensitive itemsets: ABC

  29. Data distortion [Atallah 99] • Hardness result: the distortion problem is NP-hard • Heuristic search • Find items to remove and transactions to remove the items from Disclosure Limitation of Sensitive Rules, M. Atallah, A. Elmagarmid, M. Ibrahim, E. Bertino, V. Verykios, 1999

  30. Heuristic Approach • A greedy bottom-up search through the ancestors (subsets) of the sensitive itemset, choosing the parent with maximum support (why?) • At the end of the search, a 1-itemset is selected • Search through the common transactions containing both that item and the sensitive itemset for the transaction that affects the minimum number of 2-itemsets • Delete the selected item from the identified transaction
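
A rough sketch of one pass of this greedy selection (a simplification of the Atallah et al. heuristic; tie-breaking and the outer loop that repeats until the itemset is hidden are omitted, and the example data is made-up):

```python
from itertools import combinations

def count(db, itemset):
    return sum(itemset <= t for t in db)

def hide_one_occurrence(db, sensitive):
    """Remove one item from one transaction to lower the sensitive itemset's support."""
    # 1. Greedily walk from the sensitive itemset down to a 1-itemset, always
    #    keeping the subset with maximum support (it has the most slack, so
    #    lowering it slightly is least likely to hide other itemsets).
    current = set(sensitive)
    while len(current) > 1:
        current = max((current - {i} for i in current), key=lambda s: count(db, s))
    victim_item = next(iter(current))
    # 2. Among transactions containing the sensitive itemset, pick the one where
    #    deleting victim_item disturbs the fewest 2-itemsets.
    def affected_2_itemsets(t):
        return sum(1 for pair in combinations(sorted(t), 2) if victim_item in pair)
    victim_tx = min((t for t in db if set(sensitive) <= t), key=affected_2_itemsets)
    victim_tx.discard(victim_item)

db = [{'A', 'B', 'C'}, {'A', 'B', 'C', 'D'}, {'A', 'B'}, {'B', 'C'}]
hide_one_occurrence(db, {'A', 'B', 'C'})
print(count(db, {'A', 'B', 'C'}))   # support count reduced from 2 to 1
```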

  31. Results comparison

  32. Blocking-Based Techniques [example: initial database vs. new database with some values replaced by “?”] • Support and confidence become uncertain (they can only be bounded) • In the new database: 60% ≤ conf(A → C) ≤ 100%
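
A small sketch of why confidence becomes an interval once entries are blocked with "?" (the database below is made-up, and the bounds are the straightforward loose ones, not those of a particular blocking algorithm):

```python
def conf_bounds(db, antecedent, consequent):
    """Loose bounds on conf(antecedent -> consequent) when an entry may be
    1 (present), 0 (absent), or '?' (blocked). Transactions are dicts."""
    def definitely(items, t):   # transaction certainly contains all the items
        return all(t.get(i, 0) == 1 for i in items)
    def possibly(items, t):     # transaction could contain all the items
        return all(t.get(i, 0) in (1, '?') for i in items)
    both = antecedent | consequent
    min_num = sum(definitely(both, t) for t in db)
    max_num = sum(possibly(both, t) for t in db)
    min_den = sum(definitely(antecedent, t) for t in db)
    max_den = sum(possibly(antecedent, t) for t in db)
    return min_num / max(max_den, 1), min(1.0, max_num / max(min_den, 1))

# Hypothetical blocked database: '?' marks entries hidden by the algorithm.
db = [{'A': 1, 'C': 1}, {'A': 1, 'C': '?'}, {'A': 1, 'C': 1}, {'A': '?', 'C': 0}]
print(conf_bounds(db, {'A'}, {'C'}))   # (0.5, 1.0)
```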

  33. Data Reconstruction Approach • 1. Frequent set mining: original database D → frequent sets FS (and rules R) • 2. Sanitization algorithm: FS, R, Rh → sanitized frequent sets FS’ • 3. FP-tree-based inverse frequent set mining: FS’ → released database D’

  34. The first two phases • 1. Frequent set mining • Generate all frequent itemsets, with their supports and support counts, FS from the original database D • 2. Perform sanitization algorithm • Input: FS (output of phase 1), R, Rh • Output: sanitized frequent itemsets FS’ • Process • Select a hiding strategy • Identify sensitive frequent sets • Perform sanitization • In the best case, the sanitization algorithm ensures that exactly the non-sensitive rule set R − Rh can be obtained from FS’

  35. Example: the first two phases • Original database D (TID: items): T1: A B C E; T2: A B C; T3: A B C D; T4: A B D; T5: A D; T6: A C D • 1. Frequent set mining (σ = 4, MST = 66%, MCT = 75%) • Frequent itemsets FS (support count, support): A: 6, 100%; B: 4, 66%; C: 4, 66%; D: 4, 66%; AB: 4, 66%; AC: 4, 66%; AD: 4, 66% • Association rules R (confidence, support): B ⇒ A (100%, 66%); C ⇒ A (100%, 66%); D ⇒ A (100%, 66%) • 2. Perform sanitization algorithm • Sanitized frequent itemsets FS’: A: 6, 100%; C: 4, 66%; D: 4, 66%; AC: 4, 66%; AD: 4, 66% • Remaining association rules R − Rh (confidence, support): C ⇒ A (100%, 66%); D ⇒ A (100%, 66%)
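
As a sanity check on this example, a brute-force miner (not the FP-tree-based algorithm the approach actually uses) reproduces FS and R from D with σ = 4 and MCT = 75%:

```python
from itertools import combinations

db = [set('ABCE'), set('ABC'), set('ABCD'), set('ABD'), set('AD'), set('ACD')]
sigma, mct = 4, 0.75
items = sorted(set().union(*db))

def count(itemset):
    return sum(set(itemset) <= t for t in db)

# Frequent itemsets FS: every itemset with support count >= sigma.
fs = {frozenset(c): count(c)
      for k in range(1, len(items) + 1)
      for c in combinations(items, k)
      if count(c) >= sigma}
for s in sorted(fs, key=lambda s: (len(s), sorted(s))):
    print(''.join(sorted(s)), fs[s])            # A 6, B 4, C 4, D 4, AB 4, AC 4, AD 4

# Association rules R: lhs -> rhs from frequent itemsets with confidence >= MCT.
for s in fs:
    if len(s) < 2:
        continue
    for k in range(1, len(s)):
        for lhs in map(frozenset, combinations(s, k)):
            conf = fs[s] / fs[lhs]              # lhs is frequent too (Apriori property)
            if conf >= mct:
                rhs = s - lhs
                print(f"{''.join(sorted(lhs))} => {''.join(sorted(rhs))}: "
                      f"conf {conf:.0%}, supp {fs[s]}/{len(db)}")
# Prints B => A, C => A, D => A, each with conf 100% and supp 4/6.
```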

  36. Open research questions • Optimal solution • Itemset sanitization • The support and confidence of the rules in R − Rh should remain unchanged as much as possible • Integrating data protection and knowledge (rule) protection

  37. Coming up • Cryptographic protocols for privacy preserving distributed data mining

  38. Classification of current algorithms (hiding rules or hiding large itemsets) • Data modification • Data distortion: Algo1a, Algo1b, Algo2a, Algo2b, Algo2c, WSDA, PDA, Naïve, MinFIA, MaxFIA, IGA, RRA, RA, SWA, Border-Based, Integer-Programming, Sanitization-Matrix • Data blocking: CR, CR2, GIH • Data reconstruction: CIILM

  39. Weight-based Sorting Distortion Algorithm (WSDA) [Pontikakis 03] • High Level Description: • Input: • Initial Database • Set of Sensitive Rules • Safety Margin (for example 10%) • Output: • Sanitized Database • Sensitive Rules no longer hold in the Database

  40. WSDA Algorithm • High Level Description: • 1st step: • Retrieve the set of transactions that support the sensitive rule RS • For each sensitive rule RS, find the number N1 of transactions in which one item that supports the rule will be deleted

  41. WSDA Algorithm • High Level Description: • 2nd step: • For each rule Ri in the database with items in common with RS, compute a weight w that denotes how strong Ri is • For each transaction that supports RS, compute a priority Pi that denotes how many strong rules this transaction supports

  42. WSDA Algorithm • High Level Description: • 3rd step: • Sort the N1 transactions in ascending order according to their priority value Pi • 4th step: • For the first N1 transactions hide an item that is contained in RS

  43. WSDA Algorithm • High Level Description: • 5th step: • Update confidence and support values for other rules in the database
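
Putting the five steps together, a rough end-to-end sketch of the WSDA flow (the weight and priority definitions, the N1 computation, and the example data are simplified placeholders, not the exact formulation of Pontikakis et al.):

```python
import math

def support_count(db, items):
    return sum(items <= t for t in db)

def wsda_hide(db, rules, sensitive, mst, safety_margin=0.10):
    """Hide one sensitive rule (antecedent, consequent) by deleting an item
    from the lowest-priority supporting transactions."""
    ant, cons = sensitive
    rule_items = ant | cons
    db = [set(t) for t in db]

    # Step 1: supporting transactions and the number N1 of deletions needed to
    # push the rule's support below MST minus the safety margin.
    supporting = [t for t in db if rule_items <= t]
    target = (mst - safety_margin) * len(db)
    n1 = max(0, math.ceil(len(supporting) - target))

    # Step 2: weight of each rule sharing items with the sensitive rule (here
    # simply its support), and a priority per supporting transaction.
    related = [r for r in rules if (r[0] | r[1]) & rule_items and r != sensitive]
    def weight(rule):
        return support_count(db, rule[0] | rule[1]) / len(db)
    def priority(t):
        return sum(weight(r) for r in related if (r[0] | r[1]) <= t)

    # Steps 3-4: sort supporting transactions by ascending priority and delete
    # one item of the sensitive rule from the first N1 of them.
    supporting.sort(key=priority)
    for t in supporting[:n1]:
        t.discard(next(iter(cons)))

    # Step 5: support/confidence of the other rules would be updated here.
    return db

db = [{'A', 'B', 'C'}, {'A', 'C'}, {'A', 'C', 'D'}, {'B', 'C'}, {'A', 'B'}]
rules = [({'A'}, {'C'}), ({'A'}, {'B'})]
sanitized = wsda_hide(db, rules, ({'A'}, {'C'}), mst=0.5)
print(support_count(sanitized, {'A', 'C'}) / len(sanitized))   # below 0.5
```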

  44. Proposed Solution: Discussion • Sanitization algorithm • Compared with earlier popular data sanitization approaches: performs sanitization directly at the knowledge level of the data • Inverse frequent set mining algorithm • Deals with frequent and infrequent items separately: more efficient, and yields a large number of outputs • Our solution provides the user with a knowledge-level window to perform sanitization handily and generates a number of secure databases
