Presentation Transcript


  1. Final Review Lei Chen

  2. Classification Methods • Naïve Bayesian Networks

  3. Towards Naïve Bayes Classifier • Let D be a training set of tuples and their associated class labels, and each tuple is represented by an n-D attribute vector X = (x1, x2, …, xn) • Suppose there are m classes C1, C2, …, Cm • Classification is to derive the maximum posteriori, i.e., the maximal P(Ci|X) • This can be derived from Bayes’ theorem: P(Ci|X) = P(X|Ci) P(Ci) / P(X) • Since P(X) is constant for all classes, only P(X|Ci) P(Ci) needs to be maximized

  4. Derivation of Naïve Bayes Classifier • A simplified assumption: attributes are conditionally independent (i.e., no dependence relation between attributes): P(X|Ci) = P(x1|Ci) × P(x2|Ci) × … × P(xn|Ci) • This greatly reduces the computation cost: only counts the class distribution • If Ak is categorical, P(xk|Ci) is the # of tuples in Ci having value xk for Ak divided by |Ci,D| (# of tuples of Ci in D) • If Ak is continuous-valued, P(xk|Ci) is usually computed based on a Gaussian distribution with mean μ and standard deviation σ, g(x, μ, σ) = (1/(√(2π) σ)) exp(−(x − μ)²/(2σ²)), and P(xk|Ci) = g(xk, μCi, σCi)

  5. Naïve Bayes Classifier: Training Dataset Class: C1: buys_computer = ‘yes’ C2: buys_computer = ‘no’ Data to be classified: X = (age <= 30, Income = medium, Student = yes, Credit_rating = Fair)

  6. Naïve Bayes Classifier: An Example • P(Ci): P(buys_computer = “yes”) = 9/14 = 0.643 P(buys_computer = “no”) = 5/14 = 0.357 • Compute P(X|Ci) for each class P(age = “<=30” | buys_computer = “yes”) = 2/9 = 0.222 P(age = “<=30” | buys_computer = “no”) = 3/5 = 0.6 P(income = “medium” | buys_computer = “yes”) = 4/9 = 0.444 P(income = “medium” | buys_computer = “no”) = 2/5 = 0.4 P(student = “yes” | buys_computer = “yes”) = 6/9 = 0.667 P(student = “yes” | buys_computer = “no”) = 1/5 = 0.2 P(credit_rating = “fair” | buys_computer = “yes”) = 6/9 = 0.667 P(credit_rating = “fair” | buys_computer = “no”) = 2/5 = 0.4 • X = (age <= 30, income = medium, student = yes, credit_rating = fair) P(X|Ci): P(X|buys_computer = “yes”) = 0.222 x 0.444 x 0.667 x 0.667 = 0.044 P(X|buys_computer = “no”) = 0.6 x 0.4 x 0.2 x 0.4 = 0.019 P(X|Ci)*P(Ci): P(X|buys_computer = “yes”) * P(buys_computer = “yes”) = 0.028 P(X|buys_computer = “no”) * P(buys_computer = “no”) = 0.007 Therefore, X belongs to class (“buys_computer = yes”)
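The arithmetic on this slide can be reproduced with a short Python sketch. The conditional probabilities are hard-coded from the values listed above (normally they would be counted from the training set D); the variable names are illustrative.

```python
# Minimal sketch of the Naive Bayes decision for the example above.
priors = {"yes": 9 / 14, "no": 5 / 14}

# P(attribute value | class) for X = (age<=30, income=medium, student=yes, credit=fair)
likelihoods = {
    "yes": [2 / 9, 4 / 9, 6 / 9, 6 / 9],
    "no":  [3 / 5, 2 / 5, 1 / 5, 2 / 5],
}

scores = {}
for c in priors:
    p_x_given_c = 1.0
    for p in likelihoods[c]:
        p_x_given_c *= p              # conditional independence assumption
    scores[c] = p_x_given_c * priors[c]   # proportional to P(Ci | X)

print(scores)                          # {'yes': ~0.028, 'no': ~0.007}
print(max(scores, key=scores.get))     # 'yes'
```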

  7. Avoiding the Zero-Probability Problem • Naïve Bayesian prediction requires each conditional prob. to be non-zero. Otherwise, the predicted prob. will be zero • Ex. Suppose a dataset with 1000 tuples, income = low (0), income = medium (990), and income = high (10) • Use the Laplacian correction (or Laplacian estimator) • Adding 1 to each case: Prob(income = low) = 1/1003, Prob(income = medium) = 991/1003, Prob(income = high) = 11/1003 • The “corrected” prob. estimates are close to their “uncorrected” counterparts
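As a quick illustration, the sketch below applies the Laplacian correction to the counts from the example above: add 1 to each of the three income values and divide by 1000 + 3.

```python
# Laplacian correction for the income example above.
counts = {"low": 0, "medium": 990, "high": 10}
total = sum(counts.values())                      # 1000 tuples

smoothed = {v: (c + 1) / (total + len(counts)) for v, c in counts.items()}
print(smoothed)   # low: 1/1003, medium: 991/1003, high: 11/1003
```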

  8. Naïve Bayes Classifier: Comments • Advantages • Easy to implement • Good results obtained in most of the cases • Disadvantages • Assumption: class conditional independence, therefore loss of accuracy • Practically, dependencies exist among variables • E.g., hospital patients: profile (age, family history, etc.), symptoms (fever, cough, etc.), disease (lung cancer, diabetes, etc.) • Dependencies among these cannot be modeled by the Naïve Bayes Classifier • How to deal with these dependencies? Bayesian Belief Networks (Chapter 9)

  9. Clustering Algorithms • K-Means

  10. Partitioning Algorithms: Basic Concept • Partitioning method: partitioning a database D of n objects into a set of k clusters such that the sum of squared distances E = Σi=1..k Σp∈Ci dist(p, ci)² is minimized (where ci is the centroid or medoid of cluster Ci) • Given k, find a partition of k clusters that optimizes the chosen partitioning criterion • Global optimal: exhaustively enumerate all partitions • Heuristic methods: k-means and k-medoids algorithms • k-means (MacQueen’67, Lloyd’57/’82): each cluster is represented by the center of the cluster • k-medoids or PAM (Partitioning Around Medoids) (Kaufman & Rousseeuw’87): each cluster is represented by one of the objects in the cluster
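A minimal Python/NumPy sketch of the k-means heuristic described above follows; the function name and the initialization scheme are illustrative choices, not part of the original slides.

```python
import numpy as np

def kmeans(X, k, n_iter=100, seed=0):
    """Minimal Lloyd-style k-means: alternate assignment and centroid update."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)]      # k arbitrary initial centers
    for _ in range(n_iter):
        # assignment step: nearest center by squared Euclidean distance
        d = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
        labels = d.argmin(axis=1)
        # update step: each center becomes the mean of its cluster
        new_centers = np.array([X[labels == j].mean(axis=0) if np.any(labels == j)
                                else centers[j] for j in range(k)])
        if np.allclose(new_centers, centers):               # converged
            break
        centers = new_centers
    return labels, centers
```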

  11. Clustering Algorithms • K-Means • K-Medoids

  12. PAM (Partitioning Around Medoids) (1987) • PAM (Kaufman and Rousseeuw, 1987), built into S-PLUS • Uses real objects to represent the clusters • Select k representative objects arbitrarily • For each pair of a non-selected object h and a selected object i, calculate the total swapping cost TCih • For each pair of i and h, if TCih < 0, i is replaced by h • Then assign each non-selected object to the most similar representative object • Repeat steps 2-3 until there is no change
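The swap-based search can be sketched in Python as below, assuming a precomputed pairwise distance matrix; the greedy "accept the best improving swap" loop and all names are illustrative, not the original PAM pseudocode.

```python
import numpy as np

def pam(D, k, max_iter=100):
    """Sketch of PAM on a distance matrix D (n x n): swap a medoid with a
    non-selected object whenever the total swapping cost is negative."""
    n = len(D)
    medoids = list(range(k))                        # arbitrary initial representatives
    cost = D[:, medoids].min(axis=1).sum()          # sum of distances to nearest medoid
    for _ in range(max_iter):
        best = (0.0, None, None)                    # (delta cost, medoid slot, candidate h)
        for mi in range(k):
            for h in range(n):
                if h in medoids:
                    continue
                trial = medoids[:mi] + [h] + medoids[mi + 1:]
                delta = D[:, trial].min(axis=1).sum() - cost   # total swapping cost TC_ih
                if delta < best[0]:
                    best = (delta, mi, h)
        if best[1] is None:                         # no improving swap: stop
            break
        medoids[best[1]] = best[2]
        cost += best[0]
    labels = D[:, medoids].argmin(axis=1)           # assign to most similar representative
    return medoids, labels
```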

  13. Clustering Algorithms • K-Means • K-Medoids • Hierarchical Clustering

  14. Hierarchical Clustering [Figure: agglomerative clustering (AGNES) merges objects a, b, c, d, e bottom-up from step 0 to step 4; divisive clustering (DIANA) splits them top-down in the reverse direction] • Use distance matrix as clustering criteria. This method does not require the number of clusters k as an input, but needs a termination condition

  15. AGNES (Agglomerative Nesting) • Introduced in Kaufmann and Rousseeuw (1990) • Implemented in statistical packages, e.g., Splus • Use the single-link method and the dissimilarity matrix • Merge nodes that have the least dissimilarity • Go on in a non-descending fashion • Eventually all nodes belong to the same cluster

  16. Distance between Clusters Single link: smallest distance between an element in one cluster and an element in the other, i.e., dist(Ki, Kj) = min(dist(tip, tjq)) over tip ∈ Ki, tjq ∈ Kj Complete link: largest distance between an element in one cluster and an element in the other, i.e., dist(Ki, Kj) = max(dist(tip, tjq)) Average: average distance between an element in one cluster and an element in the other, i.e., dist(Ki, Kj) = avg(dist(tip, tjq)) Centroid: distance between the centroids of two clusters, i.e., dist(Ki, Kj) = dist(Ci, Cj) Medoid: distance between the medoids of two clusters, i.e., dist(Ki, Kj) = dist(Mi, Mj), where a medoid is a chosen, centrally located object in the cluster
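For concreteness, the first four inter-cluster distances listed above might be coded as follows (Python/NumPy, clusters given as arrays of points); the medoid variant is omitted since it needs the chosen medoids as input, and the function names are illustrative.

```python
import numpy as np

def single_link(A, B):    # smallest pairwise distance between the two clusters
    return min(np.linalg.norm(a - b) for a in A for b in B)

def complete_link(A, B):  # largest pairwise distance
    return max(np.linalg.norm(a - b) for a in A for b in B)

def average_link(A, B):   # average pairwise distance
    return np.mean([np.linalg.norm(a - b) for a in A for b in B])

def centroid_dist(A, B):  # distance between the cluster centroids
    return np.linalg.norm(A.mean(axis=0) - B.mean(axis=0))
```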

  17. Clustering Algorithms • K-Means • K-Medoids • Hierarchical Clustering • Density-based Clustering

  18. Density-Based Clustering: Basic Concepts • Two parameters: • Eps: maximum radius of the neighbourhood • MinPts: minimum number of points in an Eps-neighbourhood of that point • NEps(q): {p belongs to D | dist(p, q) ≤ Eps} • Directly density-reachable: a point p is directly density-reachable from a point q w.r.t. Eps, MinPts if • p belongs to NEps(q) • core point condition: |NEps(q)| ≥ MinPts [Figure: p lies within the Eps-neighbourhood of core point q; MinPts = 5, Eps = 1 cm]

  19. Density-Reachable and Density-Connected • Density-reachable: a point p is density-reachable from a point q w.r.t. Eps, MinPts if there is a chain of points p1, …, pn, p1 = q, pn = p such that pi+1 is directly density-reachable from pi • Density-connected: a point p is density-connected to a point q w.r.t. Eps, MinPts if there is a point o such that both p and q are density-reachable from o w.r.t. Eps and MinPts [Figure: a chain q = p1 → … → p illustrating density-reachability, and points p, q both density-reachable from o]

  20. DBSCAN: Density-Based Spatial Clustering of Applications with Noise [Figure: core, border, and outlier points; Eps = 1 cm, MinPts = 5] • Relies on a density-based notion of cluster: A cluster is defined as a maximal set of density-connected points • Discovers clusters of arbitrary shape in spatial databases with noise • A point is a core point if it has more than a specified number of points (MinPts) within Eps • These are points that are at the interior of a cluster • A border point has fewer than MinPts within Eps, but is in the neighborhood of a core point

  21. DBSCAN: The Algorithm • Arbitrarily select a point p • Retrieve all points density-reachable from p w.r.t. Eps and MinPts • If p is a core point, a cluster is formed • If p is a border point, no points are density-reachable from p and DBSCAN visits the next point of the database • Continue the process until all of the points have been processed
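A compact Python sketch of this procedure is given below; it is illustrative rather than the original pseudocode, and points never reached from a core point keep the label -1 (noise/outliers).

```python
import numpy as np

def dbscan(X, eps, min_pts):
    """Sketch of DBSCAN: expand a cluster from each unvisited core point."""
    n = len(X)
    labels = np.full(n, -1)            # -1 = noise / not yet assigned
    visited = np.zeros(n, dtype=bool)
    cluster = 0

    def region(i):                     # Eps-neighbourhood of point i (includes i itself)
        return np.where(np.linalg.norm(X - X[i], axis=1) <= eps)[0]

    for i in range(n):
        if visited[i]:
            continue
        visited[i] = True
        neighbours = list(region(i))
        if len(neighbours) < min_pts:  # not a core point (may still become a border point)
            continue
        labels[i] = cluster
        queue = neighbours
        while queue:
            j = queue.pop()
            if not visited[j]:
                visited[j] = True
                j_neighbours = region(j)
                if len(j_neighbours) >= min_pts:   # j is also a core point: keep expanding
                    queue.extend(j_neighbours)
            if labels[j] == -1:        # density-reachable: add to the current cluster
                labels[j] = cluster
        cluster += 1
    return labels
```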

  22. Clustering Algorithms • K-Means • K-Medoids • Hierarchical Clustering • Density-based Clustering • Fuzzy set-based Clustering • Measuring Clustering Quality

  23. Fuzzy (Soft) Clustering • Example: let cluster features be C1: “digital camera” and “lens”, C2: “computer” • Fuzzy clustering: k fuzzy clusters C1, …, Ck, represented as a partition matrix M = [wij] • P1: for each object oi and cluster Cj, 0 ≤ wij ≤ 1 (fuzzy set) • P2: for each object oi, Σj=1..k wij = 1 (equal participation in the clustering) • P3: for each cluster Cj, 0 < Σi=1..n wij < n (ensures there is no empty cluster) • Let c1, …, ck be the centers of the k clusters • For an object oi, the sum of squared error (SSE), where p is a parameter: SSE(oi) = Σj=1..k wij^p dist(oi, cj)² • For a cluster Cj, SSE(Cj) = Σi=1..n wij^p dist(oi, cj)² • Measure how well a clustering fits the data: SSE(C) = Σi=1..n Σj=1..k wij^p dist(oi, cj)²

  24. The EM (Expectation Maximization) Algorithm • The k-means algorithm has two steps at each iteration: • Expectation step (E-step): given the current cluster centers, each object is assigned to the cluster whose center is closest to the object (an object is expected to belong to the closest cluster) • Maximization step (M-step): given the cluster assignment, for each cluster the algorithm adjusts the center so that the sum of distances from the objects assigned to this cluster to the new center is minimized • The EM algorithm: a framework to approach maximum likelihood or maximum a posteriori estimates of parameters in statistical models • The E-step assigns objects to clusters according to the current fuzzy clustering or parameters of probabilistic clusters • The M-step finds the new clustering or parameters that maximize the sum of squared error (SSE) or the expected likelihood

  25. Fuzzy Clustering Using the EM Algorithm • Initially, let c1 = a and c2 = b • 1st E-step: assign each object o to the clusters by computing its membership weights w.r.t. c1 and c2 (the entries of the partition matrix) • 1st M-step: recalculate the centroids according to the partition matrix, minimizing the sum of squared error (SSE) • Iteratively calculate this until the cluster centers converge or the change is small enough
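One possible implementation of this E-/M-step alternation is the fuzzy c-means style sketch below (Python/NumPy, fuzzifier p > 1 as in the SSE definition above); the function name, initialization, and stopping rule are illustrative assumptions.

```python
import numpy as np

def fuzzy_cmeans(X, k, p=2, n_iter=100, tol=1e-6, seed=0):
    """Sketch of fuzzy clustering via EM-style alternation (fuzzy c-means flavour).
    E-step: recompute the partition matrix w from the current centers.
    M-step: recompute centers as weighted means, reducing the fuzzy SSE."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(n_iter):
        # E-step: membership weights, larger for closer centers
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        w = 1.0 / (d ** (2 / (p - 1)))
        w /= w.sum(axis=1, keepdims=True)          # each row sums to 1 (property P2)
        # M-step: centers as w^p-weighted means of the objects
        new_centers = (w.T ** p @ X) / (w.T ** p).sum(axis=1, keepdims=True)
        if np.linalg.norm(new_centers - centers) < tol:
            break
        centers = new_centers
    return w, centers
```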

  26. Clustering Algorithms • K-Means • K-Medoids • Hierarchical Clustering • Density-based Clustering • Fuzzy set-based Clustering • Probabilistic Model-Based Clustering • Measuring Clustering Quality

  27. Model-Based Clustering • A set C of k probabilistic clusters C1, …, Ck with probability density functions f1, …, fk, respectively, and their probabilities ω1, …, ωk • Probability of an object o generated by cluster Cj is P(o|Cj) = ωj fj(o) • Probability of o generated by the set of clusters C is P(o|C) = Σj=1..k ωj fj(o) • Since objects are assumed to be generated independently, for a data set D = {o1, …, on}, we have P(D|C) = Πi=1..n P(oi|C) = Πi=1..n Σj=1..k ωj fj(oi) • Task: find a set C of k probabilistic clusters s.t. P(D|C) is maximized • However, maximizing P(D|C) is often intractable since the probability density function of a cluster can take an arbitrarily complicated form • To make it computationally feasible (as a compromise), assume the probability density functions are some parameterized distributions

  28. Univariate Gaussian Mixture Model • O = {o1, …, on} (n observed objects), Θ = {θ1, …, θk} (parameters of the k distributions), and Pj(oi|θj) is the probability that oi is generated from the j-th distribution using parameter θj; we have P(O|Θ) = Πi=1..n P(oi|Θ), where P(oi|Θ) = Σj=1..k Pj(oi|θj) • Univariate Gaussian mixture model: assume the probability density function of each cluster follows a 1-d Gaussian distribution. Suppose that there are k clusters • The probability density function of each cluster is centered at μj with standard deviation σj; with θj = (μj, σj), we have Pj(oi|θj) = (1/(√(2π) σj)) exp(−(oi − μj)²/(2σj²))

  29. Computing Mixture Models with EM • Given n objects O = {o1, …, on}, we want to mine a set of parameters Θ = {θ1, …, θk} s.t. P(O|Θ) is maximized, where θj = (μj, σj) are the mean and standard deviation of the j-th univariate Gaussian distribution • We initially assign random values to the parameters θj, then iteratively conduct the E- and M-steps until convergence or a sufficiently small change • At the E-step, for each object oi, calculate the probability that oi belongs to each distribution: P(Θj|oi, Θ) = P(oi|θj) / Σl=1..k P(oi|θl) • At the M-step, adjust the parameters θj = (μj, σj) so that the expected likelihood P(O|Θ) is maximized: μj = Σi oi P(Θj|oi, Θ) / Σi P(Θj|oi, Θ) and σj = √( Σi P(Θj|oi, Θ)(oi − μj)² / Σi P(Θj|oi, Θ) )
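The E- and M-steps above translate into the following Python/NumPy sketch for a univariate mixture; it assumes equal cluster weights, as in the formulas above, and the function name and initialization are illustrative.

```python
import numpy as np

def em_gmm_1d(o, k, n_iter=200, seed=0):
    """Sketch of EM for a univariate Gaussian mixture with equal cluster weights."""
    rng = np.random.default_rng(seed)
    mu = rng.choice(o, k, replace=False).astype(float)   # random initial means
    sigma = np.full(k, o.std() + 1e-6)                   # common initial spread
    for _ in range(n_iter):
        # E-step: probability that each object belongs to each distribution
        dens = np.exp(-(o[:, None] - mu) ** 2 / (2 * sigma ** 2)) / (np.sqrt(2 * np.pi) * sigma)
        resp = dens / dens.sum(axis=1, keepdims=True)
        # M-step: re-estimate (mu_j, sigma_j) to maximize the expected likelihood
        nj = resp.sum(axis=0)
        mu = (resp * o[:, None]).sum(axis=0) / nj
        sigma = np.sqrt((resp * (o[:, None] - mu) ** 2).sum(axis=0) / nj) + 1e-9
    return mu, sigma, resp
```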

  30. Frequent Item Sets • Brute-Force Solution

  31. Frequent Itemset Generation • Brute-force approach: • Each itemset in the lattice is a candidate frequent itemset • Count the support of each candidate by scanning the database • Match each transaction against every candidate • Complexity ~ O(NMw), where N is the number of transactions, M the number of candidates, and w the maximum transaction width => expensive since M = 2^d for d items!!!

  32. Frequent Itemset Generation Strategies • Reduce the number of candidates (M) • Complete search: M = 2^d • Use pruning techniques to reduce M • Reduce the number of transactions (N) • Reduce size of N as the size of itemset increases • Used by DHP and vertical-based mining algorithms • Reduce the number of comparisons (NM) • Use efficient data structures to store the candidates or transactions • No need to match every candidate against every transaction

  33. Frequent Item Sets • Brute-Force Solution • Apriori Property and Algorithm

  34. Reducing Number of Candidates • Apriori principle: • If an itemset is frequent, then all of its subsets must also be frequent • Apriori principle holds due to the following property of the support measure: • Support of an itemset never exceeds the support of its subsets • This is known as the anti-monotone property of support

  35. Apriori Algorithm • Method: • Let k=1 • Generate frequent itemsets of length 1 • Repeat until no new frequent itemsets are identified • Generate length (k+1) candidate itemsets from length k frequent itemsets • Prune candidate itemsets containing subsets of length k that are infrequent • Count the support of each candidate by scanning the DB • Eliminate candidates that are infrequent, leaving only those that are frequent
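The level-wise method above can be sketched in Python as follows (illustrative, using relative support); it mirrors the generate / prune / count / eliminate loop of the slide.

```python
from itertools import combinations

def apriori(transactions, minsup):
    """Sketch of Apriori: generate length-(k+1) candidates from length-k frequent
    itemsets, prune by the Apriori property, count support, keep the frequent ones."""
    transactions = [frozenset(t) for t in transactions]

    def support(itemset):
        return sum(itemset <= t for t in transactions) / len(transactions)

    # frequent 1-itemsets
    items = {i for t in transactions for i in t}
    frequent = [{frozenset([i]) for i in items if support(frozenset([i])) >= minsup}]
    k = 1
    while frequent[-1]:
        # candidate generation: join frequent k-itemsets that share k-1 items
        cands = {a | b for a in frequent[-1] for b in frequent[-1] if len(a | b) == k + 1}
        # candidate pruning: drop candidates with an infrequent k-subset
        cands = {c for c in cands
                 if all(frozenset(s) in frequent[-1] for s in combinations(c, k))}
        # support counting and elimination of infrequent candidates
        frequent.append({c for c in cands if support(c) >= minsup})
        k += 1
    return [itemset for level in frequent for itemset in level]
```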

  36. Frequent Item Sets • Brute-Force Solution • Apriori Property and Algorithm • Hashing Tree

  37. Reducing Number of Comparisons • Candidate counting: • Scan the database of transactions to determine the support of each candidate itemset • To reduce the number of comparisons, store the candidates in a hash structure • Instead of matching each transaction against every candidate, match it against candidates contained in the hashed buckets

  38. Generate Hash Tree [Figure: hash tree built with a hash function that sends items 1,4,7 / 2,5,8 / 3,6,9 to the left / middle / right branch, with the 15 candidate 3-itemsets distributed over the leaf nodes] • Suppose you have 15 candidate itemsets of length 3: • {1 4 5}, {1 2 4}, {4 5 7}, {1 2 5}, {4 5 8}, {1 5 9}, {1 3 6}, {2 3 4}, {5 6 7}, {3 4 5}, {3 5 6}, {3 5 7}, {6 8 9}, {3 6 7}, {3 6 8} • You need: • Hash function • Max leaf size: max number of itemsets stored in a leaf node (if number of candidate itemsets exceeds max leaf size, split the node)
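One way to build such a hash tree in Python is sketched below; the hash function follows the slide (1,4,7 / 2,5,8 / 3,6,9), while the max leaf size of 3 and the dict/list data layout are assumptions made for illustration.

```python
# Sketch of a hash tree for the 15 candidate 3-itemsets above.
MAX_LEAF = 3   # assumed max leaf size

def bucket(item):
    return (item - 1) % 3        # 1,4,7 -> 0; 2,5,8 -> 1; 3,6,9 -> 2

def insert(node, itemset, depth=0):
    """node is either a leaf (list of itemsets) or an internal node (dict of children);
    at depth d the tree hashes on the d-th item of the candidate."""
    if isinstance(node, dict):
        b = bucket(itemset[depth])
        node[b] = insert(node.get(b, []), itemset, depth + 1)
        return node
    node.append(itemset)
    if len(node) > MAX_LEAF and depth < len(itemset):    # split an overfull leaf
        new_node = {}
        for it in node:
            insert(new_node, it, depth)
        return new_node
    return node

candidates = [(1,4,5),(1,2,4),(4,5,7),(1,2,5),(4,5,8),(1,5,9),(1,3,6),(2,3,4),
              (5,6,7),(3,4,5),(3,5,6),(3,5,7),(6,8,9),(3,6,7),(3,6,8)]
tree = {}
for c in candidates:
    insert(tree, c, 0)
```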

  39. Maximal Frequent Itemset • An itemset is maximal frequent if none of its immediate supersets is frequent [Figure: itemset lattice with the border separating frequent from infrequent itemsets; the maximal frequent itemsets lie just inside the border]

  40. Closed Itemset • An itemset is closed if none of its immediate supersets has the same support as the itemset

  41. Frequent Item Sets • Brute-Force Solution • Apriori Property and Algorithm • Hashing Tree • FP-tree

  42. FP-tree • Scan the database once to store all essential information in a data structure called FP-tree (Frequent Pattern Tree) • The FP-tree is concise and is used in directly generating large itemsets

  43. FP-tree Step 1: Deduce the ordered frequent items. For items with the same frequency, the order is given by the alphabetical order. Step 2: Construct the FP-tree from the above data Step 3: From the FP-tree above, construct the FP-conditional tree for each item (or itemset). Step 4: Determine the frequent patterns.
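Steps 1 and 2 above can be sketched in Python as follows; the class and function names are illustrative, and a header table is kept so that the conditional FP-trees of steps 3-4 could be derived from it.

```python
from collections import Counter, defaultdict

class FPNode:
    def __init__(self, item, parent):
        self.item, self.parent = item, parent
        self.count = 0
        self.children = {}

def build_fp_tree(transactions, minsup_count):
    """Sketch of FP-tree construction: order frequent items by descending frequency
    (ties broken alphabetically) and insert each transaction as a path,
    sharing common prefixes."""
    freq = Counter(i for t in transactions for i in t)
    freq = {i: c for i, c in freq.items() if c >= minsup_count}
    order = sorted(freq, key=lambda i: (-freq[i], i))        # step 1: ordered frequent items
    rank = {i: r for r, i in enumerate(order)}

    root = FPNode(None, None)
    header = defaultdict(list)                               # item -> its nodes in the tree
    for t in transactions:                                   # step 2: insert each transaction
        node = root
        for item in sorted((i for i in t if i in rank), key=rank.get):
            if item not in node.children:
                node.children[item] = FPNode(item, node)
                header[item].append(node.children[item])
            node = node.children[item]
            node.count += 1
    return root, header, order
```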

  44. Frequent Item Sets • Brute-Force Solution • Apriori Property and Algorithm • Hashing Tree • FP-tree • Continuous and Categorical Attributes

  45. Frequent Item Sets • Brute-Force Solution • Apriori Property and Algorithm • Hashing Tree • FP-tree • Continuous and Categorical Attributes • Sequence Pattern Mining

  46. Sequential Pattern Mining: Definition • Given: • a database of sequences • a user-specified minimum support threshold, minsup • Task: • Find all subsequences with support ≥ minsup

  47. Sequential Pattern Mining: Challenge

  48. Sequential Pattern Mining: Example Minsup = 50% Examples of Frequent Subsequences: < {1,2} > s=60% < {2,3} > s=60% < {2,4}> s=80% < {3} {5}> s=80% < {1} {2} > s=80% < {2} {2} > s=60% < {1} {2,3} > s=60% < {2} {2,3} > s=60% < {1,2} {2,3} > s=60%

  49. Generalized Sequential Pattern (GSP) • Step 1: • Make the first pass over the sequence database D to yield all the 1-element frequent sequences • Step 2: Repeat until no new frequent sequences are found • Candidate Generation: • Merge pairs of frequent subsequences found in the (k-1)th pass to generate candidate sequences that contain k items • Candidate Pruning: • Prune candidate k-sequences that contain infrequent (k-1)-subsequences • Support Counting: • Make a new pass over the sequence database D to find the support for these candidate sequences • Candidate Elimination: • Eliminate candidate k-sequences whose actual support is less than minsup

  50. Candidate Generation • Base case (k=2): • Merging two frequent 1-sequences <{i1}> and <{i2}> will produce two candidate 2-sequences: <{i1} {i2}> and <{i1 i2}> • General case (k>2): • A frequent (k-1)-sequence w1 is merged with another frequent (k-1)-sequence w2 to produce a candidate k-sequence if the subsequence obtained by removing the first event in w1 is the same as the subsequence obtained by removing the last event in w2 • The resulting candidate after merging is given by the sequence w1 extended with the last event of w2. • If the last two events in w2 belong to the same element, then the last event in w2 becomes part of the last element in w1 • Otherwise, the last event in w2 becomes a separate element appended to the end of w1
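The merge rule above can be expressed as the Python sketch below, with a sequence represented as a list of elements and each element as a tuple of events (items); the helper names are illustrative.

```python
def drop_first_event(seq):
    """Remove the first event (item) from a sequence of elements."""
    head = seq[0][1:]
    return ([head] if head else []) + list(seq[1:])

def drop_last_event(seq):
    """Remove the last event (item) from a sequence of elements."""
    tail = seq[-1][:-1]
    return list(seq[:-1]) + ([tail] if tail else [])

def gsp_merge(w1, w2):
    """GSP candidate generation: merge two frequent (k-1)-sequences if w1 minus its
    first event equals w2 minus its last event."""
    if drop_first_event(w1) != drop_last_event(w2):
        return None
    last = w2[-1][-1]
    if len(w2[-1]) > 1:                      # last two events of w2 share an element
        return list(w1[:-1]) + [w1[-1] + (last,)]
    return list(w1) + [(last,)]              # otherwise append as a separate element

# e.g. merging <{1}{2 3}> with <{2 3}{4}> gives the candidate <{1}{2 3}{4}>
print(gsp_merge([(1,), (2, 3)], [(2, 3), (4,)]))
```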
