
Clustering


Presentation Transcript


  1. Clustering Instructor: Qiang Yang Hong Kong University of Science and Technology Qyang@cs.ust.hk Thanks: J.W. Han, I. Witten, E. Frank

  2. Essentials • Terminology: • Objects = rows = records • Variables = attributes = features • A good clustering method • high on intra-class similarity and low on inter-class similarity • What is similarity? • Based on computation of distance • Between two numerical attributes • Between two nominal attributes • Mixed attributes
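To make the distance computation concrete, here is a minimal Python sketch (not from the lecture; the function and attribute names are illustrative) of a mixed-attribute dissimilarity: numerical attributes contribute a range-normalised difference, nominal attributes a simple 0/1 mismatch.

def dissimilarity(x, y, numeric_ranges):
    """x, y: dicts of attribute -> value; numeric_ranges: attribute -> (min, max) for numerical attributes."""
    total = 0.0
    for attr in x:
        a, b = x[attr], y[attr]
        if attr in numeric_ranges:                  # numerical attribute:
            lo, hi = numeric_ranges[attr]           # normalise the absolute
            total += abs(a - b) / (hi - lo)         # difference to [0, 1]
        else:                                       # nominal attribute:
            total += 0.0 if a == b else 1.0         # simple 0/1 mismatch
    return total / len(x)                           # average over attributes

x = {"age": 30, "income": 40000, "colour": "red"}
y = {"age": 40, "income": 60000, "colour": "blue"}
print(dissimilarity(x, y, {"age": (0, 100), "income": (0, 100000)}))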

  3. The database (figure: the object-attribute data table, with one row per object i)

  4. Major clustering methods • Partition based (K-means) • Produces sphere-like clusters • Good when • the number of clusters is known, • small and medium-sized databases • Hierarchical methods (agglomerative or divisive) • Produce trees of clusters • Fast • Density based (DBSCAN) • Produces arbitrarily shaped clusters • Good when dealing with spatial clusters (maps) • Grid-based • Produces clusters based on grids • Fast for large, multidimensional databases • Model-based • Based on statistical models • Allows objects to belong to several clusters

  5. The K-Means Clustering Method: for numerical attributes • Given k, the k-means algorithm is implemented in four steps: • Partition objects into k non-empty subsets • Compute seed points as the centroids of the clusters of the current partition (the centroid is the center, i.e., mean point, of the cluster) • Assign each object to the cluster with the nearest seed point • Go back to Step 2; stop when there are no new assignments
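The four steps above can be sketched in a few lines of Python (a minimal illustration on plain lists of numeric vectors, not the instructor's code; here the initial centroids are simply k random objects, a common variant of step 1):

import random

def k_means(points, k, max_iter=100):
    """Minimal k-means on a list of numeric vectors (lists of floats)."""
    centroids = random.sample(points, k)          # pick k initial seed points
    for _ in range(max_iter):
        clusters = [[] for _ in range(k)]
        for p in points:                          # assign each object to the nearest seed
            d = [sum((a - b) ** 2 for a, b in zip(p, c)) for c in centroids]
            clusters[d.index(min(d))].append(p)
        new_centroids = [
            [sum(col) / len(cl) for col in zip(*cl)] if cl else centroids[i]
            for i, cl in enumerate(clusters)      # recompute centroids as cluster means
        ]
        if new_centroids == centroids:            # stop when nothing changes
            break
        centroids = new_centroids
    return centroids, clusters

print(k_means([[1, 1], [1, 2], [8, 8], [9, 8]], k=2))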

  6. The mean point • The mean point can be a virtual point, i.e., it need not coincide with any actual data object

  7. The K-Means Clustering Method • Example (figure: K = 2; arbitrarily choose K objects as the initial cluster centers, assign each object to the most similar center, update the cluster means, reassign, and repeat the update/reassign steps until no object changes cluster)

  8. Comments on the K-Means Method • Strength: Relatively efficient: O(tkn), where n is # objects, k is # clusters, and t is # iterations. Normally, k, t << n. • Comment: Often terminates at a local optimum. • Weakness • Applicable only when a mean is defined; what about categorical data? • Need to specify k, the number of clusters, in advance • Unable to handle noisy data and outliers too well • Not suitable for discovering clusters with non-convex shapes

  9. Robustness

  10. Variations of the K-Means Method • A few variants of the k-means which differ in • Selection of the initial k means • Dissimilarity calculations • Strategies to calculate cluster means • Handling categorical data: k-modes (Huang’98) • Replacing means of clusters with modes • Using new dissimilarity measures to deal with categorical objects • Using a frequency-based method to update modes of clusters • A mixture of categorical and numerical data: k-prototype method
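As a rough illustration of the k-modes idea (a Python sketch with hypothetical helper names, not code from Huang's paper): the dissimilarity is the number of mismatched categorical attributes, and the cluster "mode" is the most frequent value of each attribute in the cluster.

from collections import Counter

def matching_dissimilarity(x, y):
    """k-modes-style dissimilarity: count of attributes on which two records differ."""
    return sum(1 for a, b in zip(x, y) if a != b)

def cluster_mode(records):
    """Frequency-based mode: the most common value of each attribute in a cluster."""
    return [Counter(col).most_common(1)[0][0] for col in zip(*records)]

records = [("red", "small"), ("red", "large"), ("blue", "small")]
print(cluster_mode(records))                                          # ['red', 'small']
print(matching_dissimilarity(("red", "small"), ("blue", "small")))    # 1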

  11. K-Modes: See J. X. Huang’s paper online (Data Mining and Knowledge Discovery Journal, Springer)

  12. Formalization of K-Means

  13. K-Means: Cont.

  14. K-Modes: See J. X. Huang’s paper online (Data Mining and Knowledge Discovery Journal, Springer)

  15. K-Modes (Cont.)

  16. K-Modes

  17. K-Modes: Cost Function

  18. Finding K-Modes

  19. Mixed Types: K-Prototypes

  20. K-Modes: Evaluation Data

  21. K-Modes: Evaluation

  22. Some Experiments

  23. What is the problem of the k-Means Method? • The k-means algorithm is sensitive to outliers, since an object with an extremely large value may substantially distort the distribution of the data. • K-Medoids: instead of taking the mean value of the objects in a cluster as a reference point, a medoid can be used: the most centrally located object in a cluster.

  24. The K-Medoids Clustering Method • Find representative objects, called medoids, in clusters • Medoids are located in the center of the clusters • Given data points, how do we find the medoid? (see the sketch below)
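One simple brute-force answer to that question, as a Python sketch (illustrative names, not the lecture's code): the medoid is the cluster member whose total distance to all other members is smallest.

def medoid(points):
    """The medoid: the cluster member with the smallest total distance to the others."""
    def dist(p, q):
        return sum((a - b) ** 2 for a, b in zip(p, q)) ** 0.5
    return min(points, key=lambda p: sum(dist(p, q) for q in points))

print(medoid([[1, 1], [1, 2], [2, 2], [9, 9]]))   # an actual data point, not a virtual mean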

  25. K-Medoids: most centrally located objects

  26. CLARA

  27. CLASA: Simulated Annealing

  28. Sampling based method: MCMRS

  29. K-Medoids: Evaluation

  30. Density-Based Clustering Methods • Clustering based on density (local cluster criterion), such as density-connected points • Major features: • Discover clusters of arbitrary shape • Handle noise • One scan • Need density parameters as termination condition • Several interesting studies: • DBSCAN: Ester et al. (KDD'96) • OPTICS: Ankerst et al. (SIGMOD'99) • DENCLUE: Hinneburg & Keim (KDD'98) • CLIQUE: Agrawal et al. (SIGMOD'98)

  31. Density-Based Clustering • Clustering based on density (local cluster criterion), such as density-connected points • Each cluster has a considerably higher density of points than the area outside of the cluster

  32. Density-Based Clustering: Background • Two parameters: • e (Eps): maximum radius of the neighbourhood • MinPts: minimum number of points in an Eps-neighbourhood of that point • Ne(p) = {q in D | dist(p, q) <= e} • Directly density-reachable: a point p is directly density-reachable from a point q wrt. e, MinPts if • 1) p belongs to Ne(q) • 2) core point condition: |Ne(q)| >= MinPts • (figure: p lies in the e = 1 cm neighbourhood of q, with MinPts = 5)

  33. Density-Based Clustering: Background (II) • Density-reachable: • A point p is density-reachable from a point q wrt. e, MinPts if there is a chain of points p1, …, pn with p1 = q and pn = p such that pi+1 is directly density-reachable from pi • Density-connected: • A point p is density-connected to a point q wrt. e, MinPts if there is a point o such that both p and q are density-reachable from o wrt. e and MinPts • (figures: a chain of points linking q to p; a point o from which both p and q are density-reachable)

  34. DBSCAN: Density-Based Spatial Clustering of Applications with Noise • Relies on a density-based notion of cluster: a cluster is defined as a maximal set of density-connected points • Discovers clusters of arbitrary shape in spatial databases with noise • (figure: core, border, and outlier points for Eps = 1 cm, MinPts = 5)

  35. DBSCAN: The Algorithm • Arbitrarily select a point p • Retrieve all points density-reachable from p wrt. e and MinPts • If p is a core point, a cluster is formed • If p is a border point, no points are density-reachable from p and DBSCAN visits the next point of the database • Continue the process until all of the points have been processed
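A compact Python sketch of this procedure (an illustration under the definitions of slides 32-34, not a reference implementation); label -1 marks noise/outliers, and the neighbour search is a naive scan:

def dbscan(points, eps, min_pts):
    """DBSCAN sketch; returns one cluster id per point (-1 = noise/outlier)."""
    def neighbours(i):
        return [j for j, q in enumerate(points)
                if sum((a - b) ** 2 for a, b in zip(points[i], q)) ** 0.5 <= eps]

    labels = [None] * len(points)
    cluster = -1
    for i in range(len(points)):
        if labels[i] is not None:
            continue
        seeds = neighbours(i)
        if len(seeds) < min_pts:               # not a core point: provisionally noise
            labels[i] = -1
            continue
        cluster += 1                           # start a new cluster from this core point
        labels[i] = cluster
        queue = [j for j in seeds if j != i]
        while queue:                           # expand via density-reachable points
            j = queue.pop()
            if labels[j] == -1:                # border point previously marked as noise
                labels[j] = cluster
            if labels[j] is not None:
                continue
            labels[j] = cluster
            j_neighbours = neighbours(j)
            if len(j_neighbours) >= min_pts:   # j is itself a core point: keep expanding
                queue.extend(j_neighbours)
    return labels

# The naive neighbour search above is O(n^2); a spatial index is what gives the
# O(n log n) behaviour mentioned on the next slide.
print(dbscan([[1, 1], [1, 2], [2, 2], [8, 8], [8, 9], [9, 9], [20, 20]], eps=2.0, min_pts=3))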

  36. DBSCAN Properties • Generally takes O(n log n) time • Still requires the user to supply MinPts and e • Advantages • Can find clusters of arbitrary shape • Requires only a minimal number of parameters (two)

  37. Model-Based Clustering Methods • Attempt to optimize the fit between the data and some mathematical model • Statistical and AI approaches • Conceptual clustering • A form of clustering in machine learning • Produces a classification scheme for a set of unlabeled objects • Finds a characteristic description for each concept (class) • COBWEB (Fisher'87) • A popular and simple method of incremental conceptual learning • Creates a hierarchical clustering in the form of a classification tree • Each node refers to a concept and contains a probabilistic description of that concept

  38. The COBWEB Conceptual Clustering Algorithm • The COBWEB algorithm was developed by D. Fisher in 1987 for clustering objects in an object-attribute data set • Fisher, Douglas H. (1987). Knowledge Acquisition Via Incremental Conceptual Clustering • The COBWEB algorithm yields a classification tree that characterizes each cluster with a probabilistic description • Probabilistic description of a node: (fish, prob = 0.92) • Properties: • An incremental clustering algorithm, based on probabilistic categorization trees • The search for a good clustering is guided by a quality measure for partitions of data • COBWEB only supports nominal attributes; CLASSIT is the version that works with nominal and numerical attributes

  39. The Classification Tree Generated by the COBWEB Algorithm

  40. Can automatically guess the class attribute • That is, after clustering, each cluster more or less corresponds to one value of the Play = Yes/No category • Example: applied to the vote data set, it can correctly guess the party of a senator based on the past 14 votes! • Input: a set of data like before

  41. Clustering: COBWEB • In the beginning the tree consists of an empty node • Instances are added one by one, and the tree is updated appropriately at each stage • Updating involves finding the right leaf for an instance (possibly restructuring the tree) • Updating decisions are based on partition utility and category utility measures

  42. Clustering: COBWEB • This probability, P(Ai = Vij | Ck): the larger it is, the greater the proportion of class members sharing the value Vij and the more predictable the value is of class members.

  43. Clustering: COBWEB • This probability, P(Ck | Ai = Vij): the larger it is, the fewer the objects that share this value Vij and the more predictive the value is of class Ck.

  44. Clustering: COBWEB • The formula is a trade-off between intra-class similarity and inter-class dissimilarity, summed across all classes (k), attributes (i), and values (j).

  45. Clustering: COBWEB

  46. Clustering: COBWEB • Increase in the expected number of attribute values that can be correctly guessed given the clustering (posterior probability) • over the expected number of correct guesses given no such knowledge (prior probability)

  47. The Category Utility Function • The COBWEB algorithm operates based on the so-called category utility function (CU), which measures clustering quality. • If we partition a set of objects into m clusters, then the CU of this particular partition is given below. • Question: Why divide by m? Hint: if m = #objects, CU is maximal!
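The formula shown on the slide is presumably the standard category utility from Fisher's paper; in the notation of the surrounding slides it reads

CU(C_1, \dots, C_m) = \frac{1}{m} \sum_{k=1}^{m} P(C_k) \left[ \sum_i \sum_j P(A_i = V_{ij} \mid C_k)^2 - \sum_i \sum_j P(A_i = V_{ij})^2 \right]

i.e., the expected gain in correctly guessed attribute values from knowing the cluster, weighted by P(Ck) and averaged over the m clusters.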

  48. Insights of the CU Function • For a given object in cluster Ck, if we guess its attribute values according to their probabilities of occurring, then the expected number of attribute values that we can correctly guess is given below.
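The missing expression is, in the same notation (a reconstruction, not copied from the slide),

\sum_i \sum_j P(A_i = V_{ij} \mid C_k)^2

and the corresponding expected count without any knowledge of the clustering is \sum_i \sum_j P(A_i = V_{ij})^2, which is exactly what the CU function subtracts.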

  49. Finite mixtures • Probabilistic clustering algorithms model the data using a mixture of distributions • Each cluster is represented by one distribution • The distribution governs the probabilities of attribute values in the corresponding cluster • They are called finite mixtures because only a finite number of clusters is represented • Usually the individual distributions are normal distributions • Distributions are combined using cluster weights

  50. A two-class mixture model • Data (class, value): A 51, A 43, B 62, B 64, A 45, A 42, A 46, A 45, A 45, B 62, A 47, A 52, B 64, A 51, B 65, A 48, A 49, A 46, B 64, A 51, A 52, B 62, A 49, A 48, B 62, A 43, A 40, A 48, B 64, A 51, B 63, A 43, B 65, B 66, B 65, A 46, A 39, B 62, B 64, A 52, B 63, B 64, A 48, B 64, A 48, A 51, A 48, B 64, A 42, A 48, A 41 • Model: μA = 50, σA = 5, pA = 0.6; μB = 65, σB = 2, pB = 0.4
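To see how such a finite mixture is used, here is a minimal Python sketch (illustrative only, not from the lecture) that scores a single value x = 52 under the two weighted normal components of the slide's model and turns the result into a cluster-membership probability:

import math

def normal_pdf(x, mu, sigma):
    return math.exp(-(x - mu) ** 2 / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))

# Mixture parameters taken from the slide's model.
mu_a, sigma_a, p_a = 50, 5, 0.6
mu_b, sigma_b, p_b = 65, 2, 0.4

x = 52
like_a = p_a * normal_pdf(x, mu_a, sigma_a)          # weighted likelihood under cluster A
like_b = p_b * normal_pdf(x, mu_b, sigma_b)          # weighted likelihood under cluster B
print("P(A | x=52) =", like_a / (like_a + like_b))   # posterior cluster membership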
