
Clustering (2)



  1. Clustering (2)

  2. Hierarchical Clustering • Produces a set of nested clusters organized as a hierarchical tree • Can be visualized as a dendrogram • A tree-like diagram that records the sequence of merges or splits

  3. Strengths of Hierarchical Clustering • Do not have to assume any particular number of clusters: any desired number can be obtained by 'cutting' the dendrogram at the proper level • The clusters may correspond to meaningful taxonomies • Examples in the biological sciences, e.g., • animal kingdom, • phylogeny reconstruction, • …

  4. Hierarchical Clustering • Start with the points as individual clusters • At each step, merge the closest pair of clusters, until only one cluster is left • Basic algorithm:
       Let each data point be a cluster
       Compute the proximity matrix
       Repeat
         Merge the two closest clusters
         Update the proximity matrix
       Until only a single cluster remains
  • The key operation is the computation of the proximity of two clusters
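A minimal sketch of this procedure in Python (not from the original slides): it builds the proximity matrix, then repeatedly merges the two closest clusters. The function name `agglomerative` and the sample points are illustrative, and the matrix update shown here uses the minimum pairwise distance, one of the criteria discussed on the following slides.

```python
import numpy as np

def agglomerative(points):
    """Merge clusters pairwise until one remains; return the merge history."""
    n = len(points)
    # Proximity matrix: Euclidean distances between all pairs of points.
    prox = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    np.fill_diagonal(prox, np.inf)           # a cluster is never merged with itself
    clusters = {i: [i] for i in range(n)}    # each data point starts as its own cluster
    merges = []
    while len(clusters) > 1:
        # Find the two closest clusters (smallest entry among the active rows/columns).
        ids = list(clusters)
        sub = prox[np.ix_(ids, ids)]
        a, b = np.unravel_index(np.argmin(sub), sub.shape)
        ca, cb = ids[a], ids[b]
        merges.append((ca, cb, float(sub[a, b])))
        # Merge cb into ca, then update the proximity matrix; here the new
        # proximity is the minimum of the two old ones (the "MIN" criterion).
        clusters[ca].extend(clusters.pop(cb))
        prox[ca, :] = np.minimum(prox[ca, :], prox[cb, :])
        prox[:, ca] = prox[ca, :]
        prox[ca, ca] = np.inf
        prox[cb, :] = np.inf                 # retire the merged-away cluster
        prox[:, cb] = np.inf
    return merges

points = np.array([[0.0, 0.0], [0.1, 0.2], [4.0, 4.0], [4.1, 3.9]])
print(agglomerative(points))
```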

  5. Starting Situation • Start with clusters of individual points and a proximity matrix [Figure: points p1, p2, p3, … and the corresponding proximity matrix]

  6. Intermediate Situation • After some merging steps, we have some clusters [Figure: clusters C1–C5 and the corresponding proximity matrix]

  7. Intermediate Situation • We want to merge the two closest clusters (C2 and C5) and update the proximity matrix [Figure: clusters C1–C5 and the corresponding proximity matrix]

  8. After Merging • The question is: "How do we update the proximity matrix?" [Figure: proximity matrix after merging C2 and C5; the entries between C2 ∪ C5 and the other clusters are marked "?"]

  9.–12. How to Define Inter-Cluster Similarity (Distance)? • MIN • MAX • Group Average [Figures: the same pair of example clusters and the proximity matrix, with each slide highlighting one of the criteria in turn]

  13. Cluster Similarity: MIN • Similarity of two clusters is based on the two most similar (closest) points in the different clusters • Determined by one pair of points
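As a quick illustration (not from the slides; the names and sample data are made up), the MIN proximity of two clusters is simply the smallest pairwise distance between them:

```python
import numpy as np

def min_link(A, B):
    """MIN / single-link proximity: distance of the closest pair across A and B."""
    return float(np.min(np.linalg.norm(A[:, None, :] - B[None, :, :], axis=-1)))

A = np.array([[0.0, 0.0], [1.0, 0.0]])
B = np.array([[3.0, 0.0], [5.0, 0.0]])
print(min_link(A, B))   # 2.0: the single pair (1,0)-(3,0) determines the proximity
```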

  14. Hierarchical Clustering: MIN [Figure: nested clusters over six points and the corresponding dendrogram]

  15. Strength of MIN • Can handle non-globular shapes [Figure: original points and the two clusters found]

  16. Limitations of MIN • Sensitive to noise and outliers [Figure: original points, a four-cluster result, and a three-cluster result in which the yellow points are wrongly merged with the red cluster rather than with the green one]

  17. Cluster Similarity: MAX • Similarity of two clusters is based on the two least similar (most distant) points in the different clusters • Determined by all pairs of points in the two clusters
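The analogous illustrative sketch for MAX (again, names and data are made up): the proximity is the largest pairwise distance between the two clusters:

```python
import numpy as np

def max_link(A, B):
    """MAX / complete-link proximity: distance of the farthest pair across A and B."""
    return float(np.max(np.linalg.norm(A[:, None, :] - B[None, :, :], axis=-1)))

A = np.array([[0.0, 0.0], [1.0, 0.0]])
B = np.array([[3.0, 0.0], [5.0, 0.0]])
print(max_link(A, B))   # 5.0: the pair (0,0)-(5,0) determines the proximity
```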

  18. Hierarchical Clustering: MAX [Figure: nested clusters over six points and the corresponding dendrogram]

  19. Strength of MAX • Less susceptible to noise and outliers [Figure: original points, a four-cluster result, and a three-cluster result in which the yellow points are now merged with the green cluster]

  20. Limitations of MAX • Tends to break large clusters [Figure: original points and the two clusters found]

  21. Cluster Similarity: Group Average • Proximity of two clusters is the average of pairwise proximity between points in the two clusters.
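A sketch of the group-average proximity, followed by a comparison of all three criteria using SciPy, which calls them "single" (MIN), "complete" (MAX), and "average" (Group Average). The sample data is synthetic, and the fcluster call shows the dendrogram "cut" mentioned on slide 3.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

def avg_link(A, B):
    """Group-average proximity: mean of all pairwise distances between A and B."""
    return float(np.mean(np.linalg.norm(A[:, None, :] - B[None, :, :], axis=-1)))

# Compare the three criteria on three synthetic blobs.
rng = np.random.default_rng(0)
points = np.vstack([rng.normal(c, 0.3, size=(20, 2)) for c in ([0, 0], [3, 3], [6, 0])])
for method in ("single", "complete", "average"):      # MIN, MAX, Group Average
    Z = linkage(points, method=method)                # merge sequence (dendrogram data)
    labels = fcluster(Z, t=3, criterion="maxclust")   # "cut" the dendrogram into 3 clusters
    print(method, np.bincount(labels)[1:])            # cluster sizes under each criterion
```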

  22. Hierarchical Clustering: Group Average [Figure: nested clusters over six points and the corresponding dendrogram]

  23. Hierarchical Clustering: Time and Space • O(N²) space, since it uses the proximity matrix (N is the number of points) • O(N³) time in many cases: there are N steps, and at each step the proximity matrix (of size N²) must be updated and searched • Complexity can be reduced to O(N² log N) time for some approaches

  24. Hierarchical Clustering Example

  25. Hierarchical Clustering Example • From "Indo-European languages tree by Levenshtein distance" by M. Serva and F. Petroni

  26. DBSCAN • DBSCAN is a density-based algorithm: it locates regions of high density that are separated from one another by regions of low density • Density = number of points within a specified radius (Eps) • A point is a core point if it has more than a specified number of points (MinPts) within Eps; these are points in the interior of a cluster • A border point has fewer than MinPts within Eps, but is in the neighborhood of a core point • A noise point is any point that is neither a core point nor a border point
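A minimal sketch of these definitions (function and variable names are illustrative, not from the slides). Note that conventions vary slightly: the slide says "more than MinPts", while this sketch, like scikit-learn, counts the point itself and requires at least MinPts neighbors within Eps.

```python
import numpy as np

def classify_points(points, eps, min_pts):
    """Label each point as core / border / noise, per the definitions above."""
    dists = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    in_eps = dists <= eps                         # Eps-neighborhood membership (self included)
    is_core = in_eps.sum(axis=1) >= min_pts       # core: at least MinPts points within Eps
    labels = []
    for i in range(len(points)):
        if is_core[i]:
            labels.append("core")
        elif in_eps[i, is_core].any():            # border: non-core, but near some core point
            labels.append("border")
        else:
            labels.append("noise")                # noise: neither core nor border
    return labels

pts = np.array([[0.0, 0.0], [0.3, 0.0], [0.0, 0.3], [0.3, 0.3], [0.7, 0.0], [5.0, 5.0]])
print(classify_points(pts, eps=0.5, min_pts=4))
# ['core', 'core', 'core', 'core', 'border', 'noise']
```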

  27. DBSCAN: Core, Border, and Noise Points

  28. DBSCAN Algorithm • Any two core points that are close enough (within a distance Eps of one another) are put in the same cluster. • Likewise, any border point that is close enough to a core point is put in the same cluster as the core point. • Ties may need to be resolved if a border point is close to core points from different clusters. • Noise points are discarded.
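For completeness, a brief usage sketch with scikit-learn's DBSCAN implementation (the data and parameter values below are made up, not from the slides):

```python
import numpy as np
from sklearn.cluster import DBSCAN

# Two dense blobs plus a few scattered noise points.
rng = np.random.default_rng(1)
dense = np.vstack([rng.normal(c, 0.2, size=(50, 2)) for c in ([0, 0], [3, 3])])
noise = rng.uniform(-2, 5, size=(10, 2))
X = np.vstack([dense, noise])

# eps and min_samples play the roles of Eps and MinPts on these slides.
db = DBSCAN(eps=0.5, min_samples=4).fit(X)
print(np.unique(db.labels_, return_counts=True))    # label -1 marks noise points
print("core points:", len(db.core_sample_indices_))
```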

  29. DBSCAN: Core, Border and Noise Points [Figure: original points and the point types (core, border, noise) found with Eps = 10, MinPts = 4]

  30. When DBSCAN Works Well • Resistant to noise • Can handle clusters of different shapes and sizes [Figure: original points and the clusters found]

  31. When DBSCAN Does NOT Work Well • Why doesn't DBSCAN work well here?

  32. DBSCAN: Determining Eps and MinPts • Look at the behavior of the distance from a point to its k-th nearest neighbor, called the k-dist. • For points that belong to some cluster, the value of k-dist will be small (if k is not larger than the cluster size). • However, for points that are not in a cluster, such as noise points, the k-dist will be relatively large. • So, if we compute the k-dist for all the data points for some k, sort them in increasing order, and then plot the sorted values, we expect to see a sharp change at the value of k-dist that corresponds to a suitable value of Eps. • If we select this distance as the Eps parameter and take the value of k as the MinPts parameter, then points for which k-dist is less than Eps will be labeled as core points, while other points will be labeled as noise or border points.
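A sketch of the k-dist computation described above, using scikit-learn's NearestNeighbors (the data is synthetic, and k = 4 follows the MinPts value mentioned on the next slide):

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(2)
X = np.vstack([rng.normal(c, 0.2, size=(50, 2)) for c in ([0, 0], [3, 3])]
              + [rng.uniform(-2, 5, size=(10, 2))])

k = 4
nn = NearestNeighbors(n_neighbors=k + 1).fit(X)   # +1 because each point is its own nearest neighbor
dists, _ = nn.kneighbors(X)                       # distances sorted in increasing order per point
k_dist = np.sort(dists[:, k])                     # each point's distance to its k-th neighbor, sorted
print(k_dist[-15:])                               # a sharp jump near the end suggests a good Eps
```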

  33. DBSCAN: Determining Eps and MinPts • Eps determined in this way depends on k, but does not change dramatically as k changes. • If k is too small, then even a small number of closely spaced points that are noise or outliers will be incorrectly labeled as clusters. • If k is too large, then small clusters (of size less than k) are likely to be labeled as noise. • The original DBSCAN used k = 4, which appears to be a reasonable value for most data sets.
