
Clustering

Explore the comparison between hierarchical and K-means clustering, and learn about cluster validation metrics such as Dunn's index, Davies-Bouldin's index, and silhouette index.


Presentation Transcript


  1. Clustering Patrice Koehl Department of Biological Sciences National University of Singapore http://www.cs.ucdavis.edu/~koehl/Teaching/BL5229 dbskoehl@nus.edu.sg

  2. Clustering is a hard problem Many possibilities; what is the best clustering?

  3. Clustering is a hard problem 2 clusters: easy

  4. Clustering is a hard problem 4 clusters: difficult. Many possibilities; what is the best clustering?

  5. Clustering • Hierarchical clustering • K-means clustering • How many clusters?

  6. Clustering • Hierarchical clustering • K-means clustering • How many clusters?

  7. Hierarchical Clustering To cluster a set of data D = {P1, P2, …, PN}, hierarchical clustering proceeds through a series of partitions that runs from a single cluster containing all data points to N clusters, each containing 1 data point. Two forms of hierarchical clustering: agglomerative (bottom-up merging) and divisive (top-down splitting). [Dendrogram over points a, b, c, d, e: agglomerative reads the tree upward from the singletons, divisive downward from {a, b, c, d, e}.]

  8. Agglomerative hierarchical clustering techniques 1) Start with N independent clusters: {P1}, {P2}, …, {PN} 2) Find the two closest (most similar) clusters, and join them 3) Repeat step 2 until all points belong to the same cluster Methods differ in their definition of inter-cluster distance (or similarity)
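The three steps above can be sketched directly in NumPy. This is a naive O(N³) illustration (not from the slides) that merges the closest pair of clusters until k remain; the linkage rules match the definitions on the next slides:

```python
import numpy as np

def agglomerative(points, k, linkage="single"):
    """Naive agglomerative clustering: start from singleton clusters
    and repeatedly merge the two closest clusters until k remain."""
    clusters = [[i] for i in range(len(points))]
    # pairwise point distances, computed once
    diff = points[:, None, :] - points[None, :, :]
    dist = np.sqrt((diff ** 2).sum(-1))

    def cluster_dist(a, b):
        # single linkage: closest pair; complete linkage: farthest pair
        d = dist[np.ix_(a, b)]
        return d.min() if linkage == "single" else d.max()

    while len(clusters) > k:
        # find the two closest clusters and join them (step 2)
        i, j = min(
            ((i, j) for i in range(len(clusters))
                    for j in range(i + 1, len(clusters))),
            key=lambda ij: cluster_dist(clusters[ij[0]], clusters[ij[1]]),
        )
        clusters[i] = clusters[i] + clusters[j]
        del clusters[j]
    return clusters
```

Running it on two well-separated pairs of points recovers the obvious two clusters; stopping at k = 1 instead reproduces the full merge sequence of the dendrogram.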

  9. Agglomerative hierarchical clustering techniques 1) Single linkage clustering: the distance between clusters A and B is the distance between their closest pair of points, d(A, B) = min { d(x, y) : x ∈ A, y ∈ B } 2) Complete linkage clustering: the distance between clusters A and B is the distance between their farthest pair of points, d(A, B) = max { d(x, y) : x ∈ A, y ∈ B }

  10. Agglomerative hierarchical clustering techniques 3) Average linkage clustering (cluster A with NA elements, cluster B with NB elements): the mean distance over all mixed pairs of points, d(A, B) = (1 / (NA NB)) Σx∈A Σy∈B d(x, y) 4) Average group linkage clustering: merge A and B into a single cluster T, and take the mean distance over all pairs of points in T
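The four inter-cluster distances can be computed from a single matrix of mixed pairwise distances. A small worked example (the clusters and coordinates are illustrative, not from the slides):

```python
import numpy as np

def pairwise(a, b):
    """All Euclidean distances between points of set a and set b."""
    return np.sqrt(((a[:, None, :] - b[None, :, :]) ** 2).sum(-1))

a = np.array([[0.0, 0.0], [1.0, 0.0]])   # cluster A (NA = 2)
b = np.array([[4.0, 0.0], [6.0, 0.0]])   # cluster B (NB = 2)
d = pairwise(a, b)                        # mixed-pair distances

single = d.min()     # 1) closest pair of points
complete = d.max()   # 2) farthest pair of points
average = d.mean()   # 3) mean over all NA * NB mixed pairs

# 4) average group linkage: pool A and B into T, mean over all pairs in T
t = np.vstack([a, b])
dt = pairwise(t, t)
group = dt[np.triu_indices(len(t), k=1)].mean()
```

For these points the mixed distances are 4, 6, 3, 5, so single = 3, complete = 6, average = 4.5, while the pooled-cluster mean (group) is 3.5 because it also counts the short within-cluster pairs.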

  11. Clustering • Hierarchical clustering • K-means clustering • How many clusters?

  12.–16. K-means clustering: step-by-step illustration of the algorithm (figures from http://www.weizmann.ac.il/midrasha/courses/)
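The figures on these slides illustrate the standard K-means iteration (Lloyd's algorithm). A minimal NumPy sketch of that iteration, under the usual assumptions (Euclidean distance, random choice of k data points as initial centers); a production version would also guard against empty clusters:

```python
import numpy as np

def kmeans(points, k, n_iter=100, seed=0):
    """Lloyd's algorithm: pick k initial centers, then alternate
    (a) assigning each point to its nearest center and
    (b) moving each center to the mean of its assigned points."""
    rng = np.random.default_rng(seed)
    centers = points[rng.choice(len(points), size=k, replace=False)]
    for _ in range(n_iter):
        # (a) squared distance of every point to every center
        labels = ((points[:, None] - centers[None]) ** 2).sum(-1).argmin(1)
        # (b) recompute centers (assumes no cluster becomes empty)
        new_centers = np.array([points[labels == j].mean(axis=0)
                                for j in range(k)])
        if np.allclose(new_centers, centers):
            break  # converged: assignments no longer change the centers
        centers = new_centers
    return labels, centers
```

On two well-separated pairs of points, the iteration converges in a few steps to one center per pair, matching the progression shown in the slide figures.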

  17. Clustering • Hierarchical clustering • K-means clustering • How many clusters?

  18. Cluster validation • Clustering is hard: it is an unsupervised learning technique. Once a clustering has been obtained, it is important to assess its validity! • The questions to answer: Did we choose the right number of clusters? Are the clusters compact? Are the clusters well separated? • To answer these questions, we need quantitative measures of cluster size: intra-cluster size and inter-cluster distance

  19. Inter-cluster distance • Several options: • Single linkage • Complete linkage • Average linkage • Average group linkage

  20. Intra-cluster size For a cluster S, with N members and center C: • Complete diameter: the largest distance between two members of S, Δ(S) = max { d(x, y) : x, y ∈ S } • Average diameter: the mean distance over all pairs of members, Δ(S) = (2 / (N (N − 1))) Σx≠y d(x, y) • Centroid diameter: twice the mean distance of the members to the center, Δ(S) = (2 / N) Σx∈S d(x, C)
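The three intra-cluster sizes follow directly from the pairwise distance matrix of a cluster. A short worked example (the cluster coordinates are illustrative; the centroid-diameter convention of 2 × mean distance to the center follows the definition above):

```python
import numpy as np

s = np.array([[0.0, 0.0], [4.0, 0.0], [0.0, 3.0]])  # cluster S, N = 3
n = len(s)
d = np.sqrt(((s[:, None] - s[None]) ** 2).sum(-1))  # pairwise distances

complete_diam = d.max()                             # farthest pair in S
average_diam = d[np.triu_indices(n, k=1)].mean()    # mean over all pairs
c = s.mean(axis=0)                                  # center C of S
centroid_diam = 2.0 * np.sqrt(((s - c) ** 2).sum(-1)).mean()
```

For this 3-4-5 triangle the complete diameter is 5 (the hypotenuse) and the average diameter is (3 + 4 + 5) / 3 = 4; the centroid diameter depends on the distances to the centroid (4/3, 1) and comes out slightly larger.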

  21. Cluster Quality For a clustering with K clusters: 1) Dunn's index: the smallest inter-cluster distance divided by the largest intra-cluster diameter, D = min i≠j d(Ci, Cj) / max k Δ(Ck). Large values of D correspond to good clusters 2) Davies-Bouldin's index: DB = (1/K) Σi max j≠i [ (Δ(Ci) + Δ(Cj)) / d(Ci, Cj) ]. Low values of DB correspond to good clusters
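Both indices combine an inter-cluster distance with an intra-cluster size from the previous slides. A sketch under one common choice of those ingredients (single linkage and complete diameter for Dunn; centroid distances and mean scatter for Davies-Bouldin); other combinations are equally valid:

```python
import numpy as np

def dunn_and_db(points, labels):
    """Dunn's index (higher = better) and Davies-Bouldin's index
    (lower = better) for a labelled clustering."""
    labels = np.asarray(labels)
    ks = np.unique(labels)
    clusters = [points[labels == k] for k in ks]
    centers = [c.mean(axis=0) for c in clusters]

    def dmat(a, b):
        return np.sqrt(((a[:, None] - b[None]) ** 2).sum(-1))

    # Dunn: smallest gap between clusters / largest cluster diameter
    inter = min(dmat(clusters[i], clusters[j]).min()
                for i in range(len(ks)) for j in range(i + 1, len(ks)))
    intra = max(dmat(c, c).max() for c in clusters)
    dunn = inter / intra

    # Davies-Bouldin: average over clusters of the worst overlap ratio
    scatter = [np.sqrt(((c - m) ** 2).sum(-1)).mean()
               for c, m in zip(clusters, centers)]
    db = np.mean([max((scatter[i] + scatter[j])
                      / np.linalg.norm(centers[i] - centers[j])
                      for j in range(len(ks)) if j != i)
                  for i in range(len(ks))])
    return dunn, db
```

On two tight, well-separated clusters the gap dwarfs the diameters, so Dunn's index is large and Davies-Bouldin's index is small, as the slide's "good clusters" criteria state.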

  22. Cluster Quality: Silhouette index • Define a quality index for each point in the original dataset: • For the ith object, calculate its average distance to all other objects in its cluster. Call this value ai. • For the ith object and any cluster not containing the object, calculate the object's average distance to all the objects in the given cluster. Find the minimum such value with respect to all clusters; call this value bi. • For the ith object, the silhouette coefficient is s(i) = (bi − ai) / max(ai, bi)

  23. Cluster Quality: Silhouette index Note that: • s(i) close to 1: i is likely to be well classified • s(i) close to −1: i is likely to be incorrectly classified • s(i) close to 0: indifferent (i lies between two clusters)

  24. Cluster Quality: Silhouette index Cluster silhouette index: Sk = (1/Nk) Σi∈Ck s(i). Global silhouette index: GS = (1/K) Σk Sk. Large values of GS correspond to good clusters
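The per-point coefficient, the per-cluster averages Sk, and the global index GS from the last three slides can be computed together. A minimal sketch (assumes every cluster has at least 2 points, so ai is well defined):

```python
import numpy as np

def silhouette(points, labels):
    """Per-point silhouette s(i) = (b_i - a_i) / max(a_i, b_i),
    the per-cluster means S_k, and the global index GS."""
    labels = np.asarray(labels)
    d = np.sqrt(((points[:, None] - points[None]) ** 2).sum(-1))
    ks = np.unique(labels)
    n = len(points)
    s = np.empty(n)
    for i in range(n):
        own = labels == labels[i]
        # a_i: mean distance to the other members of i's own cluster
        a = d[i, own & (np.arange(n) != i)].mean()
        # b_i: smallest mean distance to any other cluster
        b = min(d[i, labels == k].mean() for k in ks if k != labels[i])
        s[i] = (b - a) / max(a, b)
    per_cluster = [s[labels == k].mean() for k in ks]   # S_k
    gs = float(np.mean(per_cluster))                    # GS
    return s, per_cluster, gs
```

On two tight, well-separated clusters every s(i) is close to 1 and GS is large, consistent with the "good clusters" interpretation above; points sitting between clusters would pull s(i), and hence GS, toward 0.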
