
Clustering (Part II)



Presentation Transcript


  1. Clustering (Part II) 10/07/09

  2. Outline • Affinity propagation • Quality evaluation • Biclustering

  3. Affinity propagation: main idea • Data points can be exemplars (cluster centers) or non-exemplars (other data points). • Messages are passed between candidate exemplars (centroids) and non-exemplar data points. • The total number of clusters is found automatically by the algorithm.

  4. Responsibility r(j,k) • A non-exemplar data point j informs each candidate exemplar k how suitable k is for j to join as a member. [Diagram: message from data point j to candidate exemplar k]

  5. Availability a(j,k) • A candidate exemplar k informs each data point j whether k is a good exemplar for it. [Diagram: message from candidate exemplar k to data point j]

  6. Self-availability a(k,k) • A candidate exemplar k evaluates whether it is itself a good exemplar. [Diagram: candidate exemplar k evaluating itself]

  7. An iterative procedure • Update r(j,k): r(j,k) ← s(j,k) − max_{k'≠k} [ a(j,k') + s(j,k') ], where s(j,k) is the similarity between j and k. [Diagram: availabilities a(j,k') flowing into data point j; r(j,k) sent to candidate exemplar k]

  8. An iterative procedure • Update a(j,k): a(j,k) ← min { 0, r(k,k) + Σ_{j'∉{j,k}} max(0, r(j',k)) }. [Diagram: responsibilities r(j',k) flowing into candidate exemplar k; a(j,k) sent to data point j]

  9. An iterative procedure • Update a(k,k): a(k,k) ← Σ_{j'≠k} max(0, r(j',k)).

  10. Step-by-step affinity propagation
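
Taken together, slides 7–9 define a simple message-passing loop. Below is a minimal NumPy sketch of that loop, not Frey and Dueck's reference implementation: the function name, damping factor, and iteration count are illustrative choices, and the diagonal of the similarity matrix S is assumed to hold each point's exemplar preference.

```python
import numpy as np

def affinity_propagation(S, damping=0.9, iters=200):
    """Minimal affinity-propagation sketch. S is an n x n similarity
    matrix whose diagonal S[k, k] holds the preference of point k."""
    n = S.shape[0]
    rows = np.arange(n)
    R = np.zeros((n, n))  # responsibilities r(j, k)
    A = np.zeros((n, n))  # availabilities  a(j, k)
    for _ in range(iters):
        # r(j,k) <- s(j,k) - max_{k' != k} [a(j,k') + s(j,k')]
        AS = A + S
        idx = AS.argmax(axis=1)
        first = AS[rows, idx]
        AS[rows, idx] = -np.inf          # mask the max to get the runner-up
        second = AS.max(axis=1)
        R_new = S - first[:, None]
        R_new[rows, idx] = S[rows, idx] - second
        R = damping * R + (1 - damping) * R_new
        # a(j,k) <- min(0, r(k,k) + sum_{j' not in {j,k}} max(0, r(j',k)))
        # a(k,k) <- sum_{j' != k} max(0, r(j',k))
        Rp = np.maximum(R, 0)
        np.fill_diagonal(Rp, np.diag(R))      # keep r(k,k) itself
        A_new = Rp.sum(axis=0)[None, :] - Rp  # column sums minus own term
        diag = np.diag(A_new).copy()          # this is the a(k,k) update
        A_new = np.minimum(A_new, 0)
        np.fill_diagonal(A_new, diag)
        A = damping * A + (1 - damping) * A_new
    # Points with positive r(k,k) + a(k,k) emerge as exemplars.
    exemplars = np.flatnonzero(np.diag(R + A) > 0)
    labels = exemplars[np.argmax(S[:, exemplars], axis=1)]
    labels[exemplars] = exemplars  # each exemplar labels itself
    return exemplars, labels
```

Note that the number of exemplars, and hence the number of clusters, falls out of the positive-diagonal test rather than being specified in advance, which is the point made on slide 3.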

  11. Applications • Multi-exon gene detection in mouse: expression levels at different exons within a gene are co-regulated across different tissue types. • 37 mouse tissues; 12 tiling arrays. (Frey et al. 2005)

  12. “Algorithms for unsupervised classification or cluster analysis abound. Unfortunately however, algorithm development seems to be a preferred activity to algorithm evaluation among methodologists. … No consensus or clear guidelines exist to guide these decisions. Cluster analysis always produces clustering, but whether a pattern observed in the sample data characterizes a pattern present in the population remains an open question. Resampling-based methods can address this last point, but results indicate that most clusterings in microarray data sets are unlikely to reflect reproducible patterns or patterns in the overall population.” -Allison et al. (2006)

  13. Stability of a cluster • Motivation: real clusters should be reproducible under perturbation (adding noise, omitting data, etc.). Procedure: • Perturb the observed data by adding noise. • Apply the clustering procedure to the perturbed data. • Repeat the above steps to generate a sample of clusterings (see the sketch below). • Global test. • Cluster-specific tests: R-index, D-index. (McShane et al. 2002)
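
A minimal sketch of that perturb-and-recluster loop, assuming a generic cluster_fn callback that maps a data matrix to a vector of cluster labels; the default noise level used here is a crude stand-in, whereas McShane et al. (2002) estimate the noise variance from the data itself.

```python
import numpy as np

def perturbed_clusterings(X, cluster_fn, n_rep=100, noise_sd=None, seed=0):
    """Perturb the data with Gaussian noise, re-cluster, and collect the
    resulting label vectors (one per replicate)."""
    rng = np.random.default_rng(seed)
    if noise_sd is None:
        # Crude default (an assumption): median per-feature SD of the data.
        noise_sd = np.median(X.std(axis=0))
    return [cluster_fn(X + rng.normal(0.0, noise_sd, size=X.shape))
            for _ in range(n_rep)]
```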

  14. [Figure: two clusterings of objects 1–6, original vs. perturbed data]

  15. Where is the “truth”? “In the context of unsupervised learning, there is no such direct measure of success. It is difficult to ascertain the validity of inference drawn from the output of most unsupervised learning algorithms. One must often resort to heuristic arguments not only for motivating the algorithm, but also for judgments as to the quality of results. This uncomfortable situation has led to heavy proliferation of proposed methods, since effectiveness is a matter of opinion and cannot be verified directly.” -Hastie et al. (2001), ESL

  16. Global test • Null hypothesis: the data come from a single multivariate Gaussian distribution. Procedure: • Consider the subspace spanned by the top principal components. • Estimate the distribution of “nearest neighbor” distances. • Compare the observed distribution with that of simulated Gaussian data (see the sketch below).
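
A rough sketch of such a global test under the stated Gaussian null; the statistic below (mean nearest-neighbor distance in the top-PC subspace) is a simplification of the McShane et al. procedure, and the function name and defaults are illustrative.

```python
import numpy as np

def global_gaussian_test(X, n_components=3, n_sim=1000, seed=0):
    """Compare nearest-neighbor distances in the top principal-component
    subspace against data simulated from a fitted multivariate Gaussian.
    Small p-values suggest more clumping than the Gaussian null allows."""
    rng = np.random.default_rng(seed)
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    P = Xc @ Vt[:n_components].T  # project onto the top PCs

    def mean_nn_dist(Y):
        D = np.linalg.norm(Y[:, None, :] - Y[None, :, :], axis=-1)
        np.fill_diagonal(D, np.inf)   # exclude self-distances
        return D.min(axis=1).mean()

    obs = mean_nn_dist(P)
    cov = np.cov(P, rowvar=False)
    sims = np.array([
        mean_nn_dist(rng.multivariate_normal(P.mean(axis=0), cov, size=len(P)))
        for _ in range(n_sim)
    ])
    return (sims <= obs).mean()  # one-sided empirical p-value
```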

  17. R-index • If cluster i contains ni objects, it contains mi = ni(ni − 1)/2 pairs. • Let ci be the number of those pairs whose members still fall in the same cluster when the perturbed data are re-clustered. • ri = ci/mi measures the robustness of cluster i. • R-index = Σi ci / Σi mi measures the overall stability of the clustering (see the sketch below).
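
A direct sketch of those definitions, assuming labels_orig holds the original cluster labels and labels_pert the labels after re-clustering one perturbed replicate.

```python
import numpy as np

def r_index(labels_orig, labels_pert):
    """Compute cluster-specific r_i = c_i / m_i and the overall
    R-index = sum_i c_i / sum_i m_i."""
    labels_orig = np.asarray(labels_orig)
    labels_pert = np.asarray(labels_pert)
    r, total_c, total_m = {}, 0, 0
    for k in np.unique(labels_orig):
        members = np.flatnonzero(labels_orig == k)
        # m_i = n_i (n_i - 1) / 2 within-cluster pairs
        pairs = [(a, b) for i, a in enumerate(members) for b in members[i + 1:]]
        # c_i = pairs still together after perturbation and re-clustering
        c = sum(labels_pert[a] == labels_pert[b] for a, b in pairs)
        r[k] = c / len(pairs) if pairs else float("nan")
        total_c += c
        total_m += len(pairs)
    return r, total_c / total_m
```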

  18. D-index • For each cluster, determine the closest cluster in the perturbed-data clustering. • Calculate the average discrepancy between the original and perturbed clusters: omissions vs. additions. • The D-index is the sum of all cluster-specific discrepancies (see the sketch below).
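
A matching sketch of the D-index; “closest cluster” is taken here to mean the perturbed cluster with the smallest discrepancy, which is an assumption about a detail the slide leaves open.

```python
import numpy as np

def d_index(labels_orig, labels_pert):
    """Sum over original clusters of the discrepancy (omissions plus
    additions) against the best-matching perturbed cluster."""
    labels_orig = np.asarray(labels_orig)
    labels_pert = np.asarray(labels_pert)
    D = 0
    for k in np.unique(labels_orig):
        orig = set(np.flatnonzero(labels_orig == k))
        discrepancies = []
        for k2 in np.unique(labels_pert):
            pert = set(np.flatnonzero(labels_pert == k2))
            # omissions: in the original cluster but not the perturbed one;
            # additions: in the perturbed cluster but not the original.
            discrepancies.append(len(orig - pert) + len(pert - orig))
        D += min(discrepancies)
    return D
```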

  19. Applications • 16 prostate cancer samples; 9 benign tumor samples. • 6,500 genes. • Use hierarchical clustering to obtain 2, 3, and 4 clusters. • Question: are these clusters reliable?

  20. Issues with calculating the R- and D-indices • How large should the perturbation be? • How should the significance level be quantified? • What about nested consistency?

  21. Biclustering

  22. Motivation • 1D approach: to identify condition clusters, all genes are used, but probably only a few genes are differentially expressed. [Figure: gene-expression matrix, genes × conditions]

  23. Motivation • 1D approach: to identify gene clusters, all conditions are used, but a set of genes may be expressed only under a few conditions. [Figure: gene-expression matrix, genes × conditions]

  24. Motivation • Biclustering objective: to isolate genes that are co-expressed under a specific subset of conditions. [Figure: gene-expression matrix with a highlighted gene-condition block]

  25. Coupled Two-Way Clustering • An iterative procedure alternating between two steps (sketched below): • Within a cluster of conditions, search for gene clusters. • Using features from a cluster of genes, search for condition clusters. (Getz et al. 2001)
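
A very rough sketch of that alternation, with cluster_rows and cluster_cols as assumed callbacks that return lists of index subsets; Getz et al. use superparamagnetic clustering and keep only statistically stable clusters, and none of that bookkeeping is shown here.

```python
import numpy as np

def coupled_two_way_clustering(X, cluster_rows, cluster_cols, n_iter=5):
    """Alternate between clustering genes within each condition cluster and
    clustering conditions using each gene cluster as the feature set."""
    gene_sets = [np.arange(X.shape[0])]   # start from all genes
    cond_sets = [np.arange(X.shape[1])]   # start from all conditions
    for _ in range(n_iter):
        # Within a cluster of conditions, search for gene clusters.
        gene_sets = [g for c in cond_sets for g in cluster_rows(X[:, c])]
        # Using features from a cluster of genes, search for condition clusters.
        cond_sets = [c for g in gene_sets for c in cluster_cols(X[g, :])]
    return gene_sets, cond_sets
```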

  26. SAMBA – A bipartite graph model V = Genes U = Conditions Tanay et al. 2002

  27. SAMBA – A bipartite graph model V = Genes U = Conditions E = “respond” = differential expression Tanay et al. 2002

  28. SAMBA – A bipartite graph model • V = genes; U = conditions; E = “respond” = differential expression. • Cluster = subgraph (U', V', E') = a subset of co-regulated genes V' under conditions U'. Tanay et al. 2002

  29. SAMBA -- algorithm H = (U’, V’, E’) Goal: Find the “heaviest” subgraphs. Tanay et al. 2002

  30. SAMBA – algorithm • H = (U', V', E') • Goal: find the “heavy” subgraphs. [Figure: a bicluster subgraph with a missing edge] Tanay et al. 2002

  31. SAMBA – algorithm • H = (U', V', E') • p_{u,v}: probability of edge (u,v) under the random background model; p_c: probability of an edge within a cluster. • Compute a log-likelihood weight score for H (see the sketch below). Tanay et al. 2002
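
The weight in Tanay et al. (2002) is a log-likelihood ratio: each observed edge contributes log(p_c / p_{u,v}) and each missing edge contributes log((1 − p_c) / (1 − p_{u,v})). A minimal sketch, with the subgraph's edge indicator and background probabilities assumed as matching arrays:

```python
import numpy as np

def samba_weight(in_edges, p_uv, p_c):
    """log L(H): reward observed edges and penalize missing ones, relative
    to the random-graph background model."""
    in_edges = np.asarray(in_edges, dtype=bool)
    p_uv = np.asarray(p_uv, dtype=float)
    w_edge = np.log(p_c / p_uv)               # (u, v) in E'
    w_miss = np.log((1 - p_c) / (1 - p_uv))   # (u, v) not in E'
    return np.where(in_edges, w_edge, w_miss).sum()
```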

  32. SAMBA – algorithm • H = (U', V', E') • Finding the heaviest subgraph is an NP-hard problem. • Use a polynomial-time algorithm to search for heavy subgraphs efficiently. Tanay et al. 2002

  33. Significance of weight • Let H = (U', V', E') be a subgraph. • Fix U' and randomly select a new V'' of the same size as V'; the weights of the new subgraphs (U', V'', E'') give a background distribution. • Estimate a p-value by comparing log L(H) with this background distribution (see the sketch below).
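
A sketch of that permutation scheme; weight_fn is an assumed callback mapping a condition set and a gene set to the induced subgraph's weight (e.g., the samba_weight sketch above applied to the induced subgraph).

```python
import numpy as np

def weight_pvalue(logL_H, U_fixed, all_genes, size_V, weight_fn,
                  n_perm=1000, seed=0):
    """Fix U', redraw gene sets V'' of size |V'| at random, and compare
    log L(H) with the background weight distribution."""
    rng = np.random.default_rng(seed)
    background = np.array([
        weight_fn(U_fixed, rng.choice(all_genes, size=size_V, replace=False))
        for _ in range(n_perm)
    ])
    return (background >= logL_H).mean()  # one-sided empirical p-value
```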

  34. Model evaluation • The p-value distribution for the top candidate clusters. • If biological classification data are available, evaluate the purity of class membership within each bicluster.

  35. Reading List • Frey and Dueck 2007 – Affinity propagation • McShane et al. 2002 – Clustering model evaluation • Tanay et al. 2002 – SAMBA for biclustering
