
Cluster Analysis for Gene Expression Data



Presentation Transcript


  1. Cluster Analysis for Gene Expression Data Ka Yee Yeung http://staff.washington.edu/kayee/research.html Center for Expression Arrays Department of Microbiology kayee@u.washington.edu

  2. A gene expression data set • Snapshot of activities in the cell • Each chip represents an experiment: • time course • tissue samples (normal/cancer) [Figure: an n × p data matrix with n genes as rows, p experiments as columns, and entries Xij] Ka Yee Yeung, CEA

  3. What is clustering? • Group similar objects together • Objects in the same cluster (group) are more similar to each other than objects in different clusters • Exploratory data tool: find patterns in large data sets • Unsupervised approach: does not use prior knowledge about the data

  4. Applications of clustering gene expression data • Cluster the genes → functionally related genes • Cluster the experiments → discover new subtypes of tissue samples • Cluster both genes and experiments → find sub-patterns

  5. Examples of clustering algorithms • Hierarchical clustering algorithms, e.g. [Eisen et al. 1998] • K-means, e.g. [Tavazoie et al. 1999] • Self-organizing maps (SOM), e.g. [Tamayo et al. 1999] • CAST [Ben-Dor, Yakhini 1999] • Model-based clustering algorithms, e.g. [Yeung et al. 2001]

  6. Overview • Similarity/distance measures • Hierarchical clustering algorithms • Made popular by Stanford, e.g. [Eisen et al. 1998] • K-means • Made popular by many groups, e.g. [Tavazoie et al. 1999] • Model-based clustering algorithms [Yeung et al. 2001]

  7. How to define similarity? • Similarity measures: • A measure of pairwise similarity or dissimilarity • Examples: • Correlation coefficient • Euclidean distance [Figure: an n × p raw data matrix (n genes × p experiments) is converted into an n × n similarity matrix over all gene pairs]
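The raw-matrix-to-similarity-matrix step on this slide can be sketched in a few lines (an illustrative sketch with synthetic data, not the talk's software; `np.corrcoef` computes pairwise Pearson correlations between rows):

```python
import numpy as np

rng = np.random.default_rng(1)
raw = rng.normal(size=(5, 8))   # n = 5 genes (rows) x p = 8 experiments (columns)

# Pairwise Pearson correlation between gene profiles: an n x n similarity matrix
sim = np.corrcoef(raw)

print(sim.shape)   # (5, 5)
```

The result is symmetric with ones on the diagonal, since every gene is perfectly correlated with itself.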

  8. Similarity measures (for those of you who enjoy equations…) • Euclidean distance: d(X, Y) = √( Σi (xi − yi)² ) • Correlation coefficient: r(X, Y) = Σi (xi − x̄)(yi − ȳ) / √( Σi (xi − x̄)² · Σi (yi − ȳ)² )
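The two measures can be written out directly (a minimal sketch; the function names and sample vectors are illustrative):

```python
import math

def euclidean(x, y):
    # d(X, Y) = sqrt( sum_i (x_i - y_i)^2 )
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(x, y)))

def correlation(x, y):
    # Pearson correlation: covariance normalized by both standard deviations
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = math.sqrt(sum((a - mx) ** 2 for a in x) *
                    sum((b - my) ** 2 for b in y))
    return num / den

x = [1.0, 2.0, 3.0]
shifted = [a + 2 for a in x]        # same shape, shifted upward
print(correlation(x, shifted))      # 1.0: correlation ignores the shift
print(euclidean(x, shifted))        # nonzero: distance sees the shift
```

This already previews the lesson of the worked example: two profiles can have correlation 1 (same direction) while being far apart in Euclidean distance (different magnitude).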

  9. Example [Figure: four expression profiles X, Y, Z, W] • Correlation(X, Y) = 1, Distance(X, Y) = 4 • Correlation(X, Z) = −1, Distance(X, Z) = 2.83 • Correlation(X, W) = 1, Distance(X, W) = 1.41

  10. Lessons from the example • Correlation: captures direction only • Euclidean distance: captures magnitude & direction • Array data is noisy → need many experiments to robustly estimate pairwise similarity

  11. Clustering algorithms • From pairwise similarities to groups • Inputs: • Raw data matrix or similarity matrix • Number of clusters or some other parameters

  12. Hierarchical Clustering [Hartigan 1975] • Agglomerative (bottom-up) • Algorithm: • Initialize: each item is its own cluster • Iterate: • select the two most similar clusters • merge them • Halt: when the required number of clusters is reached [Figure: dendrogram]
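The initialize/iterate/halt loop above can be sketched directly (a toy implementation for illustration, not any of the packages cited in the talk; the 1-D points and `single_link` helper are assumptions):

```python
def agglomerate(points, k, dist, cluster_dist):
    # Initialize: each item is its own cluster
    clusters = [[p] for p in points]
    # Iterate: repeatedly merge the two closest (most similar) clusters
    while len(clusters) > k:
        best = None
        for i in range(len(clusters)):
            for j in range(i + 1, len(clusters)):
                d = cluster_dist(clusters[i], clusters[j], dist)
                if best is None or d < best[0]:
                    best = (d, i, j)
        _, i, j = best
        clusters[i] += clusters[j]   # merge cluster j into cluster i
        del clusters[j]
    # Halt: when the required number of clusters (k) is reached
    return clusters

def single_link(a, b, dist):
    # cluster distance = distance between the two closest members
    return min(dist(x, y) for x in a for y in b)

print(agglomerate([1, 2, 10, 11, 12], 2, lambda x, y: abs(x - y), single_link))
# [[1, 2], [10, 11, 12]]
```

Swapping `cluster_dist` for a different aggregation gives the single-, complete-, and average-link variants on the following slides.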

  13. Hierarchical: Single Link • cluster similarity = similarity of the two most similar members • − potentially long and skinny clusters • + fast

  14. Example: single link [Figure: merge steps on items 1–5]

  15. Example: single link [Figure: merge steps on items 1–5, continued]

  16. Example: single link [Figure: final single-link dendrogram over items 1–5]

  17. Hierarchical: Complete Link • cluster similarity = similarity of the two least similar members • + tight clusters • − slow

  18. Example: complete link [Figure: merge steps on items 1–5]

  19. Example: complete link [Figure: merge steps on items 1–5, continued]

  20. Example: complete link [Figure: final complete-link dendrogram over items 1–5]

  21. Hierarchical: Average Link • cluster similarity = average similarity over all pairs • + tight clusters • − slow
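The three linkage criteria on slides 13, 17, and 21 differ only in how the pairwise distances between members are aggregated: min for single link, max for complete link, mean for average link (illustrative helpers, with a toy 1-D distance):

```python
def single_link(a, b, dist):
    return min(dist(x, y) for x in a for y in b)     # most similar pair

def complete_link(a, b, dist):
    return max(dist(x, y) for x in a for y in b)     # least similar pair

def average_link(a, b, dist):
    return (sum(dist(x, y) for x in a for y in b)
            / (len(a) * len(b)))                     # mean over all pairs

d = lambda x, y: abs(x - y)
a, b = [0, 1], [3, 5]
print(single_link(a, b, d), complete_link(a, b, d), average_link(a, b, d))
# 2 5 3.5
```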

  22. Software: TreeView [Eisen et al. 1998] • Fig. 1 in Eisen's 1998 PNAS paper • Time course of serum stimulation of primary human fibroblasts • cDNA arrays with approx. 8,600 spots • Similar to average-link • Free download at: http://rana.lbl.gov/EisenSoftware.htm

  23. Overview • Similarity/distance measures • Hierarchical clustering algorithms • Made popular by Stanford, e.g. [Eisen et al. 1998] • K-means • Made popular by many groups, e.g. [Tavazoie et al. 1999] • Model-based clustering algorithms [Yeung et al. 2001]

  24. Partitional: K-Means [MacQueen 1965] [Figure: k-means iterations assigning points to 3 centroids]

  25. Details of k-means • Iterate until convergence: • Assign each data point to the closest centroid • Compute new centroids as cluster means • Objective function: minimize Σk Σ(xi ∈ cluster k) ‖xi − μk‖²
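The two-step iteration can be sketched in plain Python (a toy implementation for illustration; the random seed, tuple representation, and sample points are assumptions):

```python
import random

def kmeans(points, k, iters=100, seed=0):
    # points: list of equal-length tuples; initial centroids drawn from the data
    centroids = random.Random(seed).sample(points, k)
    clusters = [[] for _ in range(k)]
    for _ in range(iters):
        # Assignment step: each point goes to its closest centroid
        clusters = [[] for _ in range(k)]
        for p in points:
            i = min(range(k),
                    key=lambda c: sum((a - b) ** 2
                                      for a, b in zip(p, centroids[c])))
            clusters[i].append(p)
        # Update step: each centroid moves to the mean of its cluster
        new = [tuple(sum(dim) / len(c) for dim in zip(*c)) if c else centroids[i]
               for i, c in enumerate(clusters)]
        if new == centroids:    # converged: centroids stopped moving
            break
        centroids = new
    return centroids, clusters

centroids, _ = kmeans([(0.0,), (1.0,), (10.0,), (11.0,)], 2)
print(sorted(c[0] for c in centroids))   # [0.5, 10.5]
```

Each iteration can only decrease the objective (sum of squared distances to centroids), which is why the loop reaches a local optimum.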

  26. Properties of k-means • Fast • Proven to converge to a local optimum • In practice, converges quickly • Tends to produce spherical, equal-sized clusters • Related to the model-based approach • Gavin Sherlock's Xcluster: http://genome-www.stanford.edu/~sherlock/cluster.html

  27. What we have seen so far… • Definition of clustering • Pairwise similarity: • Correlation • Euclidean distance • Clustering algorithms: • Hierarchical agglomerative • K-means • Different clustering algorithms → different clusters • Clustering algorithms always spit out clusters

  28. Which clustering algorithm should I use? • Good question • No definitive answer: ongoing research • Our preference: the model-based approach

  29. Model-based clustering (MBC) • Gaussian mixture model: • Assume each cluster is generated by a multivariate normal distribution • Each cluster k has parameters: • Mean vector μk: location of cluster k • Covariance matrix Σk: volume, shape, and orientation of cluster k • Data transformations & normality assumption

  30. More on the covariance matrix Σk (volume, orientation, shape) • Equal volume, spherical (EI) • Unequal volume, spherical (VI) • Equal volume, orientation, shape (EEE) • Diagonal model • Unconstrained (VVV)

  31. Key advantage of the model-based approach: choose the model and the number of clusters • Bayesian Information Criterion (BIC) [Schwarz 1978] • Approximates p(data | model) • A large BIC score indicates strong evidence for the corresponding model
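A sketch of BIC-based selection, assuming scikit-learn's `GaussianMixture` as a stand-in for the mclust software used in the talk (the synthetic data and parameter grid are assumptions; note the sign convention: mclust maximizes BIC, while scikit-learn's `bic()` is defined so that lower is better):

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Synthetic "expression" data: two well-separated spherical clusters in 3-D
X = np.vstack([rng.normal(0.0, 0.3, size=(60, 3)),
               rng.normal(3.0, 0.3, size=(60, 3))])

# Score every (covariance model, number of clusters) pair by BIC
best = None
for cov in ["spherical", "diag", "tied", "full"]:  # rough analogues of EI/VI, diagonal, EEE, VVV
    for g in range(1, 5):
        gm = GaussianMixture(n_components=g, covariance_type=cov,
                             random_state=0).fit(X)
        bic = gm.bic(X)       # lower = stronger evidence, in sklearn's convention
        if best is None or bic < best[0]:
            best = (bic, cov, g)

print(best[1], best[2])       # chosen covariance model and number of clusters
```

On clearly separated data like this, the criterion recovers the true number of clusters (2); on real expression data the BIC curve is noisier, which is why the next slide reads off local maxima.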

  32. Gene expression data sets • Ovary data [Michel Schummer, Institute of Systems Biology] • Subset of data: 235 clones (portions of genes) × 24 experiments (cancer/normal tissue samples) • The 235 clones correspond to 4 genes (external criterion)

  33. BIC analysis: square-root-transformed ovary data • EEE and diagonal models → first local max at 4 clusters • Global max → VI at 8 clusters

  34. How do we know MBC is doing well? Answer: compare to external info • Adjusted Rand index: max at EEE, 4 clusters (higher than CAST)

  35. Take-home messages • MBC has superior performance on: • Quality of clusters • Number of clusters and model chosen (BIC) • Clusters with high BIC scores tend to show high agreement with the external information • MBC tends to produce better clusters than a leading heuristic-based clustering algorithm (CAST) • S-PLUS or R versions: http://www.stat.washington.edu/fraley/mclust/
