
Bioinformatics: gene expression basics


Presentation Transcript


  1. Bioinformatics: gene expression basics Ollie Rando, LRB 903

  2. Experimental Cycle
     Biological question (hypothesis-driven or explorative) → experimental design → microarray experiment → image analysis → quality measurement (failed arrays loop back to experimental design; passing arrays move on) → pre-processing / normalization → analysis (clustering, discrimination, estimation, testing) → biological verification and interpretation
     "To call in the statistician after the experiment is done may be no more than asking him to perform a post-mortem examination: he may be able to say what the experiment died of." – Ronald Fisher

  3. DNA Microarray

  4. From experiment to data

  5. Microarrays & Spot Colour

  6. Microarray Analysis Examples
     [Array images: Brain, Lung, Liver, Liver Tumor]
     Per-tissue values from the figure: Brain 67,679; Lung 20,224; Heart 9,400; Liver 37,807; Colon 4,832; Prostate 7,971; Bone 4,832; Skin 3,043

  7. Raw data are not mRNA concentrations
     Many non-biological factors sit between the mRNA and the numbers:
     • tissue contamination
     • RNA degradation
     • amplification efficiency
     • reverse transcription efficiency
     • hybridization efficiency and specificity
     • clone identification and mapping
     • PCR yield, contamination
     • spotting efficiency
     • DNA support binding
     • other array manufacturing related issues
     • image segmentation
     • signal quantification
     • "background" correction

  8. Scatterplot
     [Scatterplots: raw data vs. the same data on log scale]
     Message: look at your data on a log scale!

  9. MA Plot
     M = log2(R/G)
     A = (1/2) log2(R·G)
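A quick sketch of the M/A transform in Python; the red/green intensities here are made up for illustration:

    import numpy as np

    # Hypothetical red/green spot intensities for four spots.
    R = np.array([1200.0, 830.0, 15000.0, 410.0])
    G = np.array([1100.0, 900.0, 5000.0, 395.0])

    M = np.log2(R / G)        # log-ratio: differential expression per spot
    A = 0.5 * np.log2(R * G)  # average log-intensity per spot

    print(np.round(M, 2))  # [ 0.13 -0.12  1.58  0.05]
    print(np.round(A, 2))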

  10. Median centering
      One of the simplest strategies is to bring all "centers" of the array data to the same level.
      • Assumption: the majority of genes are unchanged between conditions.
      • The median is more robust to outliers than the mean.
      • Divide all expression measurements of each array by its median; on the log scale this is a subtraction, leaving the log signal centered at 0.
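A minimal sketch of median centering, assuming the signals are already log-transformed (so the division on the raw scale becomes a subtraction):

    import numpy as np

    def median_center(log_signal):
        """Center each array (column) at 0 by subtracting its median.

        log_signal: array of shape (n_genes, n_arrays), log-scale values.
        """
        return log_signal - np.median(log_signal, axis=0)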

  11. Scatterplot of log-signals after median-centering
      [Left: scatterplot of Log Red vs. Log Green. Right: M-A plot of the same data, with M = Log Red − Log Green and A = (Log Green + Log Red) / 2]
      Problem of median-centering: it is a global method. It does not adjust for local effects, intensity-dependent effects, print-tip effects, etc.

  12. Lowess normalization
      With M = Log Red − Log Green and A = (Log Green + Log Red) / 2, fit a local estimate of M as a function of A, then use the estimate to bend the "banana" straight.
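One way to implement this is with statsmodels' lowess smoother: fit M as a function of A, then subtract the fitted trend. A sketch under that assumption (the smoothing span frac is a tuning choice, not prescribed by the slide):

    import numpy as np
    from statsmodels.nonparametric.smoothers_lowess import lowess

    def lowess_normalize(M, A, frac=0.3):
        # Fitted M value at each spot's A, returned in the input order.
        trend = lowess(M, A, frac=frac, return_sorted=False)
        return M - trend  # residuals: the "banana" bent straight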

  13. Summary I
      • Raw data are not mRNA concentrations
      • We need to check data quality on different levels:
        • probe level
        • array level (all probes on one array)
        • gene level (one gene on many arrays)
      • Always log your data
      • Normalize your data to avoid systematic (non-biological) effects
      • Lowess normalization straightens the "banana"

  14. OK, so I’ve got a gene list with expression changes: now what? “Huh. Turns out the standard names for the most upregulated genes all start with ‘HSP’, or ‘GAL’ … I wonder if that’s real …”

  15. Gene Ontology • Organization of curated biological knowledge • 3 branches: biological process, molecular function, cellular component

  16. Hypergeometric Distribution
      • Probability of observing x or more genes in a cluster of n genes with a common annotation, where:
        • N = total number of genes in genome
        • M = number of genes with annotation
        • n = number of genes in cluster
        • x = number of genes in cluster with annotation
      • P(X ≥ x) = Σ over i = x … min(n, M) of [C(M, i) · C(N−M, n−i)] / C(N, n)
      • Multiple-hypothesis correction is required if testing multiple functions (Bonferroni, FDR, etc.)
      • Additional genes in clusters with strong enrichment may be related
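With SciPy this is a one-liner; note that scipy.stats.hypergeom orders its parameters as (population size, annotated genes, cluster size), not the slide's (N, M, n, x) directly. The counts below are invented for illustration:

    from scipy.stats import hypergeom

    # Slide notation: N genes in genome, M annotated, n in the cluster,
    # x annotated genes observed in the cluster (made-up numbers).
    N, M, n, x = 6000, 120, 50, 8

    # P(X >= x) is the survival function evaluated at x - 1.
    p = hypergeom.sf(x - 1, N, M, n)
    print(f"P(X >= {x}) = {p:.3g}")

    # Crude Bonferroni correction if, say, 200 GO terms were tested.
    print(f"Bonferroni-adjusted: {min(1.0, p * 200):.3g}")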

  17. Kolmogorov-Smirnov test
      • The hypergeometric test requires "hard calls": e.g., "this list of 278 genes is my upregulated set"
      • But say all 250 genes involved in oxygen consumption go up ~10–20% each – such a coordinated shift would likely not show up
      • The KS test asks whether the *distribution* of a given gene set (GO category, etc.) deviates from your dataset's background, and it is nonparametric
      • Visualize with a Cumulative Distribution Function (CDF) plot
      • Gene Set Enrichment Analysis: http://www.broadinstitute.org/gsea/
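A minimal two-sample KS sketch with simulated fold-changes, echoing the slide's example of a modest coordinated shift (the distributions and set sizes here are invented):

    import numpy as np
    from scipy.stats import ks_2samp

    rng = np.random.default_rng(0)
    background = rng.normal(0.0, 1.0, 5000)  # log2 fold-changes, all genes
    geneset = rng.normal(0.2, 1.0, 250)      # a gene set shifted slightly up

    stat, p = ks_2samp(geneset, background)
    print(f"KS statistic = {stat:.3f}, p = {p:.3g}")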

  18. GO Term Enrichment Tools
      • SGD's & Princeton's GoTermFinder: http://go.princeton.edu
      • GOLEM: http://function.princeton.edu/GOLEM
      • HIDRA (Sealfon et al., 2006)

  19. Supervised analysis = learning from examples (classification)
      • We have already seen groups of healthy and sick people. Now let's diagnose the next person walking into the hospital.
      • We know that these genes have function X (and these others don't). Let's find more genes with function X.
      • We know many gene pairs that are functionally related (and many more that are not). Let's extend the number of known related gene pairs.
      Known structure in the data needs to be generalized to new data.

  20. Un-supervised analysis = clustering • Are there groups of genes that behave similarly in all conditions? • Disease X is very heterogeneous. Can we identify more specific sub-classes for more targeted treatment? No structure is known. We first need to find it. Exploratory analysis.

  21. Supervised analysis
      [Cartoon] "Calvin, I still don't know the difference between cats and dogs …" – "Don't worry! I'll show you once more: Class 1: cats, Class 2: dogs." – "Oh, now I get it!!"

  22. Un-supervised analysis
      [Cartoon] "Calvin, I still don't know the difference between cats and dogs …" – "I don't know it either. Let's try to figure it out together …"

  23. Supervised analysis: setup
      • Training set
        • Data: microarrays
        • Labels: for each one we know whether it falls into our class of interest or not (binary classification)
      • New data (test data)
        • Data for which we don't have labels, e.g. genes without known function
      • Goal: generalization ability
        • Build a classifier from the training data that is good at predicting the right class for the new data

  24. One microarray, one dot
      Think of a space with #genes dimensions (yes, it's hard for more than 3). Each microarray corresponds to a point in this space. If gene expression is similar under some conditions, the points will be close to each other. If gene expression overall is very different, the points will be far apart.
      [Scatterplot axes: expression of gene 1, expression of gene 2]

  25. Which line separates best?
      [Figure: four candidate separating lines, labeled A, B, C, D]

  26. No sharp knife, but a … FAT PLANE

  27. Support Vector Machines
      • Maximal-margin separating hyperplane
      • The data points closest to the separating hyperplane are the support vectors
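A toy linear SVM via scikit-learn (one possible tool; the blob data simply stands in for two classes of arrays):

    from sklearn.datasets import make_blobs
    from sklearn.svm import SVC

    # Two well-separated point clouds standing in for two classes.
    X, y = make_blobs(n_samples=40, centers=2, random_state=0)

    clf = SVC(kernel="linear", C=1.0).fit(X, y)  # maximal-margin hyperplane
    print("support vectors per class:", clf.n_support_)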

  28. How well did we do?
      • Training error: how well do we do on the data we trained the classifier on?
      • But how well will we do in the future, on new data? Test error: how well does the classifier generalize? Apply the same classifier (= line) to new data from the same classes.
      • The classifier will usually perform worse than before: test error > training error

  29. Cross-validation
      • Training error: train the classifier and test it on the same data
      • Test error: train on one part of the data (Train), test on the held-out part (Test)
      K-fold cross-validation (here for K = 3):
      • Step 1: Train | Train | Test
      • Step 2: Train | Test | Train
      • Step 3: Test | Train | Train
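The K = 3 scheme above maps directly onto scikit-learn's cross_val_score; a sketch on synthetic data:

    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.model_selection import cross_val_score
    from sklearn.svm import SVC

    X, y = make_classification(n_samples=60, n_features=20, random_state=0)

    # cv=3: train on two thirds, test on the held-out third, rotate 3 times.
    scores = cross_val_score(SVC(kernel="linear"), X, y, cv=3)
    print("per-fold test accuracy:", np.round(scores, 2))
    print("mean:", round(scores.mean(), 2))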

  30. Additional supervised approaches might depend on your goal: cell cycle analysis

  31. Clustering • Let the data organize itself • Reordering of genes (or conditions) in the dataset so that similar patterns are next to each other (or in separate groups) • Identify subsets of genes (or experiments) that are related by some measure

  32. Quick Example
      [Clustered heatmap; axes: genes, conditions]

  33. Why cluster? • “Guilt by association” – if unknown gene X is similar in expression to known genes A and B, maybe they are involved in the same/related pathway • Visualization: datasets are too large to be able to get information out without reorganizing the data

  34. Clustering Techniques
      Algorithm (method):
      • Hierarchical
      • K-means
      • Self-Organizing Maps
      • QT-Clustering
      • NNN
      • …
      Distance metric:
      • Euclidean (L2)
      • Pearson correlation
      • Spearman correlation
      • Manhattan (L1)
      • Kendall's τ
      • …

  35. Distance Metrics
      • Choice of distance measure is important for most clustering techniques
      • Pair-wise metrics compare vectors of numbers, e.g. genes x and y, each with n measurements:
      Euclidean distance: d(x, y) = sqrt( Σi (xi − yi)² )
      Pearson correlation: r(x, y) = Σi (xi − x̄)(yi − ȳ) / sqrt( Σi (xi − x̄)² · Σi (yi − ȳ)² )
      Spearman correlation: the Pearson correlation computed on the ranks of x and y
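The three metrics in SciPy, on two made-up expression vectors (to use a correlation as a distance, a common convention is 1 − r):

    import numpy as np
    from scipy.spatial.distance import euclidean
    from scipy.stats import pearsonr, spearmanr

    x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
    y = np.array([2.1, 3.9, 6.2, 8.1, 9.8])

    print("Euclidean distance:", euclidean(x, y))
    print("Pearson r:", pearsonr(x, y)[0])      # linear correlation
    print("Spearman rho:", spearmanr(x, y)[0])  # correlation of ranks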

  36. Distance Metrics
      [Figure: worked examples contrasting Euclidean distance, Pearson correlation, and Spearman correlation]

  37. Hierarchical clustering
      • Imposes (pair-wise) hierarchical structure on all of the data
      • Often good for visualization
      • Basic method (agglomerative):
        1. Calculate all pair-wise distances
        2. Join the closest pair
        3. Calculate the pair's distance to all others
        4. Repeat from step 2 until everything is joined
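These four steps are what scipy.cluster.hierarchy.linkage performs internally; a sketch on random stand-in "expression" data (matplotlib is assumed for the dendrogram):

    import numpy as np
    import matplotlib.pyplot as plt
    from scipy.cluster.hierarchy import dendrogram, linkage
    from scipy.spatial.distance import pdist

    rng = np.random.default_rng(1)
    expr = rng.normal(size=(10, 6))      # 10 genes x 6 conditions (random)

    d = pdist(expr, metric="euclidean")  # step 1: all pair-wise distances
    Z = linkage(d, method="average")     # steps 2-4: join, update, repeat
    dendrogram(Z)                        # the resulting tree
    plt.show()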

  38.–43. Hierarchical clustering
      [Step-by-step illustration: the closest pair of profiles is joined, distances are recomputed, and the process repeats until the full tree is built]

  44. HC – Interior Distances
      Three typical variants to calculate interior distances within the tree:
      • Average linkage: mean/median over all possible pair-wise values
      • Single linkage: minimum pair-wise distance
      • Complete linkage: maximum pair-wise distance
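The three variants differ only in how they summarize the pair-wise distances between two clusters; a toy comparison on an invented distance table:

    import numpy as np

    # Made-up pair-wise distances: rows = genes in cluster A,
    # columns = genes in cluster B.
    between = np.array([[1.0, 2.0],
                        [3.0, 4.0]])

    print("average linkage :", between.mean())  # mean of all pairs -> 2.5
    print("single linkage  :", between.min())   # closest pair      -> 1.0
    print("complete linkage:", between.max())   # farthest pair     -> 4.0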

  45. Hierarchical clustering: problems
      • Hard to define distinct clusters
      • Genes are assigned to clusters on the basis of all experiments
      • Optimizing the node ordering is hard (finding the optimal solution is NP-hard)
      • Can be driven by one strong cluster – a problem for gene expression because data in row space is often highly correlated

  46. Cluster analysis of combined yeast data sets Eisen M B et al. PNAS 1998;95:14863-14868 ©1998 by The National Academy of Sciences

  47. To demonstrate the biological origins of patterns seen in Figs. 1 and 2, data from Fig. 1 were clustered by using methods described here before and after random permutation within rows (random 1), within columns (random 2), and both (random 3). Eisen M B et al. PNAS 1998;95:14863-14868 ©1998 by The National Academy of Sciences

  48. Hierarchical Clustering: Another Example
      • Expression of tumors hierarchically clustered
      • Expression groups by clinical class (Garber et al.)

  49. K-means Clustering
      Groups genes into a pre-defined number of independent clusters. Basic algorithm:
      1. Define k = number of clusters
      2. Randomly initialize each cluster with a seed (often a random gene)
      3. Assign each gene to the cluster with the most similar seed
      4. Recalculate all cluster seeds as the means (or medians) of the genes assigned to the cluster
      5. Repeat steps 3 & 4 until convergence (e.g. no genes move, means don't change much)
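A minimal NumPy implementation mirroring steps 1–5, using Euclidean distance and gene seeds (for simplicity it assumes no cluster empties out during the iterations):

    import numpy as np

    def kmeans(X, k, n_iter=100, seed=0):
        rng = np.random.default_rng(seed)
        # Step 2: seed each cluster with a randomly chosen row (gene).
        centers = X[rng.choice(len(X), size=k, replace=False)]
        for _ in range(n_iter):
            # Step 3: assign each gene to the nearest center.
            labels = ((X[:, None] - centers) ** 2).sum(-1).argmin(1)
            # Step 4: recompute centers as the mean of assigned genes.
            new = np.array([X[labels == i].mean(0) for i in range(k)])
            if np.allclose(new, centers):  # step 5: converged
                break
            centers = new
        return labels, centers

    # Usage on random stand-in data: 30 genes x 4 conditions, 3 clusters.
    labels, centers = kmeans(np.random.default_rng(2).normal(size=(30, 4)), k=3)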

  50. K-means example
