
Advanced Algorithms and Models for Computational Biology -- a machine learning approach


Presentation Transcript


  1. Advanced Algorithms and Models for Computational Biology -- a machine learning approach. Systems Biology: Inferring gene regulatory network using graphical models. Eric Xing. Lecture 24, April 17, 2006

  2. Gene regulatory networks • Regulation of expression of genes is crucial: • genes control cell behavior by controlling which proteins are made by a cell, and when and where they are delivered • Regulation occurs at many stages: • pre-transcriptional (chromatin structure) • transcription initiation • RNA editing (splicing) and transport • translation initiation • post-translational modification • RNA & protein degradation • Understanding regulatory processes is a central problem of biological research

  3. Inferring gene regulatory networks • Expression network • has received the most attention so far; many algorithms • still algorithmically challenging • Protein-DNA interaction network • actively pursued recently; some interesting methods available • lots of room for algorithmic advances

  4. Inferring gene regulatory networks • Network of cis-regulatory pathways • Success stories in sea urchin, fruit fly, etc., from decades of experimental research • Statistical modeling and automated learning have just started

  5. 1: Expression networks • Early work: • Clustering of expression data • groups together genes with similar expression patterns • disadvantage: does not reveal structural relations between genes • Boolean networks • deterministic models of the logical interactions between genes • disadvantage: deterministic, static • Deterministic linear models • disadvantage: under-constrained, capture only linear interactions • The challenge: • extract biologically meaningful information from the expression data • discover genetic interactions based on statistical associations among data • Currently dominant methodology: probabilistic networks

  6. Probabilistic Network Approach • Characterizes stochastic (non-deterministic!) relationships between expression patterns of different genes • Goes beyond pair-wise interactions => structure! • many interactions are explained by intermediate factors • regulation involves combined effects of several gene-products • Flexible in terms of types of interactions (not necessarily linear or Boolean!)

  7. Probabilistic graphical models • Given a graph G = (V, E), where each node v ∈ V is associated with a random variable Xv • The joint distribution on (X1, X2, …, XN) factors according to the "parent-of" relations defined by the edges E: p(X1, X2, X3, X4, X5, X6) = p(X1) p(X2|X1) p(X3|X2) p(X4|X1) p(X5|X4) p(X6|X2, X5)
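
The factorization on this slide can be made concrete with a small sketch. The three-node chain below and its CPD numbers are illustrative only, not taken from the lecture:

```python
from itertools import product

# Evaluate a factored joint p(x) = prod_v p(x_v | x_pa(v)) for a full
# assignment. Toy chain X1 -> X2 -> X3 with made-up binary CPDs,
# each stored as {(value, parent_values): probability}.
parents = {"X1": (), "X2": ("X1",), "X3": ("X2",)}
cpds = {
    "X1": {(0, ()): 0.6, (1, ()): 0.4},
    "X2": {(0, (0,)): 0.7, (1, (0,)): 0.3, (0, (1,)): 0.2, (1, (1,)): 0.8},
    "X3": {(0, (0,)): 0.9, (1, (0,)): 0.1, (0, (1,)): 0.5, (1, (1,)): 0.5},
}

def joint(assign):
    """Product of local CPD entries, one factor per node."""
    p = 1.0
    for v, cpd in cpds.items():
        pa = tuple(assign[u] for u in parents[v])
        p *= cpd[(assign[v], pa)]
    return p
```

Summing `joint` over all 2^3 assignments returns 1, a quick sanity check that the local tables are proper CPDs.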

  8. Probabilistic Inference • Computing statistical queries regarding the network, e.g.: • is node X independent of node Y given nodes Z, W? • what is the probability of X = true if (Y = false and Z = true)? • what is the joint distribution of (X, Y) if R = false? • what is the likelihood of some full assignment? • what is the most likely assignment of values to all, or a subset of, the nodes of the network? • General-purpose algorithms exist to fully automate such computation • computational cost depends on the topology of the network • exact inference: the junction tree algorithm • approximate inference: loopy belief propagation, variational inference, Monte Carlo sampling
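
For intuition, a conditional query on a small network can be answered by brute-force enumeration; the exact and approximate algorithms above exist precisely because this sum is exponential in general. The chain and its numbers here are hypothetical:

```python
# P(X3 | X1) on a toy chain X1 -> X2 -> X3 by summing out the hidden X2.
# CPD rows are indexed as table[parent_value][child_value]; numbers are made up.
P_X1 = [0.6, 0.4]
P_X2 = [[0.7, 0.3], [0.2, 0.8]]   # P(X2 | X1)
P_X3 = [[0.9, 0.1], [0.5, 0.5]]   # P(X3 | X2)

def joint3(x1, x2, x3):
    return P_X1[x1] * P_X2[x1][x2] * P_X3[x2][x3]

def query_x3_given_x1(x3, x1):
    """P(X3 = x3 | X1 = x1) by enumeration over X2."""
    num = sum(joint3(x1, x2, x3) for x2 in (0, 1))
    den = sum(joint3(x1, x2, v) for x2 in (0, 1) for v in (0, 1))
    return num / den
```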

  9. Two types of GMs • Directed edges give causality relationships (Bayesian Network or Directed Graphical Model): • Undirected edges simply give correlations between variables (Markov Random Field or Undirected Graphical model):

  10. Bayesian Network • Structure: DAG • Meaning: a node is conditionally independent of every other node in the network outside its Markov blanket • The local conditional probability distributions (CPDs) and the DAG completely determine the joint distribution (Figure: a node X with its ancestors, parents Y1 and Y2, children, children's co-parents, and descendants)

  11. Local Structures & Independencies • Common parent: fixing B decouples A and C ("given the level of gene B, the levels of A and C are independent") • Cascade: knowing B decouples A and C ("given the level of gene B, the level of gene A provides no extra prediction value for the level of gene C") • V-structure: knowing C couples A and B, because A can "explain away" B w.r.t. C ("if A correlates with C, then the chance for B to also correlate with C will decrease") • The language is compact, the concepts are rich!

  12. Why Bayesian Networks? • Sound statistical foundation and intuitive probabilistic semantics • Compact and flexible representation of the (in)dependency structure of multivariate distributions and interactions • Natural for modeling global processes with local interactions => good for biology • Natural for statistical confidence analysis of results and answering of queries • Stochastic in nature: models stochastic processes & deals with ("sums out") noise in measurements • General-purpose learning and inference • Captures causal relationships

  13. Possible Biological Interpretation • Random variables (nodes): measured expression level of each gene • Probabilistic dependencies (edges): gene interactions • Local structures model: common cause, intermediate gene, common/combinatorial effects

  14. More directed probabilistic networks • Dynamic Bayesian Networks (Ong et al.) • model temporal series rather than "static" snapshots • Module networks (Segal et al.) • partition variables into modules that share the same parents and the same CPD • Probabilistic Relational Models (Segal et al.) • data fusion: integrating related data from multiple sources

  15. Bayesian Network – CPDs • Local probabilities: CPD, the conditional probability distribution P(Xi | Pai) • Discrete variables: multinomial distribution (can represent any kind of statistical dependency) • Example: the Burglary/Earthquake/Radio/Alarm/Call network, where P(A | E, B) is a table over the four (e, b) configurations with rows such as (0.9, 0.1), (0.2, 0.8), (0.9, 0.1), (0.01, 0.99)
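
As a data structure, such a tabular CPD is just a mapping from parent configurations to distributions. Below is one plausible reading of the slide's table; the pairing of rows to (e, b) configurations is partly guessed from the garbled layout:

```python
# P(A | E, B) for the alarm example, keyed by (e, b).
# Each row is a distribution over A and must sum to one.
cpd_alarm = {
    (True,  True):  {True: 0.9,  False: 0.1},
    (True,  False): {True: 0.2,  False: 0.8},
    (False, True):  {True: 0.9,  False: 0.1},
    (False, False): {True: 0.01, False: 0.99},
}

def p_alarm(a, e, b):
    """Look up P(A = a | E = e, B = b) in the table."""
    return cpd_alarm[(e, b)][a]
```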

  16. Bayesian Network – CPDs (cont.) • Continuous variables: e.g. linear Gaussian, P(X | Y) = N(aY + b, σ²)
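
A linear-Gaussian CPD says that X given Y is normally distributed with a mean linear in Y. A minimal sketch, with arbitrary placeholder coefficients:

```python
import math

def linear_gaussian_pdf(x, y, a=2.0, b=0.5, sigma=1.0):
    """Density of p(X = x | Y = y) = N(x; a*y + b, sigma^2)."""
    mean = a * y + b
    z = (x - mean) / sigma
    return math.exp(-0.5 * z * z) / (sigma * math.sqrt(2 * math.pi))
```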

  17. Learning Bayesian Network • The goal: given a set of independent samples (assignments of the random variables), find the best (the most likely?) Bayesian Network, both DAG and CPDs • e.g., from samples (B, E, A, C, R) = (T, F, F, T, F), (T, F, T, T, F), …, (F, T, T, T, F), estimate both the structure over {B, E, R, A, C} and CPD tables such as P(A | E, B)

  18. Learning Bayesian Network • Learning the best CPDs given the DAG is easy • collect statistics of the values of each node given specific assignments to its parents • Learning the graph topology (structure) is NP-hard • heuristic search must be applied, which generally leads to a locally optimal network • Overfitting • it turns out that richer structures give higher likelihood P(D|G) to the data (adding an edge is always preferable) • more parameters to fit => more freedom => there always exists a more "optimal" CPD • we prefer simpler (more explanatory) networks • practical scores regularize the likelihood improvement of complex networks
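
The "easy" half, fitting CPDs for a fixed structure, is literally counting. A sketch for a single edge B → A, with synthetic samples:

```python
from collections import Counter

def mle_cpd(samples):
    """MLE of P(A | B) from complete (b, a) samples:
    empirical frequencies within each parent configuration."""
    joint_counts = Counter(samples)
    parent_counts = Counter(b for b, _ in samples)
    return {(b, a): n / parent_counts[b] for (b, a), n in joint_counts.items()}

# Synthetic data: six observations of (B, A).
samples = [(True, True), (True, True), (True, False),
           (False, False), (False, False), (False, True)]
cpd = mle_cpd(samples)   # e.g. P(A=T | B=T) = 2/3
```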

  19. BN Learning Algorithms (expression data → learning algorithm → BN) • Structural EM (Friedman 1998): the original algorithm • Sparse Candidate Algorithm (Friedman et al.): • discretizing array signals • hill-climbing search using local operators: add/delete/swap of a single edge • feature extraction: Markov relations, order relations • re-assembling high-confidence sub-networks from features • Module network learning (Segal et al.): • heuristic search of structure in a "module graph" • module assignment • parameter sharing • prior knowledge: possible regulators (TF genes)

  20. Confidence Estimates • Bootstrap approach: resample the data D (with replacement) into replicates D1, D2, …, Dm, learn a network from each replicate, and estimate a "confidence level" for each feature as the fraction of the m learned networks in which it appears
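
The bootstrap loop itself is a few lines around any structure learner. `learn_network` below is a stand-in for whatever learner is used; the whole sketch is illustrative:

```python
import random

def bootstrap_confidence(data, learn_network, m=100, seed=0):
    """Fraction of m bootstrap replicates in which each edge is recovered.
    learn_network(dataset) must return an iterable of edges."""
    rng = random.Random(seed)
    counts = {}
    for _ in range(m):
        replicate = [rng.choice(data) for _ in data]  # resample with replacement
        for edge in learn_network(replicate):
            counts[edge] = counts.get(edge, 0) + 1
    return {edge: c / m for edge, c in counts.items()}
```

Features with confidence near 1 appear in almost every replicate and are the ones worth reporting.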

  21. Results from SCA + feature extraction (Friedman et al.) The initially learned network of ~800 genes The “mating response” substructure

  22. A Module Network. Nature Genetics 34, 166-176 (2003)

  23. Gaussian Graphical Models • Why? Sometimes an UNDIRECTED association graph makes more sense and/or is more informative • gene expressions may be influenced by unobserved factors that are post-transcriptionally regulated • the unavailability of the state of B results in a constraint over A and C

  24. Covariance Selection • Multivariate Gaussian over all continuous expressions • The precision matrix K = Σ⁻¹ reveals the topology of the (undirected) network • edge (i, j) present ⟺ |Kij| > 0 • Learning algorithm: covariance selection • want a sparse precision matrix • regression for each node with a degree constraint (Dobra et al.) • regression for each node with a hierarchical Bayesian prior (Li et al.)
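
The key fact, that zeros of the precision matrix K = Σ⁻¹ are exactly the missing edges, can be checked directly. This sketch simply inverts and thresholds; actual covariance-selection methods enforce sparsity during estimation rather than after:

```python
import numpy as np

def ggm_edges(cov, tol=1e-8):
    """Undirected edges (i, j), i < j, where |K_ij| exceeds tol,
    K being the precision (inverse covariance) matrix."""
    K = np.linalg.inv(np.asarray(cov, dtype=float))
    n = K.shape[0]
    return {(i, j) for i in range(n) for j in range(i + 1, n) if abs(K[i, j]) > tol}
```

For a chain-structured precision matrix the recovered edge set is exactly the chain, even though the covariance matrix itself is dense.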

  25. Gene modules identified using GGM (Li, Yang and Xing); a comparison of BN and GGM results

  26. 2: Protein-DNA Interaction Network • Expression networks are not necessarily causal • BNs are identifiable only up to Markov equivalence: equivalent structures give the same optimal score and are not further distinguishable under a likelihood score unless further perturbation experiments are performed • GGMs yield functional modules, but no causal semantics • TF-motif interactions provide direct evidence of causal, regulatory dependencies among genes • stronger evidence than expression correlations • indicate the presence of binding sites on the target gene, and are thus more easily verifiable • disadvantage: often very noisy, only applies to cell cultures, restricted to known TFs

  27. ChIP-chip analysis • Simon I et al. (Young RA), Cell 2001, 106:697-708 • Ren B et al. (Young RA), Science 2000, 290:2306-2309 • Advantages: identifies "all" the sites where a TF binds "in vivo" under the experimental condition • Limitations: • expense: only 1 TF per experiment • feasibility: need an antibody for the TF • prior knowledge: need to know which TF to test

  28. Transcription network [Horak et al., Genes & Development, 16:3017-3033] • Often reduced to a clustering problem • A transcriptional module can be defined as a set of genes that are: • co-bound (bound by the same set of TFs, up to the limits of experimental noise), and • co-expressed (have the same expression pattern, up to the limits of experimental noise) • Genes in the same module are likely to be co-regulated, and hence likely have a common biological function

  29. Inferring transcription network (array/motif/ChIP data → learning algorithm) • Clustering + post-processing • SAMBA (Tanay et al.) • GRAM (Genetic RegulAtory Modules) (Bar-Joseph et al.) • Bayesian network • joint clustering model for mRNA signal, motif, and binding score (Segal et al.) • using binding data to define an informative structural prior (Bernard & Hartemink), the only reported work that directly learns the network

  30. The SAMBA model • Represent the GE (gene-condition) expression matrix as a bipartite graph G = (U, V, E): conditions on one side, genes on the other, with an edge where a gene responds in a condition (no edge otherwise) • Goal: finding high-similarity submatrices in the matrix view becomes finding dense subgraphs in the graph view

  31. The SAMBA approach • Normalization: translate the gene/experiment matrix to a weighted bipartite graph using a statistical model of the data (a bicluster random subgraph model against a background random graph model); the likelihood model translates to a sum of weights over edges and non-edges • Bicluster model: heavy subgraphs • Combined hashing and local optimization (Tanay et al.): incrementally finds heavy-weight biclusters that can partially overlap • Other algorithms are also applicable, e.g., spectral graph cuts, SDP, but they mostly find disjoint clusters

  32. Data pre-processing: from experiments to properties • Each gene g is mapped to discrete properties, e.g. expression properties (strong/medium induction, medium/strong repression) and binding properties (strong/medium binding to TF T or to protein complexes P, at high/medium sensitivity and high/medium confidence)

  33. A SAMBA module: a submatrix of genes × properties, annotated with GO terms (e.g., the module containing CPA1 and CPA2)

  34. Post-clustering analysis • Using topological properties to identify genes connecting several processes • specialized groups of genes (transport, signaling) are more likely to bridge modules • Inferring the transcription network • genes in modules with expression properties should be driven by at least one common TF • there exist gaps between information sources (condition-specific TF binding?) • enriched binding motifs help identify "missing TF profiles" (examples from the figure: motifs CGATGAG (AP-1), AGGGG (STRE), AAAATTTT; processes: ribosome biogenesis, RNA processing, stress, protein biosynthesis; TFs: Fhl1, Rap1)

  35. The GRAM model • Group genes bound by (nearly) the same set of TFs • Cluster genes that co-express • Take the statistically significant overlap of the two

  36. The GRAM algorithm • Exhaustively search all subsets of TFs (starting with the largest sets); find the genes bound by subset F with significance p: G(F, p) • Find an r-tight gene cluster B(c*, r) with centroid c*, s.t. c* = argmax_c |G(F, p) ∩ B(c, r)| • Apply local refinement to further improve cluster significance
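
The combinatorial core of this step can be sketched as intersecting binding sets with a co-expressed gene set. Expression clustering (the B(c, r) ball) is abstracted into a plain set here, and all names and data are synthetic:

```python
from itertools import combinations

def gram_like_modules(binding, coexpressed, min_genes=2):
    """For each TF subset F (largest subsets first), intersect the genes
    bound by every TF in F with a co-expressed gene set; keep
    intersections of at least min_genes genes.
    binding: dict mapping TF name -> set of bound genes."""
    tfs = sorted(binding)
    modules = {}
    for k in range(len(tfs), 0, -1):
        for F in combinations(tfs, k):
            bound = set.intersection(*(binding[tf] for tf in F))
            module = bound & coexpressed
            if len(module) >= min_genes:
                modules[F] = module
    return modules
```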

  37. Regulatory Interactions revealed by GRAM

  38. The Full Bayesian Network Approach • Variables: TF binding sites, regulation indicator, experimental conditions, p-value of the localization assay, expression level

  39. The Full BN Approach, cont. • Introduce random variables for each observation, plus latent states/events to be modeled • A network with hidden variables: the regulation indicator and binding sites are not observed • Results in a very complex model that integrates motif search, gene activation/repression function, mRNA level distribution, ... • Advantage: in principle allows heterogeneous information to flow and cross-influence to produce a globally optimal and consistent solution • Disadvantage: in practice, a very difficult statistical inference and learning problem; lots of heuristic tricks are needed, and computational solutions are usually not globally optimal

  40. A Bayesian Learning Approach • For a model p(A | Θ): • treat the parameters/model Θ as unobserved random variable(s) • probabilistic statements about Θ are conditioned on the values of the observed variables Aobs • Bayes' rule: posterior ∝ likelihood × prior, i.e. p(Θ | A) = p(A | Θ) p(Θ) / p(A) • Bayesian estimation: compute the posterior p(Θ | Aobs)

  41. Informative Structure Prior (Bernard & Hartemink) • If we observe that TF i binds gene j in the location assay, then the probability of having an edge between nodes i and j in the expression network is increased • This leads to a prior probability of each edge being present • This is like adding a regularizer (a compensating score) to the original likelihood score of an expression network • The regularized score is now ...

  42. Informative Structure Prior (Bernard & Hartemink) • The regularized score of an expression network structure S under an informative structure prior is: Score(S) = log P(array profile | S) + log P(S), where the structure prior P(S) is derived from the ChIP profile • We then run the usual structural learning algorithm as in expression networks, but now optimizing log P(array profile | S) + log P(S), instead of log P(array profile | S) alone
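
The shape of this regularized score can be sketched with independent per-edge Bernoulli priors. The base rate and the boosted probability for binding-supported edges below are placeholder numbers, not Bernard & Hartemink's actual values:

```python
import math

def log_structure_prior(edges, candidate_pairs, edge_prior, base_rate=0.05):
    """log P(S) under independent per-edge Bernoulli priors; ChIP binding
    evidence raises an edge's prior probability above the base rate."""
    lp = 0.0
    for pair in candidate_pairs:
        p = edge_prior.get(pair, base_rate)
        lp += math.log(p if pair in edges else 1.0 - p)
    return lp

def regularized_score(log_likelihood, edges, candidate_pairs, edge_prior):
    """Score(S) = log P(array profile | S) + log P(S)."""
    return log_likelihood + log_structure_prior(edges, candidate_pairs, edge_prior)
```

With equal likelihoods, a structure containing a ChIP-supported edge (prior 0.8, say) now outscores the same structure without it.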

  43. We've found a network…now what? • How can we validate our results? • Literature. • Curated databases (e.g., GO/MIPS/TRANSFAC). • Other high throughput data sources. • “Randomized” versions of data. • New experiments. • Active learning: proposing the "most informative" new experiments

  44. What is next? • We are still far from being able to correctly infer cis-regulatory networks in silico, even in a simple multi-cellular organism • Open computational problems: • causality inference in networks • algorithms for identifying TF binding motifs and motif clusters directly from genomic sequences in higher organisms • techniques for analyzing temporal-spatial patterns of gene expression (i.e., whole-embryo in situ hybridization patterns) • probabilistic inference and learning algorithms that handle large networks • Experimental limitations: • binding location assays cannot yet be performed inexpensively in vivo in multi-cellular organisms • measuring the temporal-spatial patterns of gene expression in multi-cellular organisms is not yet high-throughput • perturbation experiments are still low-throughput

  45. More "Direct" Approaches • Principle: use "direct" structural/sequence/biochemical-level evidence of network connectivity, rather than "indirect" statistical correlations of network products, to recover the network topology • Algorithmic approach: • supervised and de novo motif search • protein function prediction • protein-protein interactions • Experimental approach: • in principle feasible via systematic mutational perturbations • in practice very tedious • but the ultimate ground truth for networks of cis-regulatory pathways • This strategy has been a hot trend, but faces many algorithmic challenges

  46. A thirty-year network for a single gene The Endo16 Model (Eric Davidson, with tens of students/postdocs over years)

  47. Acknowledgments Sources of some of the slides: Mark Gerstein, Yale Nir Friedman, Hebrew U Ron Shamir, TAU Irene Ong, Wisconsin Georg Gerber, MIT

  48. References
  • N. Friedman, M. Linial, I. Nachman, D. Pe'er. Using Bayesian Networks to Analyze Expression Data. In Proc. Fourth Annual Inter. Conf. on Computational Molecular Biology (RECOMB), 2000.
  • E. Segal, R. Yelensky, and D. Koller. Genome-wide Discovery of Transcriptional Modules from DNA Sequence and Gene Expression. Bioinformatics, 19(Suppl 1):273-82, 2003.
  • J. M. Stuart, E. Segal, D. Koller, S. K. Kim. A Gene-Coexpression Network for Global Discovery of Conserved Genetic Modules. Science, 302(5643):249-55, 2003.
  • E. Segal, M. Shapira, A. Regev, D. Pe'er, D. Botstein, D. Koller, N. Friedman. Module networks: identifying regulatory modules and their condition-specific regulators from gene expression data. Nature Genetics, 34(2):166-76, 2003.
  • E. Segal, N. Friedman, D. Pe'er, A. Regev, and D. Koller. Learning Module Networks. In Proc. Nineteenth Conf. on Uncertainty in Artificial Intelligence (UAI), 2003.
  • E. Segal, Y. Barash, I. Simon, N. Friedman, and D. Koller. From Promoter Sequence to Expression: A Probabilistic Framework. In Proc. 6th Inter. Conf. on Research in Computational Molecular Biology (RECOMB), 2002.
  • I. M. Ong, J. D. Glasner, and D. Page. Modelling Regulatory Pathways in E. coli from Time Series Expression Profiles. Bioinformatics, 18:241S-248S, 2002.

  49. References
  • A. Dobra, B. Jones, C. Hans, J. R. Nevins, and M. West. Sparse graphical models for exploring gene expression data. J. Mult. Analysis, 90:196-212, 2004.
  • B. Jones, A. Dobra, C. Carvalho, C. Hans, C. Carter, and M. West. Experiments in stochastic computation for high-dimensional graphical models. Statistical Science, 2005.
  • F. Li, Y. Yang, E. Xing. Inferring regulatory network using a hierarchical Bayesian graphical Gaussian model. Working paper, 2005.
  • Z. Bar-Joseph*, G. Gerber*, T. Lee*, N. Rinaldi, J. Yoo, F. Robert, B. Gordon, E. Fraenkel, T. Jaakkola, R. Young, and D. Gifford. Computational discovery of gene modules and regulatory networks. Nature Biotechnology, 21(11):1337-42, 2003.
  • A. Tanay, R. Sharan, M. Kupiec, R. Shamir. Revealing modularity and organization in the yeast molecular network by integrated analysis of highly heterogeneous genomewide data. Proc. National Academy of Science USA, 101(9):2981-2986, 2004.
  • A. Bernard and A. Hartemink. Informative Structure Priors: Joint Learning of Dynamic Regulatory Networks from Multiple Types of Data. In Pacific Symposium on Biocomputing 2005 (PSB05).
