
BCB 444/544






Presentation Transcript


  1. BCB 444/544 Lecture 32 Machine Learning #32_Nov07 BCB 444/544 F07 ISU Terribilini #32- Machine Learning

  2. Required Reading (before lecture)
  • Fri Oct 30 - Lecture 30: Phylogenetics – Distance-Based Methods (Chp 11, pp 142–169)
  • Mon Nov 5 - Lecture 31: Phylogenetics – Parsimony and ML (Chp 11, pp 142–169)
  • Wed Nov 7 - Lecture 32: Machine Learning
  • Fri Nov 9 - Lecture 33: Functional and Comparative Genomics (Chp 17 and Chp 18)

  3. BCB 544 Only: New Homework Assignment 544 Extra#2
  Due: √ Part 1 - ASAP; Part 2 - meeting prior to 5 PM Fri Nov 2
  • Part 1 - Brief outline of project; email to Drena & Michael. After response/approval, then:
  • Part 2 - More detailed outline of project; read a few papers and summarize the status of the problem; schedule a meeting with Drena & Michael to discuss ideas

  4. Seminars this Week
  BCB list of URLs for seminars related to bioinformatics: http://www.bcb.iastate.edu/seminars/index.html
  • Nov 7 Wed - BBMB Seminar, 4:10 in 1414 MBB • Sharon Roth Dent, MD Anderson Cancer Center • Role of chromatin and chromatin-modifying proteins in regulating gene expression
  • Nov 8 Thurs - BBMB Seminar, 4:10 in 1414 MBB • Jianzhi George Zhang, U. Michigan • Evolution of new functions for proteins
  • Nov 9 Fri - BCB Faculty Seminar, 2:10 in 102 SciI • Amy Andreotti, ISU • Something about NMR

  5. Chp 11 – Phylogenetic Tree Construction Methods and Programs
  SECTION IV MOLECULAR PHYLOGENETICS
  Xiong: Chp 11 Phylogenetic Tree Construction Methods and Programs
  • Distance-Based Methods
  • Character-Based Methods
  • Phylogenetic Tree Evaluation
  • Phylogenetic Programs

  6. Phylogenetic Tree Evaluation
  • Bootstrapping
  • Jackknifing
  • Bayesian Simulation
  • Statistical difference tests (are two trees significantly different?)
  • Kishino-Hasegawa Test (paired t-test)
  • Shimodaira-Hasegawa Test (χ2 test)

  7. Bootstrapping
  • A bootstrap sample is obtained by sampling sites randomly with replacement
  • This yields a data matrix with the same number of taxa and characters as the original one
  • Construct trees for the samples
  • For each branch in the original tree, compute the fraction of bootstrap samples in which that branch appears
  • This assigns a bootstrap support value to each branch
  • Idea: if a grouping has a lot of support, it will be supported by at least some positions in most of the bootstrap samples
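The resampling step above can be sketched in a few lines of Python. This is a minimal illustration only (real packages such as PHYLIP do this internally); the toy alignment and function name are invented for the example:

```python
import random

def bootstrap_sample(alignment):
    """Resample alignment columns (sites) with replacement.

    alignment: list of equal-length sequences, one per taxon.
    Returns a new alignment with the same number of taxa and sites.
    """
    n_sites = len(alignment[0])
    # Draw site indices at random, with replacement
    cols = [random.randrange(n_sites) for _ in range(n_sites)]
    return ["".join(seq[c] for c in cols) for seq in alignment]

alignment = ["ACGTACGT", "ACGTTCGT", "ACCTACGA"]
sample = bootstrap_sample(alignment)
# Same dimensions as the original; individual sites may repeat or be absent
```

A tree is then built from each such sample, and branch support is the fraction of sample trees containing that branch.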

  8. Bootstrapping Comments
  • Bootstrapping doesn't really assess the accuracy of a tree; it only indicates the consistency of the data
  • To get reliable statistics, bootstrapping needs to be done 500–1000 times, which is a big problem if your tree took a few days to construct

  9. Jackknifing
  • Another resampling technique
  • Randomly delete half of the sites in the dataset
  • Construct a new tree with this smaller dataset and see how often taxa are grouped together
  • Advantage – sites aren't duplicated
  • Disadvantage – again, really only measures the consistency of the data
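The delete-half step differs from bootstrapping only in how sites are drawn: without replacement, keeping half. A minimal sketch (toy alignment and function name are hypothetical):

```python
import random

def jackknife_sample(alignment):
    """Delete-half jackknife: keep a random half of the sites, no duplication."""
    n_sites = len(alignment[0])
    # Sample WITHOUT replacement, so no site can appear twice
    keep = sorted(random.sample(range(n_sites), n_sites // 2))
    return ["".join(seq[c] for c in keep) for seq in alignment]

alignment = ["ACGTACGT", "ACGTTCGT", "ACCTACGA"]
half = jackknife_sample(alignment)
# Same taxa, half as many sites, each site used at most once
```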

  10. Bayesian Simulation
  • Using a Bayesian ML method to produce a tree automatically calculates the probability of many trees during the search
  • Most trees sampled in the Bayesian ML search are near an optimal tree

  11. Phylogenetic Programs
  • Huge list at: http://evolution.genetics.washington.edu/phylip/software.html
  • PAUP* – one of the most popular programs; commercial, Mac and Unix only, nice user interface
  • PHYLIP – free, multiplatform; a bit difficult to use, but web servers make it easier
  • WebPhylip – another interface for PHYLIP online

  12. Phylogenetic Programs
  • TREE-PUZZLE – uses a heuristic to allow ML on large datasets; also available as a web server
  • PHYML – web based; uses a genetic algorithm
  • MrBayes – Bayesian program; fast and can handle large datasets; multiplatform download
  • BAMBE – web-based Bayesian program

  13. Final Comments on Phylogenetics
  • No method is perfect
  • Different methods make very different assumptions
  • If multiple methods using different assumptions come up with similar results, we should trust those results more than the output of any single method

  14. Machine Learning
  • What is learning?
  • What is machine learning?
  • Learning algorithms
  • Machine learning applied to bioinformatics and computational biology
  • Some slides adapted from Dr. Vasant Honavar and Dr. Byron Olson

  15. What is Learning?
  • Learning is a process by which the learner improves its performance on a task or a set of tasks as a result of experience within some environment

  16. Types of Learning
  • Rote learning – useful when it is less expensive to store and retrieve some information than to compute it
  • Learning from instruction – transform instructions into useful knowledge
  • Learning from examples – extract predictive or descriptive regularities from data
  • Learning from deduction – generalize instances of deductive problem-solving
  • Learning from exploration – learn to choose actions that maximize reward

  17. What is Machine Learning?
  • Machine learning is an area of artificial intelligence concerned with the development of techniques that allow computers to "learn"
  • Machine learning is a method for creating computer programs by the analysis of data sets
  • We understand a phenomenon when we can write a computer program that models it at the desired level of detail

  18. Contributing Disciplines
  • Computer Science – artificial intelligence, algorithms and complexity, databases, data mining
  • Statistics – statistical inference, experimental design, exploratory data analysis
  • Mathematics – abstract algebra, logic, information theory, probability theory
  • Psychology and neuroscience – behavior, perception, learning, memory, problem solving
  • Philosophy – ontology, epistemology, philosophy of mind, philosophy of science

  19. Machine Learning Applications
  • Bioinformatics and Computational Biology
  • Environmental Informatics
  • Medical Informatics
  • Cognitive Science
  • E-Commerce
  • Human-Computer Interaction
  • Robotics
  • Engineering

  20. Machine Learning Algorithms
  • There are many types of algorithms, differing in the structure of the learning problem as well as in the approach to learning used:
  • Regression vs. Classification
  • Supervised vs. Unsupervised
  • Generative vs. Discriminative
  • Linear vs. Non-Linear

  21. Machine Learning Algorithms
  • Regression vs. Classification – a structural difference
  • Regression algorithms attempt to map inputs onto continuous, numeric outputs (real numbers, counts, etc.)
  • Classification algorithms attempt to map inputs into one of a set of classes (colors, cellular locations, good vs. bad credit risks, etc.)

  22. Machine Learning Algorithms
  • Supervised vs. Unsupervised – a difference in the data
  • Supervised learning uses pairs of input/output relationships to learn an input-output mapping (called labeled pairs, often denoted {X, Y})
  • Unsupervised learning examines input data to find patterns (e.g., clustering)

  23. Machine Learning Algorithms
  • Generative vs. Discriminative – a philosophical difference
  • Generative models attempt to recreate or understand the process that generated the data
  • Discriminative models attempt simply to separate or determine the class of input data, without regard to the generating process

  24. Machine Learning Algorithms
  • Linear vs. Non-linear – a modeling difference
  • Linear models involve only linear combinations of the input variables
  • Non-linear models are not restricted in their form (they commonly include exponential or quadratic terms)

  25. Linear vs. Non-linear (figure)

  26. Summary of Machine Learning Algorithms
  • This is only the tip of the iceberg
  • No single algorithm works best for every application
  • Some simple algorithms are effective on many data sets
  • Better results can be obtained by preprocessing the data to suit the algorithm, or by adapting the algorithm to suit the characteristics of the data

  27. Measuring Performance

  28. Trade-Off Between Specificity and Sensitivity
  • The classification threshold controls a trade-off between specificity and sensitivity
  • High specificity – predict fewer instances, with higher confidence
  • High sensitivity – predict more instances, with lower confidence
  • Commonly shown as a Receiver Operating Characteristic (ROC) curve
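The trade-off can be made concrete with a small sketch. The scores and labels below are invented for illustration; the point is that lowering the threshold raises sensitivity at the cost of specificity:

```python
def sensitivity_specificity(scores, labels, threshold):
    """Sensitivity and specificity at one decision threshold.

    scores: classifier scores; labels: 1 = positive, 0 = negative.
    """
    tp = sum(1 for s, y in zip(scores, labels) if s >= threshold and y == 1)
    fn = sum(1 for s, y in zip(scores, labels) if s < threshold and y == 1)
    tn = sum(1 for s, y in zip(scores, labels) if s < threshold and y == 0)
    fp = sum(1 for s, y in zip(scores, labels) if s >= threshold and y == 0)
    return tp / (tp + fn), tn / (tn + fp)

scores = [0.9, 0.8, 0.6, 0.4, 0.3, 0.1]
labels = [1, 1, 0, 1, 0, 0]

# Lower threshold -> more predicted positives -> higher sensitivity, lower specificity
low = sensitivity_specificity(scores, labels, 0.2)
high = sensitivity_specificity(scores, labels, 0.7)
```

Sweeping the threshold and plotting the two quantities against each other traces out the ROC curve.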

  29. Measuring Performance
  • Using any single measure of performance is problematic
  • Accuracy can be misleading – when 95% of examples are negative, we can achieve 95% accuracy by predicting all negative; we are 95% accurate, but 100% wrong on the positive examples
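The 95%-accuracy pitfall above checks out numerically:

```python
# 95 negative examples, 5 positives; a classifier that always predicts negative
labels = [0] * 95 + [1] * 5
preds = [0] * 100

accuracy = sum(p == y for p, y in zip(preds, labels)) / len(labels)
positive_recall = sum(p == 1 and y == 1 for p, y in zip(preds, labels)) / sum(labels)
# accuracy is 0.95 even though every positive example is missed
```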

  30. Machine Learning in Bioinformatics
  • Gene finding
  • Binding site identification
  • Protein structure prediction
  • Protein function prediction
  • Genetic network inference
  • Cancer diagnosis
  • etc.

  31. Sample Learning Scenario – Protein Function Prediction (figure)

  32. Some Examples of Algorithms
  • Naïve Bayes
  • Neural network
  • Support Vector Machine

  33. Predicting RNA-Binding Sites in Proteins
  • Problem: given an amino acid sequence, classify each residue as RNA binding or non-RNA binding
  • Input to the classifier is a string of amino acid identities
  • Output from the classifier is a class label: binding or not

  34. Bayes' Theorem
  P(A|B) = P(B|A) P(A) / P(B)
  P(A) = prior probability
  P(A|B) = posterior probability

  35. Bayes' Theorem Applied to Classification

  36. Naïve Bayes Algorithm
  P(c=1 | X=x) ∝ P(X1=x1, X2=x2, ..., Xn=xn | c=1) × P(c=1)
  P(c=0 | X=x) ∝ P(X1=x1, X2=x2, ..., Xn=xn | c=0) × P(c=0)

  37. Naïve Bayes Algorithm
  Assign c = 1 if
  P(c=1 | X=x) / P(c=0 | X=x) ≥ θ

  38. Example
  ARG 6: T S K K K R Q R G S R
  [p(X1 = T | c = 1) p(X2 = S | c = 1) …] / [p(X1 = T | c = 0) p(X2 = S | c = 0) …] ≥ θ
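The decision rule on this slide can be traced numerically. The per-position likelihoods below are invented purely for illustration; they are not taken from the actual RNA-binding training data:

```python
# Hypothetical per-position likelihoods for a 3-residue window (made-up numbers)
p_given_binding = {"T": 0.10, "S": 0.12, "K": 0.20}      # p(Xi = a | c = 1)
p_given_nonbinding = {"T": 0.08, "S": 0.10, "K": 0.05}   # p(Xi = a | c = 0)

window = ["T", "S", "K"]
theta = 1.0

num = 1.0
den = 1.0
for aa in window:
    num *= p_given_binding[aa]      # product of likelihoods under c = 1
    den *= p_given_nonbinding[aa]   # product of likelihoods under c = 0

prediction = 1 if num / den >= theta else 0  # classify the residue as binding
```

With these toy numbers the likelihood ratio is 6, which clears θ = 1, so the residue is labeled binding.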

  39. Predictions for Ribosomal Protein L15, PDB ID 1JJ2:K (figure: actual vs. predicted binding residues)

  40. Neural Networks
  • The most successful methods for predicting secondary structure are based on neural networks. The overall idea is that neural networks can be trained to recognize amino acid patterns in known secondary structure units, and to use these patterns to distinguish between the different types of secondary structure.
  • Neural networks classify "input vectors" or "examples" into categories (2 or more).
  • They are loosely based on biological neurons.

  41. Biological Neurons
  Dendrites receive inputs; the axon gives the output
  Image from Christos Stergiou and Dimitrios Siganos, http://www.doc.ic.ac.uk/~nd/surprise_96/journal/vol4/cs11/report.html

  42. Artificial Neuron – "Perceptron"
  Image from Christos Stergiou and Dimitrios Siganos, http://www.doc.ic.ac.uk/~nd/surprise_96/journal/vol4/cs11/report.html

  43. The Perceptron
  (diagram: inputs X1…XN with weights w1…wN feed a threshold unit T, which produces the output)
  The perceptron classifies the input vector X into two categories. If the weights and threshold T are not known in advance, the perceptron must be trained. Ideally, the perceptron should be trained to return the correct answer on all training examples and to perform well on examples it has never seen. The training set must contain both types of data (i.e., with "1" and "0" outputs).

  44. The Perceptron
  • The input is a vector X, and the weights can be stored in another vector W.
  • The perceptron computes the dot product S = X·W
  • The output F is a function of S: it is often discrete (i.e., 1 or 0), in which case the function is the step function.
  • For continuous output, often use a sigmoid: F(S) = 1 / (1 + e^(−S))
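The forward pass described above fits in a few lines; the input vector, weights, and threshold below are arbitrary example values:

```python
import math

def perceptron_step(x, w, threshold):
    """Discrete perceptron: step function applied to the dot product S = X·W."""
    s = sum(xi * wi for xi, wi in zip(x, w))
    return 1 if s >= threshold else 0

def perceptron_sigmoid(x, w):
    """Continuous output: sigmoid F(S) = 1 / (1 + e^(-S)) of the dot product."""
    s = sum(xi * wi for xi, wi in zip(x, w))
    return 1.0 / (1.0 + math.exp(-s))

x = [1.0, 0.5]
w = [0.8, -0.4]
# S = 1.0*0.8 + 0.5*(-0.4) = 0.6
out_step = perceptron_step(x, w, threshold=0.5)  # fires, since 0.6 >= 0.5
out_sig = perceptron_sigmoid(x, w)               # a value strictly between 0 and 1
```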

  45. The Perceptron
  Training a perceptron: find the weights W that minimize the error function
  E(W) = Σi=1..P [F(W·Xi) − t(Xi)]²
  P: number of training data; Xi: training vectors; F(W·Xi): output of the perceptron; t(Xi): target value for Xi
  Use steepest descent:
  • compute the gradient ∇E(W)
  • update the weight vector: W ← W − e ∇E(W)
  • iterate (e: learning rate)
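The steepest-descent recipe above can be sketched end-to-end with a sigmoid output unit. The OR-function dataset, learning rate, and epoch count are arbitrary choices for illustration, not from the lecture:

```python
import math

def sigmoid(s):
    return 1.0 / (1.0 + math.exp(-s))

def train_perceptron(X, t, epochs=5000, e=0.5):
    """Steepest descent on E(W) = sum_i (F(W.Xi) - t(Xi))^2 with sigmoid F."""
    w = [0.0] * len(X[0])
    for _ in range(epochs):
        grad = [0.0] * len(w)
        for xi, ti in zip(X, t):
            f = sigmoid(sum(a * b for a, b in zip(w, xi)))
            # dE/dw_j = 2 (F - t) F (1 - F) x_j  (chain rule through the sigmoid)
            for j in range(len(w)):
                grad[j] += 2.0 * (f - ti) * f * (1.0 - f) * xi[j]
        w = [wj - e * gj for wj, gj in zip(w, grad)]  # update step W <- W - e*gradE
    return w

# Learn a linearly separable rule (logical OR), with a bias input fixed at 1
X = [[0, 0, 1], [0, 1, 1], [1, 0, 1], [1, 1, 1]]
t = [0, 1, 1, 1]
w = train_perceptron(X, t)
preds = [1 if sigmoid(sum(a * b for a, b in zip(w, x))) >= 0.5 else 0 for x in X]
```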

  46. Biological Neural Network
  Image from http://en.wikipedia.org/wiki/Biological_neural_network

  47. Artificial Neural Network
  A complete neural network is a set of perceptrons interconnected such that the outputs of some units become the inputs of other units. Many topologies are possible!
  Neural networks are trained just like perceptrons, by minimizing an error function: E(W) = Σi [output(Xi) − t(Xi)]²

  48. Support Vector Machines - SVMs
  Image from http://en.wikipedia.org/wiki/Support_vector_machine

  49. SVM Finds the Maximum-Margin Hyperplane
  Image from http://en.wikipedia.org/wiki/Support_vector_machine

  50. Kernel Function
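As a concrete illustration of what a kernel buys you (this degree-2 polynomial kernel is one standard example, not necessarily the one pictured on the slide): the kernel value equals a dot product in a higher-dimensional feature space, computed without ever constructing that space explicitly.

```python
import math

def poly_kernel(x, y):
    """Degree-2 polynomial kernel K(x, y) = (x . y)^2 for 2-D inputs."""
    return (x[0] * y[0] + x[1] * y[1]) ** 2

def phi(x):
    """Explicit feature map for the same kernel: (x1^2, sqrt(2) x1 x2, x2^2)."""
    return (x[0] ** 2, math.sqrt(2) * x[0] * x[1], x[1] ** 2)

x, y = (1.0, 2.0), (3.0, 0.5)
k = poly_kernel(x, y)                              # kernel in the 2-D input space
dot = sum(a * b for a, b in zip(phi(x), phi(y)))   # dot product in 3-D feature space
# k == dot: the kernel gives the 3-D dot product for the price of a 2-D one
```

This is the "kernel trick" that lets an SVM find a maximum-margin hyperplane in a feature space it never materializes.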
