
DATA MINING APPROACH IGES 2008


Presentation Transcript


  1. DATA MINING APPROACH IGES 2008

  2. Principle
• What is Machine Learning? Algorithms that allow computers to learn.
  – Unsupervised learning, e.g. cluster analysis
  – Supervised learning, e.g. decision trees, logistic regression
• Data Mining: machine learning in large databases.
• Pattern Recognition: an application of machine learning to speech recognition, face recognition, etc.

  3. Concept: classification. Example of a "play golf" dataset (figure: decision tree and lazy classification, built on attributes such as humidity and temperature).

  4. Classification: performance measurement. Confusion matrix: a confusion matrix is a visualization tool typically used in supervised learning (in unsupervised learning it is typically called a matching matrix). Each column of the matrix represents the instances in a predicted class, while each row represents the instances in an actual class. One benefit of a confusion matrix is that it makes it easy to see whether the system is confusing two classes (i.e. commonly mislabelling one as another).

  5. Classification: performance measurement. Classification measures:
• Accuracy = (TP + TN) / (TP + TN + FP + FN): what proportion of cases and controls are correctly classified?
• Sensitivity = TP / (TP + FN): what proportion of cases are correctly classified?
• Specificity = TN / (TN + FP): what proportion of controls are correctly classified?
• Balanced Accuracy = (Sensitivity + Specificity) / 2
TP = True Positive, TN = True Negative, FP = False Positive, FN = False Negative.
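As an illustration (not part of the original slides), a minimal Python sketch computing these measures from the four confusion-matrix counts; the function and variable names are assumptions:

```python
def classification_measures(tp, tn, fp, fn):
    """Compute standard measures from confusion-matrix counts."""
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    sensitivity = tp / (tp + fn)        # proportion of cases correctly classified
    specificity = tn / (tn + fp)        # proportion of controls correctly classified
    balanced_accuracy = (sensitivity + specificity) / 2
    return accuracy, sensitivity, specificity, balanced_accuracy

# Example with hypothetical counts: 80 TP, 90 TN, 10 FP, 20 FN
print(classification_measures(80, 90, 10, 20))
```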

  6. Classification: performance measurement. Receiver Operating Characteristic (ROC) curve (figure).

  7. Classification: validation. Cross-validation (a sketch of both schemes follows this list):
• Leave One Out Cross Validation (LOOCV)
  – Better for small datasets
  – Unbiased estimate of prediction error
• n-fold CV (e.g. 5-fold or 10-fold CV)
  – Better for larger datasets
  – Biased estimate of prediction error
  – Lower variance
  – May need to repeat several times and average the results
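A minimal sketch (not from the slides) of both validation schemes; scikit-learn and a decision tree base classifier are assumptions, and any classifier with a fit/predict interface could be substituted:

```python
import numpy as np
from sklearn.model_selection import LeaveOneOut, KFold, cross_val_score
from sklearn.tree import DecisionTreeClassifier

# Toy data: 100 samples, 5 discrete features (placeholder for SNP genotypes)
rng = np.random.default_rng(0)
X = rng.integers(0, 3, size=(100, 5))
y = rng.integers(0, 2, size=100)

clf = DecisionTreeClassifier(random_state=0)

# LOOCV: n folds of size 1; nearly unbiased error estimate, but high variance
loo_scores = cross_val_score(clf, X, y, cv=LeaveOneOut())

# 10-fold CV: cheaper, slightly biased, lower variance; repeat and average if needed
kf_scores = cross_val_score(clf, X, y, cv=KFold(n_splits=10, shuffle=True, random_state=0))

print("LOOCV accuracy:", loo_scores.mean())
print("10-fold accuracy:", kf_scores.mean())
```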

  8. Classification: filters. Statistic filters (a sketch of the first two follows this list):
• Minor allele frequency (e.g. > 0.1)
• LD (e.g. r² < 0.9)
• Chi-square
• Information gain
• Interaction information
• Gain ratio
• Principal components
• ReliefF
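As a hedged illustration (not from the slides), a sketch of the first two statistic filters applied to a genotype matrix; the 0/1/2 minor-allele coding, the thresholds and the function names are assumptions:

```python
import numpy as np
from scipy.stats import chi2_contingency

def minor_allele_frequency(genotypes):
    """MAF for one SNP coded as minor-allele counts 0/1/2."""
    p = genotypes.mean() / 2.0           # frequency of the counted allele
    return min(p, 1.0 - p)

def chi_square_p_value(genotypes, status):
    """Chi-square test of the 3x2 genotype-by-status contingency table."""
    table = np.zeros((3, 2))
    for g, s in zip(genotypes, status):
        table[g, s] += 1
    _, p_value, _, _ = chi2_contingency(table)
    return p_value

# Keep SNPs with MAF > 0.1 and chi-square p-value < 0.05 (illustrative thresholds)
rng = np.random.default_rng(1)
X = rng.integers(0, 3, size=(200, 10))   # 200 individuals, 10 SNPs
y = rng.integers(0, 2, size=200)         # case/control status
kept = [j for j in range(X.shape[1])
        if minor_allele_frequency(X[:, j]) > 0.1
        and chi_square_p_value(X[:, j], y) < 0.05]
print("SNPs passing the filters:", kept)
```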

  9. Classification: filters. Knowledge filters:
• Prior statistical results
• Prior experimental results
• Biochemical pathways
• Gene Ontology
• Protein-protein interactions

  10. Classification: wrappers. Many search methods:
• Exhaustive
• Hill climbing
  – Neural networks (NN)
  – Simulated annealing (SA)
• Beam
  – Evolutionary algorithms (EA)
  – Genetic algorithms (GA)
  – Genetic programming (GP)
  – Estimation of Distribution Algorithms (EDA)

  11. Classification: wrappers. Example of GENN: Grammatical Evolution to optimize Neural Networks (figure: the GE cycle over generations of NN with recombination, duplication and mutation; GE parameters such as mutation rate and maximum number of generations; classification error and prediction error).
• Step 1: Randomly generate an initial generation of neural networks.
• Step 2: Split the data into 10 parts; 9/10 are used for training.
• Step 3: Select the n best neural networks.
• Step 4 (GE part 1): Evaluate the performance of the networks on the training data; rank them by error rate and select the best networks.
• Step 5 (GE part 2): Among the networks selected in step 4, simulate recombination, duplication and mutation events to create a new generation of networks.
• Step 6: Evaluate the final network on the remaining 1/10 of the data and measure its ability to classify affected and unaffected individuals.

  12. Available data mining packages: Orange (http://www.ailab.si/orange/) and Weka (http://www.cs.waikato.ac.nz/ml/).

  13. Ensemble Learning & Random Forest

  14. Machine Learning Methods
• Decision trees
• Linear regression
• Neural networks
• k-nearest neighbour
• Naïve Bayesian classifiers
• Support Vector Machines
• Ensemble Learning Methods: Bagging, Boosting, Random Forests & RuleFit
• and many more ...

  15. Ensemble Learning Methods. Ensemble learning refers to a collection of methods that learn a target function by training a number of individual learners and combining their predictions.

  16. Ensemble Learning Methods
• Accuracy: a more reliable mapping can be obtained by combining the output of multiple "experts".
• Efficiency: a complex problem can be decomposed into multiple sub-problems that are easier to understand and solve (divide-and-conquer approach), e.g. mixture of experts, ensemble feature selection.
• There is no single model that works for all pattern recognition problems (no free lunch theorem).
"To solve really hard problems, we'll have to use several different representations... It is time to stop arguing over which type of pattern-classification technique is best... Instead we should work at a higher level of organization and discover how to build managerial systems to exploit the different virtues and evade the different limitations of each of these ways of comparing things." (Minsky, 1991)

  17. Ensemble Learning Methods
How to generate base classifiers? Generation strategies:
• Decision tree learning: ID3, C4.5 & CART
• Instance-based learning: k-nearest neighbour
• Bayesian classification: Naïve Bayes
• Neural networks
• Regression analysis
• Clustering, etc.
How to integrate them? Integration strategies:
• BAGGing = Bootstrap AGGregation (Breiman, 1996)
• Boosting (Schapire and Singer, 1998)
• Random Forests (Breiman, 2001)

  18. Ensemble Learning Methods. Bootstrap aggregating (bagging) is a meta-algorithm that improves machine learning of classification and regression models in terms of stability and accuracy. Bagging generates m sub-samples from a standard training set D by sampling examples uniformly with replacement (bootstrap samples). The m models are fitted, one per bootstrap sample, and combined by averaging their outputs (for regression) or by voting (for classification).
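A minimal bagging sketch (an illustration, not the slides' code), assuming scikit-learn decision trees as the base learner and majority voting over binary 0/1 labels:

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def bagging_fit(X, y, m=25, random_state=0):
    """Fit m trees, each on a bootstrap sample of (X, y)."""
    rng = np.random.default_rng(random_state)
    models = []
    n = len(y)
    for _ in range(m):
        idx = rng.integers(0, n, size=n)          # sample n indices with replacement
        models.append(DecisionTreeClassifier().fit(X[idx], y[idx]))
    return models

def bagging_predict(models, X):
    """Combine the m models by majority vote (binary 0/1 labels)."""
    votes = np.stack([model.predict(X) for model in models])   # shape (m, n_samples)
    return (votes.mean(axis=0) >= 0.5).astype(int)
```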

  19. Ensemble Learning Methods. Boosting is a machine learning meta-algorithm for performing supervised learning. Most boosting algorithms consist of iteratively learning weak classifiers with respect to a distribution and adding them to a final strong classifier. After a weak learner is added, the data are reweighted: examples that are misclassified gain weight and examples that are classified correctly lose weight.
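As a hedged illustration of this reweighting idea, an AdaBoost-style sketch; the use of scikit-learn decision stumps as the weak learner and the -1/+1 label coding are assumptions:

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def adaboost_fit(X, y, rounds=20):
    """y must be coded as -1/+1. Returns the list of stumps and their weights."""
    n = len(y)
    w = np.full(n, 1.0 / n)                       # uniform initial weights
    stumps, alphas = [], []
    for _ in range(rounds):
        stump = DecisionTreeClassifier(max_depth=1).fit(X, y, sample_weight=w)
        pred = stump.predict(X)
        err = np.clip(np.sum(w * (pred != y)) / np.sum(w), 1e-10, 1 - 1e-10)
        alpha = 0.5 * np.log((1 - err) / err)     # weight of this weak learner
        w *= np.exp(-alpha * y * pred)            # misclassified examples gain weight
        w /= w.sum()
        stumps.append(stump)
        alphas.append(alpha)
    return stumps, alphas

def adaboost_predict(stumps, alphas, X):
    """Weighted vote of the weak learners."""
    score = sum(a * s.predict(X) for s, a in zip(stumps, alphas))
    return np.sign(score)
```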

  20. Random Forest (figure: trees 1, 2, 3, ..., i each vote Yes/No; the final classification is based on the votes from all N trees).

  21. Random Forest. Each tree is constructed using the following algorithm, where the number of cases is N and the number of variables in the classifier is M (a sketch follows this list):
• Choose the number m of input variables to be used to determine the decision at a node of the tree; m should be much less than M.
• Choose a training set for this tree by sampling N times with replacement from all N available training cases (i.e. take a bootstrap sample). Use the remaining cases to estimate the error of the tree by predicting their classes.
• For each node of the tree, randomly choose m variables on which to base the decision at that node, and calculate the best split based on these m variables in the training set.
• Each tree is fully grown and not pruned (as may be done when constructing a normal tree classifier).
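A minimal sketch of this per-tree procedure (an illustration, assuming scikit-learn trees; max_features implements the random choice of m variables at each node, and m = sqrt(M) is an assumed default):

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def random_forest_fit(X, y, n_trees=100, m=None, random_state=0):
    """Grow n_trees unpruned trees, each on a bootstrap sample,
    considering only m randomly chosen variables at every node."""
    rng = np.random.default_rng(random_state)
    n, M = X.shape
    m = m or max(1, int(np.sqrt(M)))              # m should be much less than M
    forest = []
    for _ in range(n_trees):
        idx = rng.integers(0, n, size=n)          # bootstrap sample of size N
        tree = DecisionTreeClassifier(max_features=m,
                                      random_state=int(rng.integers(1 << 30)))
        tree.fit(X[idx], y[idx])                  # fully grown, no pruning
        forest.append(tree)
    return forest

def random_forest_predict(forest, X):
    """Majority vote over all trees (binary 0/1 labels)."""
    votes = np.stack([t.predict(X) for t in forest])
    return (votes.mean(axis=0) >= 0.5).astype(int)
```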

  22. Random Forest
• Its accuracy is as good as AdaBoost and sometimes better.
• It is relatively robust to outliers and noise.
• It is faster than bagging or boosting.
• It gives useful internal estimates of error, strength, correlation and variable importance.
• It is simple and can be easily parallelized.

  23. Random Forest (figure: Heidema 2006, BMC Genetics).

  24. Use of Entropy-based Methods

  25. Information Theory: Shannon Entropy. Probability = uncertainty about a state; entropy = uncertainty about the system, H(X) = -Σ p_i log(p_i). (B.A. McKinney et al., Evaporative cooling feature selection for genotypic data involving interactions, Bioinformatics.)

  26. Information Theory: Surprisal or self-entropy of event i: s_i = -log(p_i). Example for a system X with two possible states such that p1 = 10^-5 (s1 = 5, using log base 10) and p2 ≈ 1 (s2 ≈ 0): then H(X) ≈ 0. If there is no uncertainty about a system, its entropy is 0.
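A small sketch (illustrative, not from the slides) computing surprisal and Shannon entropy for this two-state example; base-10 logarithms are assumed so that s1 = 5 for p1 = 10^-5:

```python
import numpy as np

def surprisal(p, base=10):
    """Self-entropy of an event with probability p."""
    return -np.log(p) / np.log(base)

def shannon_entropy(probs, base=10):
    """H(X) = -sum_i p_i log(p_i), ignoring zero-probability states."""
    probs = np.asarray(probs, dtype=float)
    probs = probs[probs > 0]
    return float(-(probs * np.log(probs) / np.log(base)).sum())

p1 = 1e-5
p2 = 1 - p1
print(surprisal(p1))              # ~5
print(surprisal(p2))              # ~0
print(shannon_entropy([p1, p2]))  # ~0: almost no uncertainty about the system
```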

  27. Information Theory: Maximum Entropy Principle (Jaynes, Physical Review 106, 620, 1957). The least biased probability distribution is the one that maximizes the information entropy subject to the prior information; the least biased (maximum entropy) distribution is uniform. Note that entropy is not disorder:
Sequence 1: 1111100000, p1 = 0.5, p0 = 0.5, H(sequence 1) = log(2)
Sequence 2: 1100010110, p1 = 0.5, p0 = 0.5, H(sequence 2) = log(2)

  28. Information Theory: Mutual Information (correlation). Information = removal of uncertainty. Mutual information, usually called Information Gain, is the uncertainty removed because of the correlation between SNP A and SNP B (figure: overlap between the uncertainty from SNP A and the uncertainty from SNP B). (B.A. McKinney et al., Evaporative cooling feature selection for genotypic data involving interactions, Bioinformatics.)

  29. Information Theory Information Gain • Consider two attributes, A and B (two SNPs), and a class label C (disease status).

  30. Information Theory: Information Gain. A = SNP1, B = SNP2, C = disease status (a sketch of this computation follows this list):
• If IG(ABC) > 0: evidence for an attribute interaction.
• If IG(ABC) < 0: the information between A and B is redundant.
• If IG(ABC) = 0: evidence of conditional independence or a mixture of synergy and redundancy.
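A hedged sketch of this quantity, assuming the usual decomposition IG(A;B;C) = I(A,B;C) - I(A;C) - I(B;C), where I(.;.) is mutual information and (A,B) is the joint (Cartesian-product) attribute; the 0/1/2 genotype and 0/1 status coding are assumptions:

```python
import numpy as np

def entropy(labels):
    """Shannon entropy (natural log) of a discrete array."""
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log(p)).sum())

def joint(x, y):
    """Encode the Cartesian product of two discrete arrays as one array."""
    return np.array([f"{a}|{b}" for a, b in zip(x, y)])

def mutual_information(x, c):
    """I(X;C) = H(X) + H(C) - H(X,C)."""
    return entropy(x) + entropy(c) - entropy(joint(x, c))

def interaction_information(a, b, c):
    """IG(A;B;C) = I(A,B;C) - I(A;C) - I(B;C)."""
    return (mutual_information(joint(a, b), c)
            - mutual_information(a, c)
            - mutual_information(b, c))

# Example with random genotypes (0/1/2) and case/control status (0/1)
rng = np.random.default_rng(2)
snp1 = rng.integers(0, 3, 500)
snp2 = rng.integers(0, 3, 500)
status = rng.integers(0, 2, 500)
print(interaction_information(snp1, snp2, status))  # >0 synergy, <0 redundancy, ~0 independence
```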

  31. Information Theory: Attribute Selection based on Entropy
• Entropy-based IG is estimated for each individual attribute (i.e. main effects) and for each pairwise combination of attributes (i.e. interaction effects).
• Pairs of attributes are sorted, and those with the highest IG, or percentage of entropy in the class removed, are selected for further consideration.

  32. Information Theory: example application for measuring sequencing quality (figure: an alignment of RNA sequences such as CCAGAGGCCCAACUGGUAAACGGGC, CCG-AAGCUCAACGGGAUAAUGAGC, ...). Kullback-Leibler entropy: D(p||q) = Σ p_i log(p_i / q_i), where p_i is the observed probability and q_i the prior probability (under H0, q_i = 0.25 for each nucleotide). This measures the relative uncertainty on base U for one specific position of the alignment.
References: Gorodkin et al., Comput. Appl. Biosci. 13(6):583-586, 1997; Tuerk et al., PNAS 89:6988-6992, 1992; Ahola et al., A statistical score for assessing the quality of multiple sequence alignments, Bioinformatics; D'haeseleer, What are DNA sequence motifs?, Nature Biotechnology.
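A small sketch of this per-column score (illustrative only; log base 2, a uniform prior of 0.25 per nucleotide, and gap handling are assumptions):

```python
import numpy as np

def kl_per_column(column, prior=0.25, alphabet="ACGU"):
    """D(p||q) = sum_i p_i log2(p_i / q_i) for one alignment column,
    with q_i = prior for every nucleotide; gaps '-' are ignored."""
    bases = [b for b in column if b in alphabet]
    if not bases:
        return 0.0
    d = 0.0
    for nt in alphabet:
        p = bases.count(nt) / len(bases)
        if p > 0:
            d += p * np.log2(p / prior)
    return d

alignment = [
    "CCAGAGGCCCAACUGGUAAACGGGC",
    "CCG-AAGCUCAACGGGAUAAUGAGC",
    "CCG-AAGCCGAACGGGAAAACCGGC",
]
scores = [kl_per_column([seq[j] for seq in alignment]) for j in range(len(alignment[0]))]
print(np.round(scores, 2))   # high values = conserved (low-uncertainty) positions
```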

  33. MDR: Multifactor Dimensionality Reduction

  34. MDR: Steps 1 & 2
• Step 1: partition the data into some number of equal parts for cross-validation.
• Step 2: a set of N genetic and/or discrete environmental factors is selected from the list of all factors.

  35. MDR: Step 3
• The N factors and their multifactor classes or cells are represented in N-dimensional space.
• The ratio of the number of cases to the number of controls is evaluated within each multifactor cell.

  36. MDR: Step 4
• Each multifactor cell in N-dimensional space is labeled high-risk if the ratio meets or exceeds some threshold T (e.g. T = 1.0) and low-risk otherwise.
• Cells labeled high-risk form one group and cells labeled low-risk form another, which reduces the N-dimensional model to one dimension (see the sketch below).
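A hedged sketch of this labeling step for N = 2 SNPs; the 0/1/2 genotype coding, numpy-array inputs, and a balanced case/control design (so T = 1.0 on the case:control ratio) are assumptions:

```python
import numpy as np
from collections import Counter

def mdr_high_risk_cells(snp_a, snp_b, status, threshold=1.0):
    """Label each 2-SNP genotype cell high-risk (1) or low-risk (0)
    by comparing its case:control ratio to the threshold."""
    cases = Counter(zip(snp_a[status == 1], snp_b[status == 1]))
    controls = Counter(zip(snp_a[status == 0], snp_b[status == 0]))
    labels = {}
    for cell in set(cases) | set(controls):
        ratio = cases[cell] / max(controls[cell], 1)   # avoid division by zero
        labels[cell] = int(ratio >= threshold)
    return labels

def mdr_predict(labels, snp_a, snp_b):
    """Collapse the 2-D genotype space to a single high/low-risk attribute."""
    return np.array([labels.get((a, b), 0) for a, b in zip(snp_a, snp_b)])
```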

  37. MDR: Steps 5 & 6
• Step 5: all possible combinations of N factors are evaluated for their ability to classify affected and unaffected individuals in the training data, and the best N-factor model is selected.
• Step 6: the independent test data from the cross-validation are used to estimate the prediction error of the best model selected.

  38. MDR overview

  39. MDR: final steps
• Steps 1 through 6 are repeated for each possible cross-validation interval.
• The final step: determine which multifactor levels (e.g. genotypes) are high-risk and which are low-risk using the entire dataset.

  40. MDR: New features. Interpretation: Interaction Graphs
• Comprised of a node for each attribute, with pairwise connections between them.
• Each node is labeled with the percentage of entropy removed (i.e. IG) by that attribute.
• Each connection is labeled with the percentage of entropy removed for each pairwise Cartesian product of attributes.
IG > 0: evidence for an attribute interaction. IG < 0: the information between SNP1 and SNP2 is redundant. IG = 0: evidence of conditional independence or a mixture of synergy and redundancy.

  41. MDR: New features. Interpretation: Interaction Graphs (figure: SNP1 and SNP2 nodes labeled with their attribute effects, connected to the Class by the attribute interaction I(SNP1;SNP2;Class) and to each other by their attribute correlation).

  42. MDR: New features. Interpretation: Dendrograms. Hierarchical clustering is used to build a dendrogram that places strongly interacting attributes close together at the leaves of the tree.

  43. MDR: New features

  44. MDR: New features
