
  1. CZ5225: Modeling and Simulation in Biology. Lecture 8: Microarray disease predictor - gene selection by feature selection methods. Prof. Chen Yu Zong. Tel: 6874-6877. Email: phacyz@nus.edu.sg. http://bidd.nus.edu.sg. Room 07-24, level 8, S17, National University of Singapore

  2. Gene selection? • All classification methods we have studied so far use all genes/features • Molecular biologists/oncologists seem to be convinced that only a small subset of genes is responsible for particular biological properties, so they want the genes that are most important in discriminating disease types and treatment outcomes • Practical reasons: a clinical device with thousands of genes is not financially practical

  3. Disease Example: Childhood Leukemia • Cancer in the cells of the immune system • Approx. 35 new cases in Denmark every year • 50 years ago – all patients died • Today – approx. 78% are cured • Risk Groups: • Standard • Intermediate • High • Very high • Extra high • Treatment: • Chemotherapy • Bone marrow transplantation • Radiation

  4. Risk Classification Today • Patient data: clinical data, immunophenotyping, morphology, genetic measurements (microarray technology) • Prognostic factors: immunophenotype, age, leukocyte count, number of chromosomes, translocations, treatment response • Risk group: standard, intermediate, high, very high, extra high

  5. Study and Diagnosis of Childhood Leukemia • Diagnostic bone marrow samples from leukemia patients • Platform: Affymetrix Focus Array • 8793 human genes • Immunophenotype • 18 patients with precursor B immunophenotype • 17 patients with T immunophenotype • Outcome 5 years from diagnosis • 11 patients with relapse • 18 patients in complete remission

  6. Problem: Too much data!

Gene          Pat1  Pat2  Pat3  Pat4  Pat5  Pat6  Pat7  Pat8  Pat9
209619_at     7758  4705  5342  7443  8747  4933  7950  5031  5293
32541_at       280   387   392   238   385   329   337   163   225
206398_s_at   1050   835  1268  1723  1377   804  1846  1180   252
219281_at      391   593   298   265   491   517   334   387   285
207857_at     1425   977  2027  1184   939   814   658   593   659
211338_at       37    27    28    38    33    16    36    23    31
213539_at      124   197   454   116   162   113    97    97   160
221497_x_at    120    86   175    99   115    80    83   119    66
213958_at      179   225   449   174   185   203   186   185   157
210835_s_at    203   144   197   314   250   353   173   285   325
209199_s_at    758  1234   833  1449   769  1110   987   638  1133
217979_at      570   563   972   796   869   494   673  1013   665
201015_s_at    533   343   325   270   691   460   563   321   261
203332_s_at    649   354   494   554   710   455   748   392   418
204670_x_at   5577  3216  5323  4423  5771  3374  4328  3515  2072
208788_at      648   327  1057   746   541   270   361   774   590
210784_x_at    142   151   144   173   148   145   131   146   147
204319_s_at    298   172   200   298   196   104   144   110   150
205049_s_at   3294  1351  2080  2066  3726  1396  2244  2142  1248
202114_at      833   674   733  1298   862   371   886   501   734
213792_s_at    646   375   370   436   738   497   546   406   376
203932_at     1977  1016  2436  1856  1917   822  1189  1092   623
203963_at       97    63    77   136    85    74    91    61    66
203978_at      315   279   221   260   227   222   232   141   123
203753_at     1468  1105   381  1154   980  1419  1253   554  1045
204891_s_at     78    71   152    74   127    57    66   153    70
209365_s_at    472   519   365   349   756   528   637   828   720
209604_s_at    772    74   130   216   108   311    80   235   177
211005_at       49    58   129    70    56    77    61    61    75
219686_at      694   342   345   502   960   403   535   513   258
38521_at       775   604   305   563   542   543   725   587   406

  7. So, what do we do? • Reduction of dimensions • Principal Component Analysis (PCA) • Feature selection (gene selection) • Significant genes: t-test • Selection of a limited number of genes

  8. Principal Component Analysis (PCA) • Used for visualization of complex data • Developed to capture as much of the variation in data as possible • Generic features of principal components • summary variables • linear combinations of the original variables • uncorrelated with each other • capture as much of the original variance as possible

  9. Principal components • First principal component (PC1): the direction along which there is the greatest variation • Second principal component (PC2): the direction with the maximum variation left in the data, orthogonal to the direction (i.e. vector) of PC1 • Third principal component (PC3): the direction with the maximal variation left in the data, orthogonal to the plane of PC1 and PC2 (less frequently used)

  10. Example: 3 dimensions => 2 dimensions

  11. PCA - Example

  12. PCA on all Genes: Leukemia data, precursor B and T • Plot of 34 patients, 8793 dimensions (genes) reduced to 2
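
A minimal sketch of this dimensionality reduction, assuming scikit-learn's PCA and a synthetic placeholder matrix in place of the real 34-patient expression data; the variable names and sizes are illustrative, not from the original study.

```python
# Sketch: PCA on a (patients x genes) expression matrix, reducing
# thousands of genes to 2 principal components for plotting.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
X = rng.normal(size=(34, 8793))        # 34 patients x 8793 genes (synthetic placeholder)

pca = PCA(n_components=2)
scores = pca.fit_transform(X)          # shape (34, 2): PC1 and PC2 coordinates per patient
print(pca.explained_variance_ratio_)   # fraction of the variance captured by PC1 and PC2
```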

  13. Ranking of PCs and Gene Selection

  14. The t-test method • Compares the means (μ1 and μ2) of two data sets • tells us if they can be assumed to be equal • Can be used to identify significant genes • i.e. those that change their expression a lot!
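
A hedged sketch of per-gene t-testing, assuming SciPy's two-sample t-test and synthetic data; the group sizes mirror the immunophenotype groups on slide 5, and taking the top 100 genes anticipates the next slide. Selecting by smallest p-value is an illustrative choice.

```python
# Sketch: rank genes by a two-sample t-test between two patient groups
# and keep the most significant ones.
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(0)
X = rng.normal(size=(35, 8793))            # patients x genes (synthetic placeholder)
y = np.array([0] * 18 + [1] * 17)          # 0 = precursor B, 1 = T immunophenotype

t_stat, p_val = ttest_ind(X[y == 0], X[y == 1], axis=0, equal_var=False)
top100 = np.argsort(p_val)[:100]           # indices of the 100 most significant genes
X_top = X[:, top100]                       # reduced matrix, e.g. for the PCA plot
```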

  15. PCA on 100 top significant genes based on t-test • Plot of 34 patients, 100 dimensions (genes) reduced to 2

  16. The next question: Can we classify new patients? • Plot of 34 patients, 100 dimensions (genes) reduced to 2, with a new patient (P99) whose class is unknown
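
One way to answer this, sketched under assumptions: fit the gene selection, PCA and a classifier on the training patients, then push the new patient through the same pipeline. The pipeline pieces (k=100 genes, LDA as the classifier) and the synthetic data are illustrative choices, not taken from the slides.

```python
# Sketch: classify a new, unlabelled patient ("P99") with a pipeline
# fitted only on the training patients.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)
X_train = rng.normal(size=(34, 8793))          # training patients (synthetic placeholder)
y_train = np.array([0] * 17 + [1] * 17)        # two immunophenotype classes
x_new = rng.normal(size=(1, 8793))             # the new patient, e.g. "P99"

model = make_pipeline(SelectKBest(f_classif, k=100),   # keep 100 informative genes
                      PCA(n_components=2),              # project to 2 dimensions
                      LinearDiscriminantAnalysis())     # classify in the reduced space
model.fit(X_train, y_train)
print(model.predict(x_new))                    # predicted class for the new patient
```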

  17. Feature Selection Problem Statement • A process of selecting a minimum subset of features that is sufficient to construct a hypothesis consistent with the training examples (Almuallim and Dietterich, 1991) • Selecting a minimum subset G such that P(C|G) is equal or as close as possible to P(C|F) (Koller and Sahami, 1996)

  18. Feature Selection Strategies • Wrapper methods • Relying on a predetermined classification algorithm • Using predictive accuracy as goodness measure • High accuracy, computationally expensive • Filter methods • Separating feature selection from classifier learning • Relying on general characteristics of data (distance, correlation, consistency) • No bias towards any learning algorithm, fast • Embedded methods • Jointly or simultaneously train both a classifier and a feature subset by optimizing an objective function that jointly rewards accuracy of classification and penalizes use of more features.

  19. Feature Selection Strategies • Filter methods • Features (genes) are scored according to their evidence of predictive power and then ranked. The top s genes with the highest scores are selected and used by the classifier. • Scores: t-statistics, F-statistics, signal-noise ratio, … • The number of features selected, s, is then determined by cross-validation. • Advantage: fast and easy to interpret.
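
A small sketch of this filter strategy, assuming scikit-learn's F-statistic scorer, a linear SVM, and synthetic data; the candidate values of s are arbitrary. Putting the gene scoring inside the cross-validation pipeline means selection is redone within each fold, which avoids selection bias when estimating accuracy.

```python
# Sketch: filter method - score genes, keep the top s, pick s by cross-validation.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(35, 8793))                     # patients x genes (synthetic)
y = np.array([0] * 18 + [1] * 17)

for s in (10, 50, 100, 500):
    clf = make_pipeline(SelectKBest(f_classif, k=s), SVC(kernel="linear"))
    acc = cross_val_score(clf, X, y, cv=5).mean()   # gene scoring redone in each fold
    print(s, round(acc, 3))                         # pick the s with the best accuracy
```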

  20. Feature Selection Strategies • Problems of filter methods • Genes are considered independently. • Redundant genes may be included. • Genes that are jointly strongly discriminative but individually make only a weak contribution will be ignored. • The filtering procedure is independent of the classification method.

  21. Feature Selection • Step-wise variable selection (one feature vs. N features): starting from N features, the selection proceeds over N steps and ends with n* < N effective variables modeling the classification function.

  22. Feature Selection • Step-wise selection of the features: at each step, the remaining features are ranked and the lowest-ranked features are discarded.

  23. Feature Selection Strategies • Wrapper methods • Iterative search: many feature subsets are scored based on classification performance and the best one is used. • Subset selection: forward selection, backward selection, and their combinations. • The problem is very similar to variable selection in regression.
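
A hedged sketch of one such wrapper, greedy forward selection, where every candidate subset is scored by the cross-validated accuracy of the classifier itself. scikit-learn's SequentialFeatureSelector, the linear SVM, and the deliberately small synthetic matrix (wrappers scale badly in the number of genes) are illustrative assumptions.

```python
# Sketch: wrapper method - greedy forward selection scored by CV accuracy.
import numpy as np
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(35, 50))                  # only 50 genes: wrappers are expensive
y = np.array([0] * 18 + [1] * 17)

sfs = SequentialFeatureSelector(SVC(kernel="linear"),
                                n_features_to_select=5,
                                direction="forward",   # or "backward"
                                cv=5)
sfs.fit(X, y)
print(np.flatnonzero(sfs.get_support()))       # indices of the 5 selected genes
```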

  24. Feature Selection Strategies • Wrapper methods • Analogous to variable selection in regression. • Exhaustive searching is impossible, so greedy algorithms are used instead. • Confounding can happen in both scenarios. In regression, it is usually recommended not to include highly correlated covariates in the analysis to avoid confounding, but it is impossible to avoid confounding in feature selection for microarray classification.

  25. Feature Selection Strategies • Problems of wrapper methods • Computationally expensive: for each feature subset considered, the classifier is built and evaluated. • Exhaustive searching is impossible. Greedy search only. • Easy to overfit.

  26. Feature Selection Strategies • Embedded methods • Attempt to jointly or simultaneously train both a classifier and a feature subset. • Often optimize an objective function that jointly rewards accuracy of classification and penalizes use of more features. • Intuitively appealing • Examples: nearest shrunken centroids, CART and other tree-based algorithms.
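
A minimal sketch of the first example named above, nearest shrunken centroids, which shrinks per-gene class centroids toward the overall centroid so that many genes drop out of the classifier entirely. scikit-learn's NearestCentroid with a shrink_threshold is used as a stand-in for the original PAM implementation; the threshold value and synthetic data are assumptions.

```python
# Sketch: embedded method - nearest shrunken centroids.
import numpy as np
from sklearn.neighbors import NearestCentroid

rng = np.random.default_rng(0)
X = rng.normal(size=(35, 8793))                # patients x genes (synthetic placeholder)
y = np.array([0] * 18 + [1] * 17)

clf = NearestCentroid(shrink_threshold=0.5)    # larger threshold => fewer genes retained
clf.fit(X, y)                                  # fitting and gene "selection" happen together
print(clf.predict(X[:3]))                      # predicted classes for the first 3 patients
```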

  27. Feature Selection Strategies • Example of wrapper methods • Recursive Feature Elimination (RFE) • 1. Train the classifier with an SVM (or LDA). • 2. Compute the ranking criterion for all features. • 3. Remove the feature with the smallest ranking criterion. • 4. Repeat steps 1-3.
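
A hedged sketch of these four steps using scikit-learn's RFE with a linear SVM, whose per-gene weight magnitudes serve as the ranking criterion; the target of 100 genes, the step size, and the synthetic data are illustrative assumptions.

```python
# Sketch: Recursive Feature Elimination (RFE) with a linear SVM.
import numpy as np
from sklearn.feature_selection import RFE
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(35, 8793))                # patients x genes (synthetic placeholder)
y = np.array([0] * 18 + [1] * 17)

rfe = RFE(estimator=SVC(kernel="linear"),      # ranking criterion: |w_j| of the SVM
          n_features_to_select=100,
          step=0.1)                            # drop 10% of the remaining genes per round
rfe.fit(X, y)                                  # retrains the SVM after each elimination
print(np.flatnonzero(rfe.support_)[:10])       # some of the 100 surviving genes
```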

  28. Feature Ranking • Weighting and ranking individual features • Selecting top-ranked ones for feature selection • Advantages • Efficient: O(N) in terms of dimensionality N • Easy to implement • Disadvantages • Hard to determine the threshold • Unable to consider correlation between features

  29. Leave-one-out method

  30. Basic idea • Use the leave-one-out (LOO) criterion, or an upper bound on LOO, to select features by searching over all possible subsets of n features for the ones that minimize the criterion. • When such a search is impossible because there are too many possibilities, scale each feature by a real-valued variable and compute this scaling via gradient descent on the leave-one-out bound. One can then keep the features corresponding to the largest scaling variables.
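
A small sketch of the criterion itself: the exact leave-one-out error of a classifier on one candidate gene subset, which is what the search above would minimize over subsets. The linear SVM, the arbitrary subset (first 100 genes), and the synthetic data are assumptions.

```python
# Sketch: exact leave-one-out (LOO) error for one candidate gene subset.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import LeaveOneOut, cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(35, 8793))                # patients x genes (synthetic placeholder)
y = np.array([0] * 18 + [1] * 17)

subset = np.arange(100)                        # one candidate subset of genes
loo_acc = cross_val_score(SVC(kernel="linear"), X[:, subset], y,
                          cv=LeaveOneOut()).mean()
print(round(1 - loo_acc, 3))                   # LOO error to be minimized over subsets
```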

  31. Illustration • Rescale the features to minimize the LOO bound R^2/M^2. • (Figure: two 2-D feature configurations with axes x1 and x2, comparing R^2/M^2 = 1 with R^2/M^2 > 1; R is the radius of the sphere enclosing the data and M is the margin.)

  32. Three upper bounds on LOO • Radius-margin bound: simple to compute, continuous; very loose, but often tracks LOO well • Jaakkola-Haussler bound: somewhat tighter, simple to compute; discontinuous, so it needs to be smoothed; valid only for SVMs with no b term • Span bound: tight as a Britney Spears outfit; complicated to compute; discontinuous, so it needs to be smoothed
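
For reference, a hedged sketch of these three bounds in the form they are usually stated for an SVM trained on l examples; the notation (dual coefficients alpha_p, decision function f, kernel K, margin M = 1/||w||, enclosing-sphere radius R, span S_p, step function theta) is an assumption here, not copied from the formula slides that follow.

```latex
% Usual statements of the three LOO bounds (assumed notation, not verbatim
% from the lecture slides): \ell training examples, \alpha_p dual coefficients,
% f the trained decision function, K the kernel, M = 1/\|w\| the margin,
% R the radius of the smallest sphere enclosing the data, S_p the span of
% support vector x_p, and \theta the step function.
\begin{align*}
\text{Radius-margin:}\quad
  &\mathrm{LOO} \;\le\; \frac{4}{\ell}\,\frac{R^2}{M^2}
   \;=\; \frac{4}{\ell}\,R^2\,\lVert w\rVert^2 \\[4pt]
\text{Jaakkola-Haussler:}\quad
  &\mathrm{LOO} \;\le\; \frac{1}{\ell}\sum_{p=1}^{\ell}
   \theta\bigl(\alpha_p K(x_p,x_p) - y_p f(x_p)\bigr) \\[4pt]
\text{Span:}\quad
  &\mathrm{LOO} \;\le\; \frac{1}{\ell}\sum_{p=1}^{\ell}
   \theta\bigl(\alpha_p S_p^2 - y_p f(x_p)\bigr)
\end{align*}
```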

  33. Radius-margin bound

  34. Jaakkola-Haussler bound

  35. Span bound

  36. Classification function with gene selection • We add a scaling parameter s to the SVM (one component s_j per gene), which scales the genes; genes corresponding to small s_j are removed. The SVM function has the form:
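
A hedged reconstruction of that decision function, assuming the common feature-scaling formulation in which s * x denotes componentwise multiplication of the expression vector x by the scaling vector s; this is the standard form rather than a verbatim copy of the slide's formula.

```latex
% Scaled SVM decision function (assumed standard form; s * x is the
% componentwise product of the scaling vector s and the expression vector x).
\[
f(x) \;=\; \sum_{i=1}^{\ell} \alpha_i\, y_i\, K\!\bigl(s * x_i,\; s * x\bigr) \;+\; b
\]
% Genes j whose scaling factor s_j shrinks toward zero contribute little to
% the kernel and are effectively removed from the classifier.
```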
