
Robust Feature Selection by Mutual Information Distributions



  1. Robust Feature Selection by Mutual Information Distributions Marco Zaffalon & Marcus Hutter IDSIA Galleria 2, 6928 Manno (Lugano), Switzerland www.idsia.ch/~{zaffalon,marcus} {zaffalon,marcus}@idsia.ch

  2. Mutual Information (MI) • Consider two discrete random variables ı and ȷ, taking values in {1,…,r} and {1,…,s} • Their (in)dependence is often measured by MI • Also known as cross-entropy or information gain • Examples: inference of Bayesian nets and classification trees, selection of the variables relevant to the task at hand • The standard definition is recalled below
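For reference, the standard definition of MI in this setting; the notation π_ij for the joint chances and π_i+, π_+j for the marginals is assumed here rather than taken verbatim from the slides:

```latex
% Mutual information of two discrete random variables \imath, \jmath
% with joint chances \pi_{ij} (notation assumed, not from the slides)
I(\pi) \;=\; \sum_{i=1}^{r}\sum_{j=1}^{s} \pi_{ij}\,
       \log\frac{\pi_{ij}}{\pi_{i+}\,\pi_{+j}},
\qquad
\pi_{i+}=\sum_{j}\pi_{ij},\quad \pi_{+j}=\sum_{i}\pi_{ij}.
```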

  3. MI-Based Feature-Selection Filter (F) (Lewis, 1992) • Classification: predicting the class value given the values of the features • Features (or attributes) and class = random variables • Learning the rule ‘features → class’ from data • Goal of filters: removing irrelevant features, for more accurate predictions and simpler models • MI-based approach: remove a feature if the class does not depend on it, i.e. if their MI is zero • Or: remove the feature if its MI with the class falls below ε, an arbitrary threshold of relevance • A sketch of such a filter is given below
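A minimal sketch of filter F. It uses scikit-learn's mutual_info_classif purely for convenience (the slides use the empirical MI of slide 4 instead), and the threshold name `epsilon` and its default value are illustrative:

```python
# Minimal sketch of the MI-based filter F: keep a feature only if its
# estimated MI with the class exceeds an arbitrary relevance threshold.
import numpy as np
from sklearn.feature_selection import mutual_info_classif

def filter_f(X, y, epsilon=0.01):
    """Return the indices of the features whose MI with the class exceeds epsilon."""
    mi = mutual_info_classif(X, y, discrete_features=True)
    return np.flatnonzero(mi > epsilon)
```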

  4. Empirical Mutual Information: a common way to use MI in practice • Data → contingency table of counts n_ij (n = sample size) • Empirical (sample) probability: π̂_ij = n_ij / n • Empirical mutual information: I(π̂) • Problems of the empirical approach: is I(π̂) > 0 real, or due to random fluctuations in the finite sample? • How to know whether the estimate is reliable, e.g. by a confidence interval? • A small computational example is given below
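A small self-contained example of the empirical MI computed from a contingency table of counts; the variable names and the example table are illustrative:

```python
import numpy as np

def empirical_mi(counts):
    """Empirical mutual information I(pi_hat) from a contingency table.

    counts[i, j] = number of samples with feature value i and class value j.
    """
    n = counts.sum()
    p = counts / n                       # empirical joint probabilities
    pi_ = p.sum(axis=1, keepdims=True)   # feature marginal  pi_{i+}
    p_j = p.sum(axis=0, keepdims=True)   # class marginal    pi_{+j}
    with np.errstate(divide="ignore", invalid="ignore"):
        terms = p * np.log(p / (pi_ * p_j))
    return np.nansum(terms)              # 0 * log 0 treated as 0

# Even for independent variables, finite-sample fluctuations typically
# make this estimate strictly positive.
counts = np.array([[30, 10], [20, 40]])
print(empirical_mi(counts))
```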

  5. We Need the Distribution of MI • Bayesian approach: put a prior distribution on the unknown chances π_ij (e.g., a Dirichlet prior) • Posterior: the prior updated with the observed counts n_ij • Posterior probability density of MI: the distribution induced on I(π) by the posterior over π • How to compute it? By fitting a curve with the exact mean and an approximate variance (next slide) • The formal setup is sketched below
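In symbols, a sketch of the Bayesian setup; the Dirichlet parameterisation with prior weights α_ij is an assumption made here for concreteness:

```latex
% Dirichlet prior over the chances, updated by the counts n_{ij}
p(\pi) \;\propto\; \prod_{ij} \pi_{ij}^{\alpha_{ij}-1},
\qquad
p(\pi \mid n) \;\propto\; \prod_{ij} \pi_{ij}^{\,n_{ij}+\alpha_{ij}-1}.
% Posterior density of the mutual information I = I(\pi)
p(I \mid n) \;=\; \int \delta\bigl(I(\pi)-I\bigr)\, p(\pi \mid n)\, d\pi .
```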

  6. Mean and Variance of MI (Hutter, 2001; Wolpert & Wolf, 1995) • Exact mean of the MI posterior in closed form • Leading and next-to-leading order (NLO) terms for the variance • Computational complexity O(rs) • As fast as empirical MI • See the sketch below
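A minimal numerical sketch of these quantities, assuming a Dirichlet posterior with counts n_ij (data counts plus prior weights). The mean follows the cited closed form and the variance keeps only a leading-order term, so treat the code as illustrative rather than as the slide's full NLO formula:

```python
# Sketch: closed-form posterior mean of MI and a leading-order variance term.
import numpy as np
from scipy.special import digamma

def mi_posterior_mean_var(n):
    """n[i, j]: Dirichlet posterior counts (data counts + prior weights)."""
    n = np.asarray(n, dtype=float)
    ni = n.sum(axis=1, keepdims=True)   # row sums  n_{i+}
    nj = n.sum(axis=0, keepdims=True)   # column sums  n_{+j}
    N = n.sum()
    # Exact posterior mean, computable in O(rs) operations
    mean = (n * (digamma(n + 1) - digamma(ni + 1)
                 - digamma(nj + 1) + digamma(N + 1))).sum() / N
    # Leading-order variance term:  (K - J^2) / (N + 1)
    mask = n > 0
    log_term = np.zeros_like(n)
    log_term[mask] = np.log((n * N / (ni * nj))[mask])
    J = (n / N * log_term).sum()
    K = (n / N * log_term ** 2).sum()
    var = (K - J ** 2) / (N + 1)
    return mean, var
```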

  7. MI Density Example Graphs

  8. Robust Feature Selection • Filters: two new proposals • FF: include a feature iff p(I > ε | data) is large, i.e. include only if the feature is “proven” relevant • BF: exclude a feature iff p(I ≤ ε | data) is large, i.e. exclude only if the feature is “proven” irrelevant • Examples (diagram along the I axis: depending on where the MI posterior mass lies relative to ε, FF excludes and BF excludes, FF excludes but BF includes, or FF includes and BF includes) • A decision sketch based on the posterior mean and variance follows below
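A sketch of how the FF and BF decisions could be taken from the posterior mean and variance of slide 6, assuming a Gaussian approximation of the MI posterior; the threshold `epsilon` and the required probability `p_bar` are illustrative values, not taken from the slides:

```python
# FF / BF decisions under a Gaussian approximation of the MI posterior,
# using the mean/variance from mi_posterior_mean_var above.
from scipy.stats import norm

def ff_includes(mean, var, epsilon=0.01, p_bar=0.95):
    """FF: include the feature iff P(I > epsilon | data) >= p_bar."""
    return norm.sf(epsilon, loc=mean, scale=var ** 0.5) >= p_bar

def bf_excludes(mean, var, epsilon=0.01, p_bar=0.95):
    """BF: exclude the feature iff P(I <= epsilon | data) >= p_bar."""
    return norm.cdf(epsilon, loc=mean, scale=var ** 0.5) >= p_bar
```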

  9. Comparing the Filters • (Diagram: learning data → filter → naive Bayes → classification of the next test instance, which is stored with the learning data after classification) • Experimental set-up: filter (F, FF, or BF) + naive Bayes classifier • Sequential learning and testing: each instance is classified with the model learned from all preceding instances, then added to the learning data • Collected measures for each filter: average number of correct predictions (prediction accuracy) and average number of features used • A sketch of this loop is given below
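A sketch of the sequential learning-and-testing loop, assuming discrete feature data, a generic `select_features` filter (e.g. one of the functions above, returning a non-empty index set) and scikit-learn's CategoricalNB as the naive Bayes classifier; all of these choices are illustrative:

```python
# Classify instance k with a model trained on instances 0..k-1, record the
# result, then let instance k join the learning data for the next round.
import numpy as np
from sklearn.naive_bayes import CategoricalNB

def sequential_evaluation(X, y, select_features, warmup=10):
    correct, n_features = [], []
    for k in range(warmup, len(y)):
        keep = select_features(X[:k], y[:k])   # filter sees past data only
        # min_categories keeps later, unseen category values from breaking predict()
        nb = CategoricalNB(min_categories=X[:, keep].max(axis=0) + 1)
        nb.fit(X[:k][:, keep], y[:k])
        correct.append(nb.predict(X[k:k + 1, keep])[0] == y[k])
        n_features.append(len(keep))
    return np.mean(correct), np.mean(n_features)   # accuracy, avg. #features
```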

  10. Results on 10 Complete Datasets • Number of features used • Accuracies NOT significantly different • Exception: Chess & Spam with FF (see slide 12)

  11. Results on 10 Complete Datasets - ctd

  12. FF: Significantly Better Accuracies • Chess • Spam

  13. Extension to Incomplete Samples • MAR (missing at random) assumption • General case (missing features and class): EM + closed-form expressions • Missing features only: closed-form approximate expressions for the mean and variance • Complexity still O(rs) • New experiments on 5 data sets show similar behavior

  14. Conclusions • Expressions for several moments of the MI distribution are available • The distribution can be approximated well • Safer inferences at the same computational complexity as empirical MI • Why not use it? • Robust feature selection shows the power of the MI distribution • FF outperforms the traditional filter F • Many useful applications possible • Inference of Bayesian nets • Inference of classification trees • …
