Redundant Feature Elimination for Multi-Class Problems


Presentation Transcript


  1. Redundant Feature Elimination for Multi-Class Problems Annalisa Appice, Michelangelo Ceci Dipartimento di Informatica, Università degli Studi di Bari, Italy Simon Rawles, Peter Flach Department of Computer Science, University of Bristol, UK

  2. Redundant feature reduction • REFER: an efficient, scalable, logic-based method for eliminating Boolean features which are redundant for multi-class classifier learning. • Why? Size of hypothesis space, predictive performance, model comprehensibility. • Distinct from feature selection.

  3. Overview of this talk • Redundant feature reduction • What is feature redundancy? • Doing multi-class reduction • Related approaches • Theoretical and experimental results • Summary • Current and future work

  4. Example: redundancy of features. Each example has a fixed number of Boolean features and one of several class labels (‘multi-class’).

  5. Discriminating a against b True values in examples of class a make the feature better for distinguishing a from b in a classification rule.

  6. Discriminating a against b False values in examples of class b make the feature better for distinguishing a from b in a rule.

  7. Discriminating a against b: f2 covers f1, and f3 is useless, so f1 and f3 are redundant. Negated features are not automatically considered.

  8. More formally... For discriminating class a examples from class b, f covers g if Ta(g) ⊆ Ta(f) and Fb(g) ⊆ Fb(f), where Ta(f) is the set of class-a examples in which f is true and Fb(f) is the set of class-b examples in which f is false (a is the ‘positive class’ here). A feature is redundant if another feature covers it. In the example, Ta(f2) = {e1, e2} and Ta(f1) = {e1}; Fb(f2) = {e4, e5} and Fb(f1) = {e5}, so f2 covers f1.
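
In code, the coverage test and the pairwise redundancy filter might look like the following minimal Python sketch; the data layout and helper names are illustrative, not the authors' implementation. Examples are assumed to be dicts mapping feature names to Boolean values.

    def true_set(feature, examples):
        # Ta(f): indices of (class-a) examples in which the feature is true.
        return {i for i, e in enumerate(examples) if e[feature]}

    def false_set(feature, examples):
        # Fb(f): indices of (class-b) examples in which the feature is false.
        return {i for i, e in enumerate(examples) if not e[feature]}

    def covers(f, g, class_a, class_b):
        # f covers g for discriminating class a from class b iff
        # Ta(g) is a subset of Ta(f) and Fb(g) is a subset of Fb(f).
        return (true_set(g, class_a) <= true_set(f, class_a)
                and false_set(g, class_b) <= false_set(f, class_b))

    def non_redundant(features, class_a, class_b):
        # Keep a feature unless another feature covers it; when two features
        # cover each other (they are equivalent on these examples), keep only
        # the one listed first.
        kept = []
        for i, g in enumerate(features):
            redundant = False
            for j, f in enumerate(features):
                if i == j or not covers(f, g, class_a, class_b):
                    continue
                if not covers(g, f, class_a, class_b) or j < i:
                    redundant = True
                    break
            if not redundant:
                kept.append(g)
        return kept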

  9. Neighbourhoods of examples • A way to upgrade to multi-class data. • Each class is partitioned into subsets (neighbourhoods) of similar examples. • REFER-N compares each pair of neighbourhoods of differing class in turn, building up a list of non-redundant features (see the sketch after the neighbourhood comparison slides). • Efficient, more reduction, logic-based.

  10.–25. Neighbourhood construction [animated sequence: the examples are grouped step by step into neighbourhoods 1–5, each a group of similar examples with the same class label]

  26.–28. Neighbourhood comparison [animated sequence over neighbourhoods 1–5: all pairs of neighbourhoods of differing class are compared]
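
Putting slides 9 to 28 together, the per-pair loop of REFER-N might be sketched as follows. make_neighbourhoods is a hypothetical placeholder for the grouping step illustrated above (it is not defined here), and non_redundant is the pairwise filter sketched after slide 8; examples_by_class is assumed to be a dict mapping each class label to its list of examples.

    import itertools

    def refer_n(examples_by_class, features):
        # Partition each class into neighbourhoods of similar examples, then
        # compare every pair of neighbourhoods of differing class and collect
        # the features that survive each pairwise reduction, preferring
        # features that already survived earlier pairs.
        neighbourhoods = [(label, group)
                          for label, examples in examples_by_class.items()
                          for group in make_neighbourhoods(examples)]
        kept = []
        for (la, na), (lb, nb) in itertools.combinations(neighbourhoods, 2):
            if la == lb:
                continue                      # only pairs of differing class
            ordered = kept + [f for f in features if f not in kept]
            for f in non_redundant(ordered, na, nb):
                if f not in kept:
                    kept.append(f)
        return kept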

  29. Ancestry of REFER • REDUCE (Lavrač et al. 1999) • Feature reduction for propositionalised ILP datasets • Preserves learnability of a complete and consistent hypothesis • REFER uses a variant of REDUCE • Redundant features found between the examples in each neighbourhood pair • Prefers features already found non-redundant

  30. Related multiclass filters • FOCUS for noise-free Boolean data (Almuallim & Dietterich 1991) • Exhaustive evaluation of all subsets • Time complexity of O(n^p) • SCRAP relevance filter (Raman 2003) • Also uses a neighbourhood approach • No guarantee that selected features (still) discriminate among all classes.

  31. Theoretical results • REFER preserves the learnability of a complete and consistent theory. • If a complete and consistent rule could be found in the original data, it can still be found in the reduced data. • REFER is efficient. Time complexity is • … linear in the number of examples • … quadratic in the number of features

  32. Experimental results • Mutagenesis data from SINUS • Feature set greatly reduced (13,118 → 44) • Accuracy still competitive (approx. 85%)

  33. Experimental results • Thirteen UCI benchmark datasets • Compared with LVF, CFS and Relief using discrete/discretised data • Generally conservative • Faster: 8 out of 13 faster, 3 very close. • Competitive predictive accuracy using several classifiers:

  34. Experimental results • Reuters-21578 large-scale high-dimensionality sparse data • 16,582 preprocessed features were reduced to 1450. • REFER supports parallel execution well. • REFER runs in parallel on subsets of the feature set and again on the combination.
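
Slide 34 describes running REFER in parallel on subsets of the feature set and then once more on the combination. A minimal sketch of that split-reduce-merge scheme, reusing the hypothetical refer_n above (chunking and pool size are illustrative choices):

    from concurrent.futures import ProcessPoolExecutor

    def refer_parallel(examples_by_class, features, n_chunks=4):
        # Reduce each chunk of the feature set independently, then reduce
        # the union of the per-chunk survivors once more.
        chunks = [features[i::n_chunks] for i in range(n_chunks)]
        with ProcessPoolExecutor() as pool:
            partial = list(pool.map(refer_n,
                                    [examples_by_class] * n_chunks,
                                    chunks))
        merged = [f for part in partial for f in part]
        return refer_n(examples_by_class, merged)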

  35. Summary • A method for eliminating redundant Boolean features for multi-class classification tasks. • Uses logical coverage of examples • Efficient and scalable • requiring less time than the three feature selection algorithms we used • Amenable to parallel execution

  36. Current and future investigations • Interaction between feature selection and feature reduction • Benefits of combination • Noise handling using non-pure neighbourhoods (‘relaxed REFER’) • Overcoming sensitivity to noise • REFER for example reduction

  37. Questions

  38. Average reduction on UCI data

  39. Effect of choice of starting point [bar charts per UCI dataset (Aud, Brid, Car, F1C, F1M, F3C, F3M, Mus, Nur, Post, Tic, Pim, Yea): number of reduced features (0–120) and number of neighbourhoods constructed (0–1000)]

  40. Comparison of running times Machine spec: Pentium IV 1.4GHz PC running Windows XP

  41. Full accuracy results

  42. REFER for propositionalisation

    Setting                       M1         M2         M3         M4
    Instances produced            1692       1692       1692       1692
    Features produced             1016       2114       3986       13118
    SINUS parameters (L, V, T)    3, 3, 20   3, 3, 20   3, 3, 20   4, 4, 20
    inda and ind1                 yes        yes        yes        yes
    bonds                         yes        yes        yes        yes
    atom element and type         yes        yes        yes        yes
    atom charge                   no         yes        yes        yes
    lumo and logp                 no         yes        yes        yes
    2D molecular structures       yes        no         yes        yes

  43. REFER for propositionalisation

  44. REFER for propositionalisation

  45. Neighbourhoods of examples [figure: (a) an R2 analogy of neighbourhood construction over examples of classes c1, c2, c3, grouped into neighbourhoods E1–E5; (b) comparison between neighbourhood pairs]

  46. Another simple example: f2 is a useless feature; any feature can cover it.

  47. Introducing negated features … but its negation is a perfectly non-redundant feature. REFER assumes that the user will provide negated features if the language for rules requires it.

  48. Introducing negated features If all features are considered together, f2 is chosen ...

  49. Introducing negated features … but REFER considers positive against positive and negative against negative only.
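
Under the policy of slide 49, negated features supplied by the user could be handled by reducing the positive and negated sets separately, never against each other. A hedged sketch reusing non_redundant from earlier (the "not_" naming and in-place update are illustrative, not the authors' code):

    def reduce_with_negations(features, class_a, class_b):
        # Materialise an explicit negated copy of every feature, then reduce
        # the positive and the negated sets separately; the two halves are
        # never compared against each other.
        neg_features = ["not_" + f for f in features]
        for e in class_a + class_b:
            e.update({"not_" + f: not e[f] for f in features})
        return (non_redundant(features, class_a, class_b)
                + non_redundant(neg_features, class_a, class_b))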
