
Neural Network



Presentation Transcript


  1. Neural Network Feature Selection - Sumit Sarkar Y7027453

  2. Feature Selection for Classification • Given: a set of features F and a target variable T • Find: a minimum subset F′ ⊆ F that achieves maximum classification performance for T

  3. Why Feature Selection • Improve the performance of the classification algorithm by using only useful features • The classification algorithm may not scale up to the size of the full feature set, either in space or in time • Remove redundant or useless features • Better understand the domain

  4. Why Feature Selection • By removing the most irrelevant and redundant features from the data, feature selection helps improve the performance of learning models by: - Alleviating the curse of dimensionality. - Enhancing generalization capability. - Speeding up the learning process. - Improving model interpretability.

  5. Feature Selection • Thousands to millions of low-level features: select the most relevant ones to build better, faster, and easier-to-understand learning machines. • [Figure: a data matrix of m samples by n features is reduced to n′ selected features]

  6. Face Recognition • Male/female classification • 1450 images (1000 train, 450 test) • [Figure: pixels selected by Relief and by Simba for 100, 500, and 1000 features] • Relief is an algorithm that does not filter redundancy in the feature set: it selects features on both sides of the face even though they are largely redundant, since the face is roughly symmetric • Simba, by contrast, takes features from only one half of the face and does not repeat them on the other side

  7. Feature Selection • Feature selection algorithms typically fall into two categories: - Feature Ranking - Subset Selection

  8. Feature Selection • Feature ranking ranks the features by a metric and eliminates all features that do not achieve an adequate score. • Subset selection searches the set of possible features for the most desirable subset.

  9. Feature Selection • Two kinds of methods: • Wrapper methods • Filter methods

  10. A Wrapper Method • Given a classifier C and a set of features F, a wrapper method searches the space of subsets of F, using cross-validation to compare the performance of the trained classifier C on each tested subset.
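A minimal sketch of such a wrapper, assuming a logistic-regression classifier on the iris data and an exhaustive search over subsets of at most three features (all of these are illustrative choices, not from the slides): every candidate subset is scored with cross-validation and the best one is kept.

```python
# Wrapper-style search: score every feature subset of size 1..3 with
# cross-validation and keep the best-scoring one. Illustrative sketch only.
from itertools import combinations

import numpy as np
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

X, y = load_iris(return_X_y=True)
clf = LogisticRegression(max_iter=1000)          # the classifier C

best_subset, best_score = None, -np.inf
for k in range(1, 4):                            # subsets of size 1, 2, 3
    for subset in combinations(range(X.shape[1]), k):
        score = cross_val_score(clf, X[:, subset], y, cv=5).mean()
        if score > best_score:
            best_subset, best_score = subset, score

print("best subset:", best_subset, "CV accuracy: %.3f" % best_score)
```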

  11. Cross-validation • It is a technique for estimating how the results of a statistical analysis will generalize to an independent data set. • It is mainly used in settings where the goal is prediction, and one wants to estimate how accurately a predictive model will perform in practice. • One round of cross-validation involves partitioning a sample of data into complementary subsets, performing the analysis on one subset (referred to as the training set) and validating the analysis on the other subset (referred to as the testing set). • To reduce variability, multiple rounds of cross-validation are performed using different partitions, and the validation results are averaged over the rounds.
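For concreteness, the rounds of a 5-fold cross-validation can be written out by hand; the breast-cancer dataset and the scaled logistic-regression model below are assumptions chosen only for illustration.

```python
# Manual 5-fold cross-validation: partition the sample into complementary
# subsets, train on four folds, validate on the held-out fold, and average
# the validation scores over the rounds.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import KFold
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)
scores = []
for train_idx, val_idx in KFold(n_splits=5, shuffle=True, random_state=0).split(X):
    model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
    model.fit(X[train_idx], y[train_idx])                # analysis on the training subset
    scores.append(model.score(X[val_idx], y[val_idx]))   # validation on the other subset

print("per-fold accuracy:", np.round(scores, 3))
print("mean accuracy:", round(np.mean(scores), 3))
```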

  12. Feature Subset Assessment • [Figure: a data matrix of M samples by N variables/features, split into three sets m1, m2, m3] • Split the data into three sets: training, validation, and test. 1) For each feature subset, train the predictor on the training data. 2) Select the feature subset that performs best on the validation data. 3) Assess the selected subset on the test data.
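A sketch of that three-step procedure in code; the wine dataset, the random pool of candidate subsets, and the nearest-neighbour predictor are assumptions made only to keep the example short.

```python
# 1) Train a predictor on the training data for each candidate feature subset,
# 2) pick the subset that performs best on the validation data,
# 3) report its accuracy once on the untouched test data.
import numpy as np
from sklearn.datasets import load_wine
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

X, y = load_wine(return_X_y=True)
X_tmp, X_test, y_tmp, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
X_train, X_val, y_train, y_val = train_test_split(X_tmp, y_tmp, test_size=0.25, random_state=0)

rng = np.random.default_rng(0)
candidates = [rng.choice(X.shape[1], size=4, replace=False) for _ in range(20)]

def val_score(subset):
    clf = KNeighborsClassifier().fit(X_train[:, subset], y_train)
    return clf.score(X_val[:, subset], y_val)

best = max(candidates, key=val_score)                         # step 2
final = KNeighborsClassifier().fit(X_train[:, best], y_train)
print("selected features:", sorted(best))
print("test accuracy:", final.score(X_test[:, best], y_test))  # step 3
```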

  13. A Filter Method • A filter method does not make use of C, but rather attempts to find predictive subsets of the features using simple statistics computed from the empirical distribution. • A typical filter ranks features by the mutual information between each feature and the class label
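As an illustrative sketch (the breast-cancer data and the choice to keep the top 10 features are assumptions), scikit-learn's mutual_info_classif provides exactly this kind of ranking without ever training the classifier C:

```python
# Filter method: rank features by their mutual information with the class
# label and keep only the highest-scoring ones.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.feature_selection import mutual_info_classif

X, y = load_breast_cancer(return_X_y=True)
mi = mutual_info_classif(X, y, random_state=0)   # one score per feature

ranking = np.argsort(mi)[::-1]                   # feature indices, best first
top10 = ranking[:10]                             # keep the 10 most informative features
print("top features:", top10)
print("their scores:", np.round(mi[top10], 3))
X_reduced = X[:, top10]                          # reduced matrix for any downstream classifier
```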

  14. Filters vs. Wrappers • Main goal: rank subsets of useful features.

  15. Filters vs. Wrappers • [Diagram, filter: all features → feature subset → predictor] • Main goal: rank subsets of useful features.

  16. Filters vs. Wrappers • [Diagram, filter: all features → feature subset → predictor; wrapper: all features → multiple feature subsets, each evaluated with the predictor] • Main goal: rank subsets of useful features.

  17. Filters vs. Wrappers • [Diagrams for filter and wrapper as on the previous slides] • Main goal: rank subsets of useful features. • Danger of over-fitting with intensive search!

  18. Over Fitting • Overfitting occurs when a statistical model describes random error or noise instead of the fundamental relationship. • Overfitting generally occurs when a model is excessively complex, such as having too many parameters relative to the number of observations. • A model which has been over-fit will generally have poor predictive performance, as it can exaggerate minor fluctuations in the data.
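A quick hypothetical illustration of the effect: with far more (pure-noise) features than observations, a model can fit the training labels almost perfectly yet perform at chance level on new data. Everything below is synthetic and for illustration only.

```python
# Overfitting in miniature: 200 noise features, only 20 training samples.
# The model can memorise the training set but has nothing real to generalise.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(40, 200))            # pure noise, no underlying relationship
y = rng.integers(0, 2, size=40)           # random labels

X_train, y_train = X[:20], y[:20]
X_test, y_test = X[20:], y[20:]

clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("train accuracy:", clf.score(X_train, y_train))   # typically close to 1.0
print("test accuracy:", clf.score(X_test, y_test))       # typically close to 0.5 (chance)
```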

  19. Methods • Univariate method: considers one variable (feature) at a time. • Multivariate method: considers subsets of variables (features) together.

  20. Methods • Multivariate selection is more complicated: it is computationally expensive and also more statistically difficult to do! • Then why use a multivariate feature selection method?

  21. Univariate selection may fail
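The slide's illustration is not reproduced in the transcript; a classic example of this failure mode (an assumption here, not taken from the slides) is XOR-style data, where each feature alone is uninformative but the pair of features determines the class exactly.

```python
# XOR-style data: univariate scores are (near) zero for both features, yet a
# multivariate model using both features classifies perfectly.
import numpy as np
from sklearn.feature_selection import mutual_info_classif
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
x1 = rng.integers(0, 2, size=1000)
x2 = rng.integers(0, 2, size=1000)
y = x1 ^ x2                                   # label is the XOR of the two features
X = np.column_stack([x1, x2]).astype(float)

print("univariate MI scores:",
      mutual_info_classif(X, y, discrete_features=True, random_state=0))  # both ~0
print("accuracy with both features:",
      DecisionTreeClassifier().fit(X, y).score(X, y))                     # 1.0
```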

  22. Search Strategies • Forward selection: start with an empty set of features and progressively add features • Backward elimination: start with the full set and progressively eliminate features • GSFS (Generalized Sequential Forward Selection): when (n-k) features are left, try adding all subsets of g features; more trainings at each step, but fewer steps than in the simple sequential process • PTA(l, r) method: plus l, take away r; at each step, run SFS l times then SBS r times • Floating search (SFFS and SBFS): one step of SFS (resp. SBS), then SBS (resp. SFS) as long as we find better subsets than those of the same size obtained so far; at any time, if a better subset of the same size was already found, switch abruptly
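A minimal greedy forward-selection sketch (the k-nearest-neighbour classifier, the wine data, and the stop at five features are illustrative assumptions):

```python
# Sequential forward selection: start from the empty set and repeatedly add
# the feature that most improves cross-validated accuracy.
from sklearn.datasets import load_wine
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

X, y = load_wine(return_X_y=True)
clf = KNeighborsClassifier()
selected, remaining = [], list(range(X.shape[1]))

for _ in range(5):                                            # grow the subset to 5 features
    def cv_acc(f):
        return cross_val_score(clf, X[:, selected + [f]], y, cv=5).mean()
    best = max(remaining, key=cv_acc)
    best_acc = cv_acc(best)
    selected.append(best)
    remaining.remove(best)
    print("added feature", best, "-> CV accuracy %.3f" % best_acc)
```

Backward elimination is the mirror image (start from the full set and greedily drop features); scikit-learn's SequentialFeatureSelector implements both directions if a ready-made version is preferred.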

  23. Multivariate FS is complex • N features, 2^N possible feature subsets!

  24. Multivariate FS is complex • Multivariate FS implies a search in the space of all possible combinations of features. • For n features, there are 2^n possible subsets of features. • This leads to both high computational and high statistical complexity. • Wrappers use the performance of a learning machine to evaluate each subset. • For large n, training 2^n learning machines is not feasible, so most wrappers rely on heuristic search strategies such as those above rather than exhaustive search. • Filters work analogously, but evaluate subsets with statistics that are cheaper to compute than training a learning machine. • This complexity can be reduced by embedded methods or nested subset methods.

  25. Embedded Methods • SVM: Support Vector Machine • [Flowchart: start with all features → train the SVM → eliminate useless feature(s) → performance degradation? No: continue eliminating; Yes: stop]
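This recursive scheme corresponds closely to SVM-RFE (recursive feature elimination); a sketch with scikit-learn's RFE and a linear SVM follows, where the breast-cancer data and the target of five retained features are assumptions made for illustration.

```python
# Embedded method (RFE with a linear SVM): repeatedly train the SVM and drop
# the feature(s) with the smallest weight magnitudes until few remain.
from sklearn.datasets import load_breast_cancer
from sklearn.feature_selection import RFE
from sklearn.preprocessing import StandardScaler
from sklearn.svm import LinearSVC

X, y = load_breast_cancer(return_X_y=True)
X_scaled = StandardScaler().fit_transform(X)

rfe = RFE(estimator=LinearSVC(C=1.0, max_iter=10000),
          n_features_to_select=5, step=1)
rfe.fit(X_scaled, y)

print("kept feature indices:", [i for i, keep in enumerate(rfe.support_) if keep])
print("elimination ranking:", rfe.ranking_)   # 1 = kept; larger = dropped earlier
```

The stopping point corresponds to the "performance degradation?" check in the slide's flowchart: keep eliminating until performance on held-out data starts to degrade.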

  26. In practice… • No method is universally better.

  27. Thank you !

  28. Questions • How do we determine whether or not there is performance degradation? • Are non-linear classifiers always better?
