Are we still talking about diversity in classifier ensembles?

Presentation Transcript


  1. Are we still talking about diversity in classifier ensembles? Ludmila I Kuncheva School of Computer Science Bangor University, UK

  2. Completely irrelevant to your Workshop... Are we still talking about diversity in classifier ensembles? Ludmila I Kuncheva School of Computer Science Bangor University, UK

  3. Let’s talk instead about: Multi-view and classifier ensembles

  4. A classifier ensemble: feature values (object description) → classifier | classifier | classifier → “combiner” → class label
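
The block diagram on this slide maps directly onto code. A minimal sketch, assuming scikit-learn, its iris toy data, and plain majority voting as the “combiner” (none of these are specified on the slide):

```python
# Minimal classifier ensemble: feature values -> classifiers -> combiner -> class label.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.ensemble import VotingClassifier

X, y = load_iris(return_X_y=True)                      # feature values (object description)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

ensemble = VotingClassifier(                            # the "combiner": majority vote
    estimators=[("tree", DecisionTreeClassifier(random_state=0)),
                ("nb", GaussianNB()),
                ("knn", KNeighborsClassifier())],
    voting="hard")
ensemble.fit(X_tr, y_tr)
print("class labels:", ensemble.predict(X_te[:5]))      # the class label output
print("ensemble accuracy:", ensemble.score(X_te, y_te))
```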

  5. Ensemble? feature values (object description) → classifier | classifier → combiner → class label… or simply a neural network?

  6. Ensemble? feature values (object description) → seven classifiers → a fancy combiner → class label

  7. Classifier? feature values (object description) → classifier | classifier | classifier (viewed as a fancy feature extractor) → “combiner” → class label

  8. Why classifier ensembles then? a. because we like to complicate entities beyond necessity (anti-Occam’s razor); b. because we are lazy and stupid and can’t be bothered to design and train one single sophisticated classifier; c. because democracy is so important to our society, it must be important to classification

  9. The same ensemble idea under many names:
  • combination of multiple classifiers [Lam95, Woods97, Xu92, Kittler98]
  • classifier fusion [Cho95, Gader96, Grabisch92, Keller94, Bloch96]
  • mixture of experts [Jacobs91, Jacobs95, Jordan95, Nowlan91]
  • committees of neural networks [Bishop95, Drucker94]
  • consensus aggregation [Benediktsson92, Ng92, Benediktsson97]
  • voting pool of classifiers [Battiti94]
  • dynamic classifier selection [Woods97]
  • composite classifier systems [Dasarathy78]
  • classifier ensembles [Drucker94, Filippi94, Sharkey99]
  • bagging, boosting, arcing, wagging [Sharkey99]
  • modular systems [Sharkey99]
  • collective recognition [Rastrigin81, Barabash83]
  • stacked generalization [Wolpert92]
  • divide-and-conquer classifiers [Chiang94]
  • pandemonium system of reflective agents [Smieja96]
  • change-glasses approach to classifier selection [KunchevaPRL93]
  • etc.
  (Slide callouts: “oldest”, “oldest”, “fanciest”.)

  10. (The same list as slide 9, shown again with annotations: some names are “Out of fashion”, others “Subsumed”.)

  11. Congratulations! The Netflix Prize sought to substantially improve the accuracy of predictions about how much someone is going to enjoy a movie based on their movie preferences. On September 21, 2009 we awarded the $1M Grand Prize to team “BellKor’s Pragmatic Chaos”. Read about their algorithm, check out team scores on the Leaderboard, and join the discussions on the Forum. We applaud all the contributors to this quest, which improves our ability to connect people to the movies they love. [Alongside: the classifier ensemble diagram, feature values (object description) → classifier | classifier | classifier → combiner → class label.]

  12. [The same classifier ensemble diagram: feature values (object description) → classifier | classifier | classifier → combiner → class label.] Cited 7194 times by 28 July 2013 (Google Scholar).

  13. Classifier combination? Hmmmm…
  David Hand: “We are kidding ourselves; there is no real progress in spite of ensemble methods.” David J. Hand (2006) Classifier technology and the illusion of progress, Statist. Sci. 21(1), 1-14.
  Saso Dzeroski: “Chances are that the single best classifier will be better than the ensemble.” S. Dzeroski and B. Zenko (2004) Is combining classifiers better than selecting the best one? Machine Learning, 54, 255-273.

  14. Quo Vadis? "combining classifiers" OR "classifier combination" OR "classifier ensembles" OR "ensemble of classifiers" OR "combining multiple classifiers" OR "committee of classifiers" OR "classifier committee" OR "committees of neural networks" OR "consensus aggregation" OR "mixture of experts" OR "bagging predictors" OR adaboost OR (( "random subspace" OR "random forest" OR "rotation forest" OR boosting) AND "machine learning")

  15. Gartner’s Hype Cycle: a typical evolution pattern of a new technology. Where are we?...

  16. Journal abbreviations (legend for a chart):
  • (6) IEEE TPAMI = IEEE Transactions on Pattern Analysis and Machine Intelligence
  • IEEE TSMC = IEEE Transactions on Systems, Man and Cybernetics
  • JASA = Journal of the American Statistical Association
  • IJCV = International Journal of Computer Vision
  • JTB = Journal of Theoretical Biology
  • (2) PPL = Protein and Peptide Letters
  • JAE = Journal of Animal Ecology
  • PR = Pattern Recognition
  • (4) ML = Machine Learning
  • NN = Neural Networks
  • CC = Cerebral Cortex
  (Slide annotations: “top cited paper is from…”, “application paper”.)

  17. International Workshop on Multiple Classifier Systems, 2000 - 2013 and continuing

  18. Levels of questions
  • A. Combination level (the combiner): selection or fusion? voting or another combination method? trainable or non-trainable combiner?
  • B. Classifier level (Classifier 1, Classifier 2, …, Classifier L): same or different classifiers? decision trees, neural networks or other? how many?
  • C. Feature level (the features): all features or subsets of features? random or selected subsets?
  • D. Data level (the data set): independent/dependent bootstrap samples? selected data sets?
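
How the four levels surface as concrete design knobs can be sketched in one call, here with a bagging-style ensemble in scikit-learn (library, data set, and parameter values are illustrative assumptions; the slide prescribes no particular method):

```python
# The four levels of questions as explicit design choices (bagging-style sketch).
from sklearn.datasets import make_classification
from sklearn.ensemble import BaggingClassifier
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=500, n_features=20, random_state=0)

ensemble = BaggingClassifier(
    DecisionTreeClassifier(),         # B: classifier level - one model type for all members
    n_estimators=25,                  # B: classifier level - how many?
    bootstrap=True, max_samples=1.0,  # D: data level - bootstrap samples of the data set
    max_features=0.5,                 # C: feature level - random feature subsets
    random_state=0,
)                                     # A: combination level - fixed (non-trainable) majority vote
ensemble.fit(X, y)
print("training accuracy:", ensemble.score(X, y))
```

Swapping the fixed vote for a trained combiner, or mixing different base models, moves the design decisions up to levels A and B.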

  19. [Two plots: 50 diverse linear classifiers vs. 50 non-diverse linear classifiers.]
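
One way to reproduce the flavour of those two plots, as a sketch: train 50 linear classifiers either on bootstrap samples (diverse) or on the identical training set (near-identical members, hence no diversity) and compare their majority vote. The synthetic two-moons data, logistic regression as the linear model, and bootstrap resampling as the source of diversity are assumptions made here for illustration.

```python
# 50 diverse vs. 50 non-diverse linear classifiers (a sketch with synthetic data).
import numpy as np
from sklearn.datasets import make_moons
from sklearn.linear_model import LogisticRegression

rng = np.random.RandomState(0)
X, y = make_moons(n_samples=400, noise=0.3, random_state=0)
X_tr, y_tr, X_te, y_te = X[:200], y[:200], X[200:], y[200:]

def majority_vote_accuracy(members):
    votes = np.mean([m.predict(X_te) for m in members], axis=0)  # fraction voting for class 1
    return np.mean((votes > 0.5).astype(int) == y_te)

# Diverse: each linear classifier sees a different bootstrap sample.
diverse = []
for _ in range(50):
    idx = rng.randint(0, len(X_tr), len(X_tr))
    diverse.append(LogisticRegression().fit(X_tr[idx], y_tr[idx]))

# Non-diverse: every classifier is trained on the same data, so members are near-identical.
non_diverse = [LogisticRegression().fit(X_tr, y_tr) for _ in range(50)]

print("diverse ensemble accuracy:    ", majority_vote_accuracy(diverse))
print("non-diverse ensemble accuracy:", majority_vote_accuracy(non_diverse))
```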

  20. Strength of classifiers vs. number of classifiers L:
  • L = 1: the perfect classifier?
  • 3-8 classifiers: heterogeneous, trained combiner (stacked generalisation)
  • 30-50 classifiers: same or different models? trained or non-trained combiner? selection or fusion? IS IT WORTH IT? How about here?
  • 100+ classifiers: same model, non-trained combiner (bagging, boosting, etc.)
  Small ensembles of weak classifiers: INSUFFICIENCY. Large ensembles of nearly identical classifiers: REDUNDANCY. Must engineer diversity…

  21. (The same diagram as slide 20, shown again with the question marks removed.)
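
The “3-8 heterogeneous classifiers with a trained combiner” end of this spectrum is the stacked generalisation case named on the slide. A sketch, assuming scikit-learn, a toy data set, and a logistic-regression meta-classifier as the trained combiner (none of these details come from the slide):

```python
# Stacked generalisation: a few heterogeneous classifiers plus a trained combiner.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)

stack = StackingClassifier(
    estimators=[("tree", DecisionTreeClassifier(random_state=0)),
                ("nb", GaussianNB()),
                ("svm", make_pipeline(StandardScaler(), SVC(probability=True)))],
    final_estimator=LogisticRegression(max_iter=1000),   # the trained combiner
    cv=5)
print("stacking CV accuracy:", cross_val_score(stack, X, y, cv=5).mean())
```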

  22. A classifier ensemble, one view: feature values (object description) → classifier | classifier | classifier → “combiner” → class label

  23. A classifier ensemble, multiple views: each view’s feature values (object description) → its own classifier → “combiner” → class label

  24. 1998

  25. “distinct” is what you call “late fusion”; “shared” is what you call “early fusion”.

  26. EXPRESSION OF EMOTION - MODALITIES
  • Physiological, central nervous system: EEG, fMRI, fNIRS
  • Physiological, peripheral nervous system: pulse rate, pulse variation, respiration, Galvanic skin response, blood pressure, EMG, skin …
  • Behavioural: facial expression, eye tracking, gesture, speech, posture
  • Interaction with the computer: pressure on mouse, drag-click speed, dialogue with tutor

  27. Data Classification Strategies (modality 1, modality 2, modality 3):
  (1) Concatenate the features from all modalities: “early fusion”
  (2) Feature extraction and concatenation: “mid-fusion”
  (3) Straight ensemble classification: “late fusion” (ensemble)
  And many combinations thereof...

  28. Data Classification Strategies, the trade-off:
  (1) Early fusion (concatenate the features from all modalities): we capture all dependencies but can’t handle the complexity.
  (2) Mid-fusion (feature extraction and concatenation).
  (3) Late fusion (straight ensemble classification): we lose the dependencies but can handle the complexity.
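
A sketch of strategies (1) and (3), with three synthetic “modalities” standing in for real multimodal data (the data, the equal three-way feature split, and logistic regression are illustrative assumptions): early fusion feeds the concatenated features to one classifier; late fusion trains one classifier per modality and averages their class probabilities. Mid-fusion (2) would insert a per-modality feature-extraction step, e.g. PCA, before concatenation.

```python
# Early vs. late fusion over three "modalities" (a sketch with synthetic data).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=600, n_features=30, n_informative=12, random_state=0)
modalities = np.split(np.arange(30), 3)          # pretend the 30 columns come from three modalities
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# (1) Early fusion: concatenate the features from all modalities, train one classifier.
early = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print("early fusion accuracy:", early.score(X_te, y_te))

# (3) Late fusion: one classifier per modality, average their class probabilities.
members = [LogisticRegression(max_iter=1000).fit(X_tr[:, cols], y_tr) for cols in modalities]
proba = np.mean([m.predict_proba(X_te[:, cols]) for m, cols in zip(members, modalities)], axis=0)
print("late fusion accuracy: ", np.mean(proba.argmax(axis=1) == y_te))
```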

  29. Ensemble Feature Selection:
  • By the ensemble (RANKERS): decision tree ensembles; bootstrap ensembles of rankers; ensembles of different rankers; → multiview late fusion
  • For the ensemble: random approach, uniform (Random Subspace) or non-uniform (GA); systematic approach, incremental or iterative, or greedy feature selection; → multiview early and mid-fusion

  30. (Detail of the “for the ensemble” branch: uniform (Random Subspace), non-uniform (GA), incremental or iterative, greedy feature selection; multiview early and mid-fusion.)
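
As one concrete instance from the “by the ensemble (rankers)” branch above, here is a sketch of a bootstrap ensemble of rankers; the univariate F-score ranker and the mean-rank aggregation are my choices for illustration, not something the slide prescribes:

```python
# Feature selection *by* an ensemble: a bootstrap ensemble of rankers (a sketch).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import f_classif

X, y = make_classification(n_samples=300, n_features=15, n_informative=4, random_state=0)
rng = np.random.RandomState(0)

B = 50                                            # number of bootstrap rankers
rank_sum = np.zeros(X.shape[1])
for _ in range(B):
    idx = rng.randint(0, len(X), len(X))          # bootstrap sample of the data
    scores, _ = f_classif(X[idx], y[idx])         # univariate relevance scores on this sample
    ranks = np.argsort(np.argsort(-scores))       # rank 0 = most relevant feature
    rank_sum += ranks

mean_rank = rank_sum / B
print("features ordered by aggregated rank:", np.argsort(mean_rank))
```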

  31. This is what I think: Deciding which approach to take is rather art than science. This choice is, crucially, CONTEXT-SPECIFIC.

  32. Where does diversity come into this? Hmm... Nowhere...
