Classifier Ensembles: Facts, Fiction, Faults and Future


Presentation Transcript


  1. Classifier Ensembles: Facts, Fiction, Faults and Future. Ludmila I. Kuncheva, School of Computer Science, Bangor University, Wales, UK.

  2. 1. Facts

  3. Classifier ensembles. [Diagram: feature values (object description) -> classifier -> class label.]

  4. Classifier ensembles. [Diagram: feature values (object description) -> three classifiers in parallel -> "combiner" -> class label.]

  5. Classifier ensembles. [Diagram: feature values -> classifiers -> combiner -> class label, the whole structure annotated "a neural network". Ensemble?]

  6. Classifier ensembles. [Diagram: feature values -> many classifiers -> a fancy combiner, itself built from classifiers and a combiner -> class label. Ensemble?]

  7. Classifier ensembles. [Diagram: feature values -> classifiers acting as a fancy feature extractor -> combiner -> class label. Classifier?]

  8. Why classifier ensembles then?
  a. Because we like to complicate entities beyond necessity (anti-Occam's razor).
  b. Because we are lazy and stupid and can't be bothered to design and train one single sophisticated classifier.
  c. Because democracy is so important to our society, it must be important to classification.

  9. Classifier ensembles. Juan: “I just like to combine things…”

  10. Classifier ensembles. Juan: “I just like combining things…”

  11. Classifier ensembles, under many names:
  • combination of multiple classifiers [Lam95, Woods97, Xu92, Kittler98]
  • classifier fusion [Cho95, Gader96, Grabisch92, Keller94, Bloch96]
  • mixture of experts [Jacobs91, Jacobs95, Jordan95, Nowlan91]
  • committees of neural networks [Bishop95, Drucker94]
  • consensus aggregation [Benediktsson92, Ng92, Benediktsson97]
  • voting pool of classifiers [Battiti94]
  • dynamic classifier selection [Woods97]
  • composite classifier systems [Dasarathy78] (one of the oldest)
  • classifier ensembles [Drucker94, Filippi94, Sharkey99]
  • bagging, boosting, arcing, wagging [Sharkey99]
  • modular systems [Sharkey99]
  • collective recognition [Rastrigin81, Barabash83] (one of the oldest)
  • stacked generalization [Wolpert92]
  • divide-and-conquer classifiers [Chiang94]
  • pandemonium system of reflective agents [Smieja96] (the fanciest)
  • change-glasses approach to classifier selection [KunchevaPRL93]
  • etc.

  12. Rastrigin [Rastrigin81], The Method of Collective Recognition. Moscow: Energoizdat, 1981. [Book cover shown.]

  13. Already in the 1981 book: the classifier ensemble, classifier selection (regions of competence), and the weighted majority vote.

  14. Barabash [Barabash83], Collective Statistical Decisions in [Pattern] Recognition. Moscow: Radio i svyaz', 1983. [Book cover shown.]

  15. The weighted majority vote, again, this time from the 1983 book.
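
The rule itself, in its standard modern form (my notation, not copied from the books): classifier i casts a vote d_{i,k} ∈ {0, 1} for class ω_k, the ensemble picks the class with the largest weighted sum of votes, and, for independent classifiers with individual accuracies p_i, the accuracy-optimal weights are the log odds:

    \hat{\omega} \;=\; \arg\max_{k} \sum_{i=1}^{L} w_i \, d_{i,k},
    \qquad
    w_i \;\propto\; \log \frac{p_i}{1 - p_i}.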

  16. This superb graph was borrowed from “Fuzzy models and digital signal processing (for pattern recognition): Is this a good marriage?”, Digital Signal Processing, 3, 1993, 253-270, by my good friend Jim Bezdek. [Graph: “Expectation” over time, 1965-1993. The curve climbs through naive euphoria to the peak of hype, drops into the depth of cynicism (overreaction to immature technology), and settles onto the asymptote of reality (true user benefit).]

  17. So where are we? [The same hype curve, 1965-1993, now asked of classifier ensembles.]

  18. So where are we? [The same curve with the time axis relabelled to start at 1978, and five candidate positions for 2008, marked 1-5 along the curve.]

  19. To make the matter worse... Expert 1: J. Ghosh. Forum: 3rd International Workshop on Multiple Classifier Systems, 2002 (invited lecture). Quote: “... our current understanding of ensemble-type multiclassifier systems is now quite mature...” (the glass is half full). Expert 2: T. K. Ho. Forum: invited book chapter, 2002. Quote: “Many of the above questions are there because we do not yet have a scientific understanding of the classifier combination mechanisms” (the glass is half empty).

  20. Number of publications (13 Nov 2008). [Bar chart of yearly publication counts for four searches: 1. “classifier ensembles”; 2. “AdaBoost”, excluding (1); 3. “Random Forest”, excluding (1) and (2); 4. “Decision Templates”, excluding (1), (2) and (3). The counts for 2008 are incomplete.]

  21. Number of publications (13 Nov 2008). [The same chart as slide 20, incomplete for 2008.] The literature: “One cannot embrace the unembraceable.” Kozma Prutkov.

  22. ICPR 2008: 984 papers, ~2000 words in the titles. [Scatter plot of the title words projected onto the first two principal components; clusters form around “feature / local / select”, “image / segment” and “video / track / object”, with “classifier ensembles” marked on the map.] A sketch of how such a map can be computed follows below.
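
A minimal sketch of how such a word map can be produced (my reconstruction, not the author's script; paper_titles.txt, one title per line, is a hypothetical input file):

    # Sketch: place frequent title words in the plane of the first two
    # principal components of their title-occurrence profiles.
    from sklearn.decomposition import PCA
    from sklearn.feature_extraction.text import CountVectorizer

    with open("paper_titles.txt") as f:
        titles = [line.strip() for line in f if line.strip()]

    # Binary word-by-title matrix for the most frequent title words.
    vec = CountVectorizer(binary=True, stop_words="english", max_features=2000)
    X = vec.fit_transform(titles).toarray().T   # rows = words, columns = titles

    # Project each word's occurrence profile onto the first two PCs.
    coords = PCA(n_components=2).fit_transform(X)

    for word, (x1, x2) in zip(vec.get_feature_names_out(), coords):
        print(f"{word:20s} {x1:8.3f} {x2:8.3f}")

Words that tend to appear in the same titles land close together, which is what produces the "image", "video" and "feature" clusters on the slide.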

  23. So where are we? [The hype curve again, 1978-2008, with the answer “still here… somewhere…” marked on it.]

  24. 2. Fiction

  25. Fiction?
  • Diversity. Are diverse ensembles better ensembles? Does diversity = independence?
  • AdaBoost. “The best off-the-shelf classifier”?

  26. Minority Report: a science-fiction short story by Philip K. Dick, first published in 1956. It is about a future society where murders are prevented through the efforts of three mutants (“precogs”) who can see two weeks into the future. The story was made into a popular film in 2002.
  • Each of the three precogs generates its own report or prediction; the three reports are analysed by a computer. A classifier ensemble!
  • If these reports differ from one another, the computer identifies the two reports with the greatest overlap and produces a “majority report”, taking this as the accurate prediction of the future.
  • But the existence of majority reports implies the existence of a “minority report”.

  27. And, of course, the most interesting case is when the classifiers disagree: the minority report. Diversity is good. [Figure: correct/wrong votes of 3 classifiers on 15 objects.]
  • Individual accuracy of each classifier = 10/15 = 0.667.
  • Independent classifiers: ensemble accuracy (majority vote) = 11/15 = 0.733.
  • Identical classifiers: ensemble accuracy (majority vote) = 10/15 = 0.667.
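
As a check (a standard identity, not taken from the slides): for L independent classifiers of equal accuracy p, the majority-vote accuracy is binomial, and with L = 3 and p = 0.667 it gives roughly the 0.733 observed above:

    P_{\mathrm{maj}} \;=\; \sum_{k=\lceil L/2 \rceil}^{L} \binom{L}{k}\, p^{k} (1-p)^{L-k},
    \qquad
    L = 3:\;\; P_{\mathrm{maj}} = 3p^{2}(1-p) + p^{3} \approx 0.74.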

  28. Two patterns of dependence, same individual accuracies:
  • dependent classifiers 1: ensemble accuracy (majority vote) = 7/15 = 0.467 (worse than an individual classifier);
  • dependent classifiers 2: ensemble accuracy (majority vote) = 15/15 = 1.000 (better than independence).
  Myth: independence is the best scenario. Myth: diversity is always good.
  Recap: identical 0.667; independent 0.733; dependent 1: 0.467; dependent 2: 1.000.

  29. Example. The set-up:
  • UCI data repository, the “heart” data set
  • first 9 features; all 280 different partitions into [3, 3, 3]
  • ensemble of 3 linear classifiers, majority vote
  • 10-fold cross-validation
  What we measured:
  • the individual accuracies of the ensemble members
  • the ensemble accuracy
  • the ensemble diversity (just one of all these measures…)
  (A sketch of this set-up follows below.)
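
A minimal sketch of this set-up, assuming the heart data are already loaded as a feature matrix X (n_samples x 9) and 0/1 labels y; LinearDiscriminantAnalysis stands in for "linear classifier" and pairwise disagreement for "diversity". Note that 9! / (3!·3!·3!·3!) = 280, matching the slide.

    # Sketch of the slide-29 experiment: all 280 partitions of 9 features
    # into three groups of 3, an ensemble of 3 linear classifiers, majority
    # vote. Assumes X (n_samples x 9) and y in {0, 1} are already loaded.
    from itertools import combinations
    import numpy as np
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
    from sklearn.model_selection import cross_val_predict

    def partitions_3_3_3(features):
        """Yield each unordered partition of 9 features into 3 groups of 3."""
        f0 = features[0]
        for rest1 in combinations(features[1:], 2):       # group holding features[0]
            g1 = (f0,) + rest1
            remaining = [f for f in features if f not in g1]
            for rest2 in combinations(remaining[1:], 2):  # group holding remaining[0]
                g2 = (remaining[0],) + rest2
                g3 = tuple(f for f in remaining if f not in g2)
                yield g1, g2, g3                          # 28 * 10 = 280 partitions

    def disagreement(P):
        """Average pairwise disagreement: fraction of objects on which two
        members predict different labels, averaged over all member pairs."""
        L = len(P)
        pairs = [(i, j) for i in range(L) for j in range(i + 1, L)]
        return float(np.mean([np.mean(P[i] != P[j]) for i, j in pairs]))

    results = []
    for groups in partitions_3_3_3(list(range(9))):
        # 10-fold cross-validated predictions of each linear classifier.
        P = np.array([cross_val_predict(LinearDiscriminantAnalysis(),
                                        X[:, g], y, cv=10) for g in groups])
        majority = (P.sum(axis=0) >= 2).astype(int)       # majority vote, y in {0, 1}
        results.append((np.mean([np.mean(p == y) for p in P]),  # mean individual acc.
                        float(np.mean(majority == y)),          # ensemble accuracy
                        disagreement(P)))                       # one diversity measure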

  30. Example. [Scatter plot over the 280 ensembles: ensemble accuracy against the minimum, average and maximum individual accuracy of the members; the region where the ensemble is better than its members is marked.]

  31. Example. [Scatter plot: ensemble accuracy against diversity, with an arrow towards the “less accurate, more diverse” region and a question mark over the hoped-for link.]

  32. Example. [Figure-only slide.]

  33. Example. [Plot of individual accuracy against diversity; large ensemble accuracy is expected where diversity is large.]

  34. AdaBoost is everything? [Cartoon: a Swiss Army Knife next to a “Russian Army Knife” whose every tool is AdaBoost, bar a single Bagging blade.] Surely, there is more to combining classifiers than Bagging and AdaBoost.

  35. Example: Rotation Forest. Two contrasting reviews of the same work:
  • “This altogether gives a very bad impression of ill-conceived experiments and confusing and unreliable conclusions. ... The current spotty conclusions are incomprehensible, and are of no generalization or reference value.”
  • “This is a potentially great new method and any experimental analysis would be very useful for understanding its potential. Good study, with very useful information in the Conclusions.”

  36. [Plot: percentage of the data sets (out of 32) on which the respective ensemble method is best, as a function of the ensemble size.] A sketch of this kind of comparison follows below.
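
A hedged sketch of this kind of comparison (my reconstruction, not the original experiment: the 32 data sets are replaced by toy ones, and Rotation Forest is not in scikit-learn, so three stock ensemble methods stand in):

    # Sketch: for each ensemble size, count on how many data sets each
    # ensemble method achieves the best cross-validated accuracy.
    from sklearn.datasets import make_classification
    from sklearn.ensemble import (AdaBoostClassifier, BaggingClassifier,
                                  RandomForestClassifier)
    from sklearn.model_selection import cross_val_score

    # Toy stand-ins for the 32 UCI data sets of the original study.
    datasets = [make_classification(n_samples=300, n_features=20, random_state=s)
                for s in range(5)]

    for size in (10, 25, 50):
        methods = {
            "Bagging":       BaggingClassifier(n_estimators=size, random_state=0),
            "AdaBoost":      AdaBoostClassifier(n_estimators=size, random_state=0),
            "Random Forest": RandomForestClassifier(n_estimators=size, random_state=0),
        }
        wins = dict.fromkeys(methods, 0)
        for X, y in datasets:
            scores = {name: cross_val_score(clf, X, y, cv=10).mean()
                      for name, clf in methods.items()}
            wins[max(scores, key=scores.get)] += 1       # best method on this set
        print(size, {m: 100.0 * w / len(datasets) for m, w in wins.items()})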

  37. So, no, AdaBoost is NOT everything

  38. 3. Faults

  39. OUR faults!
  • Complacent: we don't care about terminology.
  • Vain: to get publications, we invent complex models for simple problems or, even worse, complex non-existent problems.
  • Untidy: there is little effort to systematise the area.
  • Ignorant and lazy: by virtue of ignorance we tackle problems well and truly solved by others. Krassi's motto: “I don't have time to read papers because I am busy writing them.”
  • Haughty: simple things that work do not impress us until they get proper theoretical proofs.

  40. Terminology. God, seeing what the people were doing, gave each person a different language to confuse them and scattered the people throughout the earth… (image taken from http://en.wikipedia.org/wiki/Tower_of_Babel)
  • Pattern recognition land
  • Data mining kingdom
  • Machine learning ocean
  • Statistics underworld, and…
  • Weka…

  41. [The vocabulary of the tribes, jumbled together: AODE, object, instance, SVM, J48, attribute, variable, classifier ensemble, nearest neighbour, C4.5, hypothesis, example, decision tree, learner, observation, data point, feature, classifier, lazy learner, SMO, naïve Bayes.]

  42. The same vocabulary, sorted (statistics / machine learning / Weka):
  • classifier = learner = hypothesis
  • decision tree: C4.5 (ML) = J48 (Weka)
  • naïve Bayes, and AODE (Weka)
  • SVM = SMO (Weka)
  • nearest neighbour = lazy learner (Weka)
  • classifier ensemble = meta learner (Weka)
  • object = data point = instance = example = observation
  • feature = variable = attribute

  43. Classifier ensembles: the names. [The same list as on slide 11, now annotated: most of the names have gone out of fashion or have been subsumed by others.]

  44. United terminology! Yey! “Combination of multiple classifiers” [Lam95, Woods97, Xu92, Kittler98] and “classifier ensembles” [Drucker94, Filippi94, Sharkey99] converge under MCS: the Multiple Classifier Systems workshops, 2000-2009.

  45. Simple things that work… We detest simple things that work well for an unknown reason!!!

  46. Simple things that work… We detest simple things that work well for an unknown reason!!! [Diagram: in the ideal scenario the flagship of THEORY leads, with empirics and applications following; in reality the field is hijacked by heuristics, with HEURISTICS leading and theory trailing behind.]

  47. Lessons from the past: fuzzy sets.
  • Stability of the system? Reliability? Optimality? Why not probability? … Who cares?
  • Fuzzy control runs the temperature settings of washing-machine programmes, the automatic focus of digital cameras, and the ignition angle of internal-combustion engines in cars.
  • Why? Because it is computationally simpler (faster), and easier to build, interpret and maintain.
  Learn to trust heuristics and empirics…

  48. 4. Future

  49. Future: branch out? [The hype curve once more, 1978-2008, with new branches leaving the asymptote of reality towards:]
  • multiple instance learning
  • non-i.i.d. examples
  • skewed class distributions
  • noisy class labels
  • sparse data
  • non-stationary data
  • classifier ensembles for changing environments
  • classifier ensembles for change detection
  (A sketch of an ensemble for changing environments follows below.)
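
To make the last bullets concrete, here is a minimal sketch of a chunk-based ensemble for non-stationary data, in the spirit of streaming-ensemble methods such as SEA (Street and Kim, 2001). It is an illustration under my own assumptions, not a method from the talk.

    # Sketch: fixed-size majority-vote ensemble for streaming, possibly
    # drifting data. One member is trained per data chunk; once the ensemble
    # is full, the member weakest on the newest chunk is replaced.
    import numpy as np
    from sklearn.tree import DecisionTreeClassifier

    class ChunkEnsemble:
        def __init__(self, max_size=10):
            self.max_size = max_size
            self.members = []

        def update(self, X_chunk, y_chunk):
            """Train a new member on the chunk; evict the weakest if full."""
            new = DecisionTreeClassifier(max_depth=5).fit(X_chunk, y_chunk)
            if len(self.members) < self.max_size:
                self.members.append(new)
            else:
                accs = [m.score(X_chunk, y_chunk) for m in self.members]
                self.members[int(np.argmin(accs))] = new

        def predict(self, X):
            """Plain majority vote (assumes non-negative integer labels)."""
            votes = np.array([m.predict(X) for m in self.members])  # (L, n)
            return np.array([np.bincount(col).argmax() for col in votes.T])

Weighting the members by their accuracy on recent chunks, instead of voting them equally, gives the more refined variants studied in the streaming-ensemble literature.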
