A Technique for Advanced Dynamic Integration of Multiple Classifiers


Presentation Transcript


  1. A Technique for Advanced Dynamic Integration of Multiple Classifiers Alexey Tsymbal*, Seppo Puuronen**, Vagan Terziyan* *Department of Artificial Intelligence and Information Systems, Kharkov State Technical University of Radioelectronics, UKRAINE e-mail: vagan@kture.cit-ua.net, vagan@jytko.jyu.fi **Department of Computer Science and Information Systems, University of Jyvaskyla, FINLAND, e-mail: sepi@jytko.jyu.fi STeP’98 - Finnish AI Conference, 7-9 September, 1998

  2. Finland and Ukraine

  3. Metaintelligence Laboratory: Research Topics • Knowledge and metaknowledge engineering; • Multiple experts; • Context in Artificial Intelligence; • Data Mining and Knowledge Discovery; • Temporal Reasoning; • Metamathematics; • Semantic Balance and Medical Applications; • Distance Education and Virtual Universities.

  4. Contents • What is Knowledge Discovery? • The Multiple Classifiers Problem • A Sample (Training) Set • A Sliding Exam of Classifiers as a Learning Technique • A Locality Principle • Nearest Neighbours and Distance Measure • Weighting Neighbours, Predicting Errors and Selecting Classifiers • Data Preprocessing • Some Examples

  5. What is Knowledge Discovery? • Knowledge discovery in databases (KDD) is a combination of data warehousing, decision support, and data mining, and it is an innovative new approach to information management. • KDD is an emerging area that considers the process of finding previously unknown and potentially interesting patterns and relations in large databases*. • * Fayyad, U., Piatetsky-Shapiro, G., Smyth, P., Uthurusamy, R., Advances in Knowledge Discovery and Data Mining, AAAI/MIT Press, 1996.

  6. The Research Problem During the past several years, in a variety of application domains, researchers in machine learning, computational learning theory, pattern recognition, and statistics have combined their efforts to learn how to create and use ensembles of classifiers. The primary goal of combining several classifiers is to obtain a more accurate prediction than can be obtained from any single classifier alone.

  7. Approaches to Integrate Multiple Classifiers
Integrating Multiple Classifiers
• Combination
  – Global (Voting-Type)
  – Local (“Virtual” Classifier, Decontextualization)
• Selection
  – Global (Static)
  – Local (Dynamic)

  8. Classification Problem J classes, n training observations, p object features. Given: n training pairs (xi, yi), with xi ∈ R^p and yi ∈ {1, …, J} denoting class membership. Goal: given a new x0, select a classifier for x0 and predict its class y0.
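
A minimal sketch of this setup in Python (NumPy assumed; the shapes, values, and variable names are illustrative only, not taken from the presentation):

import numpy as np

# n training observations, p object features, J classes (toy values)
n, p, J = 100, 7, 3
rng = np.random.default_rng(0)
X = rng.normal(size=(n, p))          # x_i in R^p
y = rng.integers(1, J + 1, size=n)   # y_i in {1, ..., J}
x0 = rng.normal(size=p)              # new observation to be classified
# Goal: select the classifier expected to be most accurate around x0
# and use it to predict the class y0.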

  9. A Sample (Training) Set

  10. Classifiers Used in Example • Classifier 1: LDA - Linear Discriminant Analysis; • Classifier 2: k-NN - Nearest Neighbour Classification; • Classifier 3: DANN - Discriminant Adaptive Nearest Neighbour Classification
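
A hedged sketch of assembling such an ensemble with scikit-learn: LDA and k-NN are available there, while DANN is not, so a k-NN with a wider neighbourhood stands in for it here purely as a placeholder.

from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.neighbors import KNeighborsClassifier

classifiers = {
    "LDA": LinearDiscriminantAnalysis(),
    "k-NN": KNeighborsClassifier(n_neighbors=5),
    # DANN has no scikit-learn implementation; this entry is only a stand-in.
    "DANN (placeholder)": KNeighborsClassifier(n_neighbors=15),
}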

  11. A Sliding Exam of Classifiers (Jackknife Method): We apply all the classifiers to the training set points and check the correctness of classification
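
One possible implementation of this sliding (leave-one-out) exam, reusing the classifiers dictionary sketched above and scikit-learn's cross-validation utilities:

import numpy as np
from sklearn.model_selection import LeaveOneOut, cross_val_predict

def sliding_exam(classifiers, X, y):
    """For every training point, record whether each classifier labels it
    correctly when that point is held out of training (jackknife)."""
    correctness = {}
    for name, clf in classifiers.items():
        preds = cross_val_predict(clf, X, y, cv=LeaveOneOut())
        correctness[name] = (preds == y).astype(float)   # 1 = correct, 0 = error
    return correctness   # classifier name -> vector of length n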

  12. A Locality Principle

  13. Selecting the Amount of Nearest Neighbours • A suitable amount l of nearest neighbours should be selected for each training set point; it is used to classify cases related to that point. • We have used l = max(3, n div 50) for all training set points in the example, where n is the amount of cases in the training set. • Should we locally select an appropriate value of l?
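
The rule above translates directly into code ("div" read as integer division):

def neighbourhood_size(n):
    """l = max(3, n div 50): at least 3 neighbours, growing slowly with the training set."""
    return max(3, n // 50)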

  14. Brief Review of Distance Functions (According to D. Wilson and T. Martinez, 1997)
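
As a single example from that survey, a sketch of Wilson and Martinez's Heterogeneous Euclidean-Overlap Metric (HEOM), which uses overlap distance for nominal features and range-normalised difference for numeric ones; the feature metadata passed in is an assumption, not given on the slides.

import numpy as np

def heom(x, z, is_nominal, ranges):
    """Heterogeneous Euclidean-Overlap Metric (Wilson & Martinez, 1997).
    is_nominal: boolean mask of nominal features; ranges: (max - min) per numeric feature."""
    d = np.empty(len(x))
    for a in range(len(x)):
        if is_nominal[a]:
            d[a] = 0.0 if x[a] == z[a] else 1.0     # overlap distance
        else:
            d[a] = abs(x[a] - z[a]) / ranges[a]     # range-normalised difference
    return np.sqrt(np.sum(d ** 2))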

  15. Weighting Neighbours
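
The presentation does not spell out the weighting formula on this slide; one common choice, shown here only as an assumed placeholder, is to weight each of the l nearest neighbours inversely to its distance from the new case:

def neighbour_weights(distances):
    """Assumed scheme: closer neighbours receive larger weights, 1 / (1 + d)."""
    return [1.0 / (1.0 + d) for d in distances]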

  16. Nearest Neighbours’ Weights in the Example

  17. Selection of a Classifier: DANN should be selected
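
Putting the pieces together, a hedged sketch of the dynamic selection step: each classifier's local error is predicted as a distance-weighted average of its jackknife errors over the nearest neighbours of x0, and the classifier with the lowest predicted error (DANN in this example) is selected. The helpers heom, neighbour_weights, and sliding_exam are the illustrative ones introduced above, not the authors' code.

import numpy as np

def select_classifier(x0, X, y, classifiers, correctness, l, is_nominal, ranges):
    """Pick the classifier with the lowest distance-weighted predicted error
    in the neighbourhood of x0, then classify x0 with it."""
    dists = np.array([heom(x0, xi, is_nominal, ranges) for xi in X])
    nn = np.argsort(dists)[:l]                    # indices of the l nearest neighbours
    w = np.array(neighbour_weights(dists[nn]))
    predicted_error = {}
    for name in classifiers:
        errors = 1.0 - correctness[name][nn]      # 1 where the classifier failed on a neighbour
        predicted_error[name] = float(np.sum(w * errors) / np.sum(w))
    best = min(predicted_error, key=predicted_error.get)
    y0 = classifiers[best].fit(X, y).predict(x0.reshape(1, -1))[0]
    return best, y0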

  18. Competence Map of Classifiers

  19. Data Preprocessing: Selecting Set of Features

  20. Features Used in Dystonia Diagnostics • AF (x1) - attack frequency; • AM0 (x2) - the mode, the index of sympathetic tone; • dX (x3) - the index of parasympathetic tone; • IVR (x4) - the index of autonomous reactance; • V (x5) - the velocity of brain blood circulation; • GPVR (x6) - the general peripheral blood-vessels’ resistance; • RP (x7) - the index of brain vessels’ resistance.

  21. Training Set for a Dystonia Diagnostics

  22. Visualizing Training Set for the Dystonia Example

  23. Evaluation of Classifiers

  24. Diagnostics of the Test Vector

  25. Experiments with Heart Disease Database • Database contains 270 instances. Each instance has 13 attributes which have been extracted from a larger set of 75 attributes. The average cross-validation errors for the three classification methods were the following: DANN 0.196, K-NN 0.352, LDA 0.156, Dynamic Classifier Selection Method 0.08

  26. Experiments with Liver Disorders Database • Database contains 345 instances. Each instance has 7 numerical attributes. The average cross-validation errors for the three classification methods were the following: DANN 0.333, K-NN 0.365, LDA 0.351, Dynamic Classifier Selection Method 0.134
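
A sketch of how such average cross-validation errors can be computed (the fold count and dataset variables below are assumptions; the data themselves would be the UCI Heart Disease and Liver Disorders tables, loaded separately):

from sklearn.model_selection import cross_val_score

def average_cv_error(clf, X, y, folds=10):
    """Mean cross-validation error = 1 - mean cross-validation accuracy."""
    return 1.0 - cross_val_score(clf, X, y, cv=folds).mean()

# for name, clf in classifiers.items():
#     print(name, round(average_cv_error(clf, X_heart, y_heart), 3))   # X_heart, y_heart are hypothetical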

  27. Experimental Comparison of Three Integration Techniques Local (Dynamic) Classifier Selection (DCS) is compared with Voting and static Cross-Validation Majority

  28. Conclusion and Future Work • Classifiers can be effectively selected or integrated thanks to the locality principle • The same principle can be used when preprocessing data • The amount of nearest neighbours and the distance measure are reasonably decided separately for each case • The difference between classification results obtained in different contexts can be used to improve classification by exploiting possible trends
