
Instance Selection


Presentation Transcript


  1. Instance Selection

  2. Instance Selection • Introduction • Training Set Selection vs. Prototype Selection • Prototype Selection Taxonomy • Description of Methods • Related and Advanced Topics • Experimental Comparative Analysis in PS

  3. Instance Selection • Introduction • Training Set Selection vs. Prototype Selection • Prototype Selection Taxonomy • Description of Methods • Related and Advanced Topics • Experimental Comparative Analysis in PS

  4. Introduction • Instance selection (IS) performs the complementary process to feature selection (FS). • The major issue in scaling down the data is the selection or identification of relevant data from an immense pool of instances. • The goal of instance selection is to choose a subset of the data that achieves the original purpose of the DM application as if the whole data set were used.

  5. Introduction • IS vs. data sampling: IS is an intelligent operation of instance categorization, according to a degree of irrelevance or noise. • The optimal outcome of IS is a minimum, model-independent data subset that can accomplish the same task with no performance loss. • An IS method aims to obtain a subset S ⊂ TR such that S does not contain superfluous instances and Acc(S) ≅ Acc(TR), where Acc(X) is the classification accuracy obtained using X as a training set.
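The Acc(S) ≅ Acc(TR) criterion can be checked with a plain k-NN classifier. The sketch below is illustrative and not from the slides; the function name knn_accuracy and the use of a separate test set are assumptions.

```python
import numpy as np

def knn_accuracy(X_train, y_train, X_test, y_test, k=1):
    """Accuracy of a k-NN classifier trained on (X_train, y_train)."""
    correct = 0
    for x, target in zip(X_test, y_test):
        dists = np.linalg.norm(X_train - x, axis=1)   # Euclidean distances to all training points
        nearest = np.argsort(dists)[:k]               # indices of the k nearest neighbors
        labels, counts = np.unique(y_train[nearest], return_counts=True)
        if labels[np.argmax(counts)] == target:       # majority vote
            correct += 1
    return correct / len(y_test)

# Compare Acc(TR) against Acc(S), with S given as a boolean mask over TR:
# acc_full   = knn_accuracy(X_tr, y_tr, X_te, y_te)
# acc_subset = knn_accuracy(X_tr[mask], y_tr[mask], X_te, y_te)
```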

  6. Introduction • IS has the following outstanding functions: • Enabling: IS enables a DM algorithm to work with huge amounts of data. • Focusing: a concrete DM task is focused on only one aspect of interest of the domain; IS focuses the data on the relevant part. • Cleaning: redundant as well as noisy instances are usually removed, improving the quality of the input data.

  7. Instance Selection • Introduction • Training Set Selection vs. Prototype Selection • Prototype Selection Taxonomy • Description of Methods • Related and Advanced Topics • Experimental Comparative Analysis in PS

  8. Training Set Selection vs. Prototype Selection • Several terms have been used for selecting the most relevant data from the training set. • Instance selection is the most general one and can be used with any learning method, such as decision trees, ANNs or SVMs. • Nowadays, there are two clear distinctions in the literature: Prototype Selection (PS) and Training Set Selection (TSS).

  9. Training Set Selection vs. Prototype Selection Prototype Selection

  10. Training Set Selection vs. Prototype Selection Training Set Selection

  11. Instance Selection • Introduction • Training Set Selection vs. Prototype Selection • Prototype Selection Taxonomy • Description of Methods • Related and Advanced Topics • Experimental Comparative Analysis in PS

  12. Prototype Selection Taxonomy • Common properties: • Direction of Search. • Incremental. An incremental search begins with an empty subset S and adds each instance in TR to S if it fulfills some criterion. • Decremental. A decremental search begins with S = TR and then searches for instances to remove from S. • Batch. A batch search works like the decremental one, but more than one instance can be removed in each step. • Mixed. A mixed search begins with a pre-selected subset S and can iteratively add or remove any instance which meets the specific criterion. • Fixed. A fixed search is a mixed search in which the final number of instances in S is defined by the user.

  13. Prototype Selection Taxonomy • Common properties: • Type of Selection. • Condensation. These methods retain the points which are closer to the decision boundaries, also called border points. • Edition. These methods seek to remove border points: they remove points that are noisy or do not agree with their neighbors. • Hybrid. These methods try to find the smallest subset S which maintains or even increases the generalization accuracy on test data.

  14. Prototype Selection Taxonomy • Common properties: • Evaluation of Search. • Filter. The kNN rule is applied to partial data to decide whether to add or remove an instance, and no leave-one-out validation scheme is used to estimate generalization accuracy. • Wrapper. The kNN rule is applied to the complete training set with a leave-one-out validation scheme. The combination of these two factors gives a better estimation of generalization accuracy, which helps to obtain better accuracy over test data.

  15. Prototype Selection Taxonomy • Common properties: • Criteria to Compare PS Methods. • Storage reduction • Noise tolerance • Generalization accuracy • Time requirements

  16. Prototype Selection Taxonomy Prototype Selection Methods (1):

  17. Prototype Selection Taxonomy Prototype Selection Methods (2):

  18. Prototype Selection Taxonomy Prototype Selection Methods (3):

  19. Prototype Selection Taxonomy Prototype Selection Methods (4):

  20. Prototype Selection Taxonomy Prototype Selection Methods (5):

  21. Prototype Selection Taxonomy Prototype Selection Methods (6):

  22. Prototype Selection Taxonomy Taxonomy

  23. Instance Selection • Introduction • Training Set Selection vs. Prototype Selection • Prototype Selection Taxonomy • Description of Methods • Related and Advanced Topics • Experimental Comparative Analysis in PS

  24. Description of Methods Condensation Condensed Nearest Neighbor (CNN) — This algorithm finds a subset S of the training set TR such that every member of TR is closer to a member of S of the same class than to a member of S of a different class. It begins by randomly selecting one instance belonging to each output class from TR and putting them in S. Then each instance in TR is classified using only the instances in S. If an instance is misclassified, it is added to S, thus ensuring that it will be classified correctly. This process is repeated until there are no instances in TR that are misclassified.
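A minimal NumPy sketch of the CNN loop described above; the function name cnn, the random seeding per class and the exact scan order are illustrative choices, not the authors' code.

```python
import numpy as np

def cnn(X, y, seed=0):
    """Condensed Nearest Neighbor: returns the indices of the selected subset S."""
    rng = np.random.default_rng(seed)
    S = [int(rng.choice(np.where(y == c)[0])) for c in np.unique(y)]  # one random instance per class
    changed = True
    while changed:                                    # repeat until no instance of TR is misclassified
        changed = False
        for i in range(len(X)):
            if i in S:
                continue
            dists = np.linalg.norm(X[np.array(S)] - X[i], axis=1)
            if y[np.array(S)][np.argmin(dists)] != y[i]:  # misclassified by 1-NN over S
                S.append(i)                               # add it so it will be classified correctly
                changed = True
    return np.array(S)
```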

  25. Description of Methods Condensation Fast Condensed Nearest Neighbor family (FCNN) — The FCNN1 algorithm starts by introducing into S the centroids of each class. Then, for each prototype p in S, its nearest enemy inside its Voronoi region is found and added to S. This process is performed iteratively until no enemies are found in an iteration. The FCNN2 algorithm is similar to FCNN1 but, instead of adding the nearest enemy of each Voronoi region, the centroid of the enemies found in the region is added. The FCNN3 algorithm is similar to FCNN1 but, instead of adding one prototype per region in each iteration, only one prototype is added (the one belonging to the Voronoi region with the most enemies). In FCNN3, S is initialized only with the centroid of the most populated class.
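A rough sketch of the FCNN1 iteration, under the assumption that each class "centroid" seed is taken as the training instance closest to the class mean; names are illustrative and ties are ignored.

```python
import numpy as np

def fcnn1(X, y):
    """FCNN1-style growth: add the nearest enemy of each prototype's Voronoi region."""
    S = [int(np.where(y == c)[0][np.argmin(np.linalg.norm(X[y == c] - X[y == c].mean(axis=0), axis=1))])
         for c in np.unique(y)]                            # seed: instance closest to each class mean
    while True:
        S_arr = np.array(S)
        # owner[i] = prototype in S whose Voronoi region contains instance i
        owner = S_arr[np.argmin(np.linalg.norm(X[:, None] - X[S_arr], axis=2), axis=1)]
        added = []
        for p in S_arr:
            region = np.where(owner == p)[0]               # Voronoi region of prototype p
            enemies = region[y[region] != y[p]]            # points of a different class than p
            if len(enemies):
                added.append(int(enemies[np.argmin(np.linalg.norm(X[enemies] - X[p], axis=1))]))
        if not added:                                      # no enemies found in this iteration
            return S_arr
        new = [a for a in added if a not in S]
        S.extend(new)
```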

  26. Description of Methods Condensation Reduced Nearest Neighbor (RNN) — RNN starts with S = TR and removes each instance from S if such a removal does not cause any other instance in TR to be misclassified by the instances remaining in S. It will always generate a subset of the result of the CNN algorithm.
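A hedged sketch of the RNN post-processing step, assuming S is the index set produced by CNN; predict and rnn are illustrative names.

```python
import numpy as np

def rnn(X, y, S):
    """Reduced Nearest Neighbor sketch: try to drop each instance of S in turn."""
    S = [int(s) for s in S]

    def predict(idx, x):                                   # 1-NN prediction using instances idx
        idx = np.array(idx)
        return y[idx][np.argmin(np.linalg.norm(X[idx] - x, axis=1))]

    for s in list(S):
        candidate = [i for i in S if i != s]
        # keep the removal only if every instance of TR is still classified correctly
        if candidate and all(predict(candidate, X[j]) == y[j] for j in range(len(X))):
            S = candidate
    return np.array(S)
```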

  27. Description of Methods Condensation Patterns by Ordered Projections (POP) — This algorithm eliminates the examples that are not within the limits of the regions to which they belong. To do so, each attribute is studied separately: the instances are sorted by that attribute, and a value called weakness, associated with each instance, is increased whenever the instance is not at a limit (border) in that projection. The instances whose weakness equals the number of attributes are eliminated.
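A simplified illustration of the weakness count (the real POP algorithm also handles ties in attribute values and uses the paper's exact border definition; pop_weakness is an illustrative name).

```python
import numpy as np

def pop_weakness(X, y):
    """Weakness sketch: an instance gains weakness on every attribute projection
    where it lies strictly inside a run of instances of its own class."""
    n, d = X.shape
    weakness = np.zeros(n, dtype=int)
    for a in range(d):
        order = np.argsort(X[:, a])                    # sort instances by attribute a
        for pos in range(1, n - 1):
            i = order[pos]
            if y[order[pos - 1]] == y[i] == y[order[pos + 1]]:
                weakness[i] += 1                       # not a border point in this projection
    return weakness

# POP removes the instances whose weakness equals the number of attributes:
# keep_mask = pop_weakness(X, y) < X.shape[1]
```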

  28. Description of Methods Edition Edited Nearest Neighbor (ENN) — Wilson developed this algorithm which starts with S = TR and then each instance in S is removed if it does not agree with the majority of its k nearest neighbors.
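A minimal sketch of Wilson's editing rule; here every instance is flagged against the full training set in a single pass, a common simplification.

```python
import numpy as np

def enn(X, y, k=3):
    """Edited Nearest Neighbor sketch: drop instances misclassified by their k nearest neighbors."""
    keep = np.ones(len(X), dtype=bool)
    for i in range(len(X)):
        dists = np.linalg.norm(X - X[i], axis=1)
        neighbors = np.argsort(dists)[1:k + 1]         # skip the instance itself
        labels, counts = np.unique(y[neighbors], return_counts=True)
        if labels[np.argmax(counts)] != y[i]:          # disagrees with the majority of its neighbors
            keep[i] = False
    return np.where(keep)[0]
```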

  29. Description of Methods Edition Multiedit — This method proceeds as follows.

  30. Description of Methods Edition Relative Neighborhood Graph Edition (RNGE) —

  31. Description of Methods Edition All KNN — All KNN is an extension of ENN. For i = 1 to k, the algorithm flags as bad any instance not classified correctly by its i nearest neighbors. When the loop has been completed k times, it removes the instances flagged as bad.
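A sketch of the All k-NN loop, under the same single-pass simplification as the ENN sketch above; all_knn is an illustrative name.

```python
import numpy as np

def all_knn(X, y, k=3):
    """All k-NN sketch: flag an instance as bad if any i-NN vote (i = 1..k) misclassifies it."""
    bad = np.zeros(len(X), dtype=bool)
    for idx in range(len(X)):
        dists = np.linalg.norm(X - X[idx], axis=1)
        order = np.argsort(dists)[1:k + 1]             # its k nearest neighbors, excluding itself
        for i in range(1, k + 1):
            labels, counts = np.unique(y[order[:i]], return_counts=True)
            if labels[np.argmax(counts)] != y[idx]:
                bad[idx] = True
                break
    return np.where(~bad)[0]                           # indices surviving the edit
```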

  32. Description of Methods Hybrid Instance-Based Learning Algorithms Family (IB3) — The IB3 algorithm proceeds as follows:

  33. Description of Methods Hybrid Decremental Reduction Optimization Procedure Family (DROP) — Each instance xi has k nearest neighbors, where k is typically a small odd integer. xi also has a nearest enemy, which is the nearest instance with a different output class. Those instances that have xi as one of their k nearest neighbors are called associates of xi.

  34. Description of Methods Hybrid • DROP1

  35. Description of Methods Hybrid • DROP2: In this method, the removal criterion can be restated as: remove xi if at least as many of its associates in TR would be classified correctly without xi. Using this modification, each instance xi in the original training set TR continues to maintain a list of its k + 1 nearest neighbors in S, even after xi is removed from S. DROP2 also changes the order of removal of instances: it initially sorts the instances in S by the distance to their nearest enemy. • DROP3: It is a combination of the DROP2 and ENN algorithms. DROP3 uses a noise-filtering pass before sorting the instances in S (Wilson ENN editing). After this, it works identically to DROP2.
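The with/without test at the heart of the DROP family can be sketched as below. The caller supplies the current subset S and the associates of xi; whether associates are taken from S (DROP1) or from TR (DROP2/DROP3) is the distinguishing choice. Function names are illustrative.

```python
import numpy as np

def knn_label(X, y, idx_pool, x, k=3):
    """Majority label of x among its k nearest neighbors drawn from idx_pool."""
    idx_pool = np.array(idx_pool)
    order = idx_pool[np.argsort(np.linalg.norm(X[idx_pool] - x, axis=1))[:k]]
    labels, counts = np.unique(y[order], return_counts=True)
    return labels[np.argmax(counts)]

def drop_removal_allowed(X, y, S, xi, associates, k=3):
    """True if at least as many associates are classified correctly without xi as with it."""
    with_xi    = sum(knn_label(X, y, S, X[a], k) == y[a] for a in associates)
    without_xi = sum(knn_label(X, y, [s for s in S if s != xi], X[a], k) == y[a]
                     for a in associates)
    return without_xi >= with_xi
```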

  36. Description of Methods Hybrid Iterative Case Filtering (ICF) — ICF defines the local set L(xi), which contains all cases inside the largest hypersphere centered on xi such that the hypersphere contains only cases of the same class as xi. The authors define two properties, reachability and coverage. In the first phase, ICF uses the ENN algorithm to remove noise from the training set. In the second phase, the ICF algorithm removes each instance xi for which Reachability(xi) is bigger than Coverage(xi). This procedure is repeated for each instance in TR. After that, ICF recalculates the reachability and coverage properties and restarts the second phase.
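Since the slide leaves the formal definitions out, the sketch below uses one common reading: L(xi) is bounded by xi's nearest enemy, Reachability(xi) = |L(xi)| and Coverage(xi) = number of instances whose local set contains xi. Treat it as an assumption-laden illustration.

```python
import numpy as np

def local_sets(X, y):
    """L(i): same-class points closer to instance i than i's nearest enemy."""
    L = []
    for i in range(len(X)):
        d = np.linalg.norm(X - X[i], axis=1)
        nearest_enemy = d[y != y[i]].min()             # radius of the largest same-class hypersphere
        L.append(set(np.where((d < nearest_enemy) & (y == y[i]))[0]))
    return L

def reachability_coverage(X, y):
    L = local_sets(X, y)
    reach = np.array([len(s) for s in L])                                # Reachability(i) = |L(i)|
    cover = np.array([sum(i in L[j] for j in range(len(L))) for i in range(len(L))])
    return reach, cover

# ICF's second phase removes every instance with reach > cover, then recomputes and repeats.
```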

  37. Description of Methods Hybrid Random Mutation Hill Climbing (RMHC) — It randomly selects a subset S from TR which contains a fixed number of instances s (given as a percentage of |TR|). In each iteration, the algorithm exchanges an instance from S with another from TR - S. The change is kept if it offers better accuracy.
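A sketch of the RMHC swap loop; the use of a separate validation set for the accuracy check and the parameter names are assumptions.

```python
import numpy as np

def rmhc(X, y, X_val, y_val, s, iterations=1000, seed=0):
    """Random Mutation Hill Climbing sketch: swap one selected instance per step,
    keeping the swap only if 1-NN accuracy improves."""
    rng = np.random.default_rng(seed)
    S = rng.choice(len(X), size=s, replace=False)      # random initial subset of fixed size s

    def acc(idx):
        pred = [y[idx][np.argmin(np.linalg.norm(X[idx] - v, axis=1))] for v in X_val]
        return np.mean(np.array(pred) == y_val)

    best = acc(S)
    for _ in range(iterations):
        new_S = S.copy()
        new_S[rng.integers(len(S))] = rng.choice(np.setdiff1d(np.arange(len(X)), S))
        new_acc = acc(new_S)                           # exchange one instance of S with one of TR - S
        if new_acc > best:                             # the change is kept only if accuracy improves
            S, best = new_S, new_acc
    return S
```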

  38. Description of Methods Hybrid Steady-state memetic algorithm (SSMA) —

  39. Instance Selection • Introduction • Training Set Selection vs. Prototype Selection • Prototype Selection Taxonomy • Description of Methods • Related and Advanced Topics • Experimental Comparative Analysis in PS

  40. Related and Advanced Topics Prototype Generation Prototype generation methods are not limited to selecting examples from the training set. They can also modify the values of the samples, changing their position in the d-dimensional space considered. Most of them use merging or divide-and-conquer strategies to create new artificial samples, or are based on clustering approaches, Learning Vector Quantization hybrids, advanced proposals and evolutionary-algorithm-based schemes.

  41. Related and Advanced Topics Distance Metrics, Feature Weighting and Combinations with Feature Selection This area refers to the combination of IS and PS methods with other well-known schemes used for improving accuracy in classification problems. For example, a weighting scheme combines PS with FS or Feature Weighting, where a vector of weights associated with the attributes determines and influences the distance computations.
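A short illustration of how such a weight vector enters the distance computation (weighted Euclidean distance; the names and example weights are illustrative).

```python
import numpy as np

def weighted_distance(a, b, w):
    """Weighted Euclidean distance: attributes with larger weights influence the metric more."""
    return np.sqrt(np.sum(w * (a - b) ** 2))

# Example: three attributes, the second one judged twice as relevant as the others.
d = weighted_distance(np.array([1.0, 2.0, 0.5]),
                      np.array([0.0, 1.0, 0.5]),
                      w=np.array([1.0, 2.0, 1.0]))
```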

  42. Related and Advanced Topics Hybridizations with Other Learning Methods and Ensembles This family includes all the methods which simultaneously use instances and rules in order to compute the classification of a new object. If the values of the object are within the range of a rule, its consequent predicts the class; otherwise, if no rule matches the object, the most similar rule or instance stored in the database is used to estimate the class. This area also covers ensemble learning, where an IS method is run several times and a classification decision is made according to the majority class obtained over the several subsets and a performance measure given by a learner.

  43. Related and Advanced Topics Scaling-Up Approaches One of the disadvantages of IS methods is that most of them have prohibitive run times or cannot even be applied to large data sets. Recent improvements in this field cover the stratification of data and the development of distributed approaches for PS.

  44. Related and Advanced Topics Data Complexity This area studies the effect on the complexity of the data when PS methods are applied prior to classification, and how to make a useful diagnosis of the benefits of applying PS methods by taking into account the complexity of the data.

  45. Instance Selection • Introduction • Training Set Selection vs. Prototype Selection • Prototype Selection Taxonomy • Description of Methods • Related and Advanced Topics • Experimental Comparative Analysis in PS

  46. Experimental Comparative Analysis in PS Framework • 10-fold cross validation (10-FCV). • Parameters recommended by the authors of the algorithms. • Euclidean distance. • Three runs for stochastic methods. • 42 PS methods involved. • 39 small data sets. • 19 medium data sets. • Reduction, accuracy, kappa and time as evaluation measures.
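Two of the evaluation measures listed above are easy to state precisely; the sketch below computes the reduction rate and Cohen's kappa using their standard definitions (not taken from the slides).

```python
import numpy as np

def reduction_rate(n_original, n_selected):
    """Fraction of the training set removed by the PS method."""
    return 1.0 - n_selected / n_original

def cohen_kappa(y_true, y_pred):
    """Cohen's kappa: agreement between predictions and true labels beyond chance."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    classes = np.unique(np.concatenate([y_true, y_pred]))
    p_o = np.mean(y_true == y_pred)                        # observed agreement
    p_e = sum(np.mean(y_true == c) * np.mean(y_pred == c)  # agreement expected by chance
              for c in classes)
    return (p_o - p_e) / (1.0 - p_e)
```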

  47. Experimental Comparative Analysis in PS Results in small data sets (1)

  48. Experimental Comparative Analysis in PS Results in small data sets (2)

  49. Experimental Comparative Analysis in PS Analysis in small data sets • Best condensation methods: FCNN and MCNN among the incremental ones, and RNN and MSS among the decremental ones. • Best edition methods: ENN, RNGE and NCNEdit obtain the best results in accuracy/kappa, while MENN and ENNTh offer a good tradeoff considering the reduction rate. • Best hybrid methods: CPruner, HMNEI, CCIS, SSMA, CHC and RMHC. • Best global methods: in terms of accuracy or kappa, MoCS, RNGE and HMNEI; considering the tradeoff between reduction and accuracy/kappa, RMHC, RNN, CHC, Explore and SSMA.

  50. Experimental Comparative Analysis in PS Results in medium data sets
