
A Robust Bagging Method using Median as a Combination Rule



  1. A Robust Bagging Method using Median as a Combination Rule Zaman Md. Faisal and Hideo Hirose Department of Information Design and Informatics Kyushu Institute of Technology Fukuoka, Japan

  2. Contents of the Study
  • Bagging Algorithm
  • Comparative View of Bagging and Bragging Procedure
  • Nice Bagging Algorithm
  • Robust Bagging (Robag) Procedure
  • Classifiers Used in the Study
  • Datasets Used in the Study
  • Relative Improvement Measure
  • Results Based on Different Classifiers
  • Conclusion
  • References

  3. Objectives and Features of the Study
  Objectives of the study:
  1) Propose a robust bagging algorithm that a) performs comparatively well with linear classifiers (FLD, NMC) and b) overcomes the overfitting problem.
  Features of the study:
  1) A new bagging algorithm named "Robust Bagging (Robag)".
  2) A Relative Improvement Measure (RIM) to measure the relative improvement of bagging algorithms over the base classifiers.
  3) A comparison of four bagging algorithms, namely bagging [1], bragging [2], nice bagging [3] and Robag, using the RIM.

  4. Standard Bagging Algorithm
  • Proposed by Leo Breiman in 1994 (published in 1996 [1]).
  • The bagging algorithm:
  1. Create B bootstrap replicates of the dataset.
  2. Fit a model to each of the replicates.
  3. Average (or vote) the predictions of the B models.
  • In each bootstrap sample about 63% of the distinct data points appear; the remaining ~37% are left out (these are called out-of-bag samples).
  • To exploit this variation the base classifier should be unstable. Examples of unstable classifiers are decision trees, neural networks, MARS, etc.
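
A minimal Python sketch of these three steps, using a decision tree as the unstable base classifier (the function names and the use of scikit-learn are my own illustration, not part of the original study):

```python
import numpy as np
from sklearn.base import clone
from sklearn.tree import DecisionTreeClassifier

def bagging_fit(X, y, base=None, B=50, seed=0):
    """Steps 1-2: draw B bootstrap replicates and fit one model to each."""
    rng = np.random.default_rng(seed)
    base = base if base is not None else DecisionTreeClassifier()
    n = len(y)
    models = []
    for _ in range(B):
        idx = rng.integers(0, n, size=n)              # sample n points with replacement
        models.append(clone(base).fit(X[idx], y[idx]))
    return models

def bagging_predict(models, X):
    """Step 3: combine the B predictions by majority vote (integer class labels assumed)."""
    votes = np.stack([m.predict(X) for m in models])  # shape (B, n_test)
    return np.apply_along_axis(
        lambda v: np.bincount(v.astype(int)).argmax(), 0, votes)
```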

  5. Bragging Algorithm
  Bragging (Bootstrap Robust Aggregating) was proposed by P. Bühlmann in 2003 [2]. In bragging:
  1) A robust location estimator, the median, is used instead of the mean to combine the multiple classifiers.
  2) It was proposed to improve the performance of MARS (Multivariate Adaptive Regression Splines).
  3) In the case of CART, it performs quite similarly to bagging.
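
For classification, the median combination can be realized on the class-membership scores of the individual classifiers. The sketch below assumes each base classifier exposes such scores through a predict_proba-style method; this output type is my assumption, the slides do not fix it.

```python
import numpy as np

def bragging_predict(models, X):
    # Stack the per-classifier class scores: shape (B, n_test, n_classes).
    scores = np.stack([m.predict_proba(X) for m in models])
    # Aggregate with the median instead of the mean, so a few extreme
    # classifiers have little influence on the combined score.
    return np.median(scores, axis=0).argmax(axis=1)
```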

  6. Bagging and Bragging Algorithm (Comparative View)
  [Flow diagram: the training set T is bootstrapped into multiple training sets T1 … TB (with out-of-bag samples O1 … OB); multiple versions C1 … CB of the base classifier are built on them; their outputs are combined into the final classifier CCOM, by majority voting in standard bagging and by the median of the classifier outputs in bragging.]

  7. Nice Bagging
  • The nice bagging algorithm was proposed by Skurichina and Duin in 1998 [3].
  • They proposed to use a validation (tuning) set to validate the classifiers before combining; the bootstrapped training sets were used for this validation.
  • They selected only the "nice" classifiers, i.e., classifiers whose misclassification error is less than the APER (apparent error rate).
  • They combined the coefficients of the linear classifiers using the "average" rule.
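
The selection-and-averaging step might look like the following sketch for a linear discriminant. The acceptance rule (bootstrap-training error below the APER of the classifier trained on the full data) and the coefficient averaging follow the description above; the helper names and the scikit-learn classes are assumptions.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis as LDA

def nice_bagging_fit(X, y, B=50, seed=0):
    rng = np.random.default_rng(seed)
    n = len(y)
    full = LDA().fit(X, y)
    aper = 1.0 - full.score(X, y)                    # apparent error on the training data
    kept = []
    for _ in range(B):
        idx = rng.integers(0, n, size=n)             # bootstrapped training set
        clf = LDA().fit(X[idx], y[idx])
        if 1.0 - clf.score(X[idx], y[idx]) < aper:   # keep only the "nice" classifiers
            kept.append(clf)
    if not kept:                                     # fall back to the full-data classifier
        return full
    # Combine by averaging the linear coefficients of the accepted classifiers.
    full.coef_ = np.mean([c.coef_ for c in kept], axis=0)
    full.intercept_ = np.mean([c.intercept_ for c in kept], axis=0)
    return full
```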

  8. Robust Bagging (Robag)
  In the Robag algorithm:
  • We use the out-of-bag (OOB) samples as the validation set, since for validation it is better to use data that are independent of the training set (the OOB samples are independent of the bootstrap training samples).
  • We also use the median as the combiner, so that any extreme result produced by an individual classifier is automatically filtered out.
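
A minimal sketch of this idea, with the same caveats as before: the acceptance rule shown here (keep a classifier if its OOB error does not exceed the apparent error of the base classifier) is my reading of the validation step, and the helper names are hypothetical.

```python
import numpy as np
from sklearn.base import clone
from sklearn.tree import DecisionTreeClassifier

def robag_fit(X, y, base=None, B=50, seed=0):
    rng = np.random.default_rng(seed)
    base = base if base is not None else DecisionTreeClassifier()
    n = len(y)
    aper = 1.0 - clone(base).fit(X, y).score(X, y)   # apparent error of the base classifier
    kept = []
    for _ in range(B):
        idx = rng.integers(0, n, size=n)             # bootstrap training indices
        oob = np.setdiff1d(np.arange(n), idx)        # out-of-bag indices for this replicate
        clf = clone(base).fit(X[idx], y[idx])
        if oob.size and 1.0 - clf.score(X[oob], y[oob]) <= aper:   # validate on OOB samples
            kept.append(clf)
    return kept

def robag_predict(models, X):
    scores = np.stack([m.predict_proba(X) for m in models])   # (n_kept, n_test, n_classes)
    return np.median(scores, axis=0).argmax(axis=1)           # median combination
```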

  9. Robust Bagging (Robag) contd.
  [Flow diagram: the training set T is bootstrapped into training sets T1 … TB with out-of-bag samples O1 … OB; multiple versions C1 … CB of the base classifier are built on the bootstrap sets and validated on the OOB samples, giving validated classifiers V1 … VB; their outputs are combined with the median into the final classifier CCOM.]

  10. Relative Improvement Measure (RIM)
  To check whether bagging or any other variant of bagging improves or degrades the performance of the base classifier, we use a relative improvement measure:
  Relative Improvement = (Err_base − Err_bagging) / Err_base
  Here, Err_base is the test error of the base classifier and Err_bagging is the test error of the bagged classifier. So the RIM measures the decrease (or increase) of the test error of the bagged classifier relative to the base classifier.
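
As a worked example with made-up numbers: a base classifier with a test error of 0.20 and a bagged classifier with a test error of 0.16 give a RIM of (0.20 − 0.16) / 0.20 = 0.20, i.e. a 20% relative reduction in error, while a negative RIM means bagging degraded the base classifier.

```python
def rim(err_base, err_bagging):
    """Relative improvement of the bagged classifier over the base classifier."""
    return (err_base - err_bagging) / err_base

print(rim(0.20, 0.16))   # 0.20  -> 20% relative reduction in test error
print(rim(0.20, 0.24))   # -0.20 -> bagging degraded the classifier here
```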

  11. Classifiers and the Datasets Used
  We use an unstable classifier, the tree classifier CART, and two stable (linear) classifiers, i.e., the Fisher Linear Discriminant (FLD) and the Nearest Mean Classifier (NMC). The stable classifiers are included to check the performance of the Robag algorithm, since bagging algorithms usually do not perform well with stable classifiers. We use 5 of the well-known UCI (University of California, Irvine) Machine Learning Repository datasets [4]. N = number of observations, q = number of features.

  12. Experimental Setup
  In the experiments:
  1) We randomly divide each dataset into two parts: a training part (80% of the data) and a testing part (20% of the data).
  2) We repeat this random partitioning 50 times.
  3) In each partition we a) calculate the APER, b) use 50 bootstrap replicates to generate the bagged classifiers, and c) calculate the RIM.
  4) We average the results over the 50 iterations.
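
Put together, the protocol could be scripted roughly as below, reusing the bagging_fit/bagging_predict sketch from slide 4. The 80/20 split, the 50 repetitions and the 50 bootstrap replicates come from the slide; everything else (scikit-learn utilities, integer class labels) is assumed.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

def mean_rim(X, y, n_repeats=50, B=50):
    rims = []
    for seed in range(n_repeats):
        # 1) random 80/20 train/test split
        X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=seed)
        # 3a) base classifier; its resubstitution error on X_tr would give the APER
        base = DecisionTreeClassifier(random_state=seed).fit(X_tr, y_tr)
        err_base = 1.0 - base.score(X_te, y_te)
        # 3b) bagged classifier built from B bootstrap replicates
        models = bagging_fit(X_tr, y_tr, B=B, seed=seed)
        err_bag = np.mean(bagging_predict(models, X_te) != y_te)
        # 3c) relative improvement for this partition
        rims.append((err_base - err_bag) / err_base)
    # 4) average over the random partitions
    return float(np.mean(rims))
```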

  13. Results of CART
  Table: Mean relative improvements in error rate of Bagging, Robust Bagging, Nice Bagging and Bragging with respect to a classification tree.

  14. Results of FLD
  Table: Mean relative improvements in error rate of Bagging, Robust Bagging, Nice Bagging and Bragging with respect to FLD.

  15. Results of NMC
  Table: Mean relative improvements in error rate of Bagging, Robust Bagging, Nice Bagging and Bragging with respect to NMC.

  16. Conclusion
  We see from the results that the Robag algorithm:
  1) Performed nearly similarly to bagging and its variants when applied to CART on 2 datasets, and performed better on 1 dataset.
  2) Performed well on 4 datasets when applied to FLD.
  3) Performed well on 4 datasets when applied to NMC.
  So we can say that the Robag algorithm, when applied with linear classifiers, performed better than the other bagging variants.

  17. References
  [1] L. Breiman, "Bagging Predictors", Machine Learning, 24, 1996, pp. 123-140.
  [2] P. Bühlmann, "Bagging, subbagging and bragging for improving some prediction algorithms", in Recent Advances and Trends in Nonparametric Statistics, M. G. Akritas and D. N. Politis (Eds.), Elsevier, 2003, pp. 9-34.
  [3] M. Skurichina, R. P. W. Duin, "Bagging for linear classifiers", Pattern Recognition, 31, 1998, pp. 909-930.
  [4] UCI Machine Learning Repository, http://www.ics.uci.edu/~mlearn/MLRepository.html
