
A Family of Online Boosting Algorithms


Presentation Transcript


  1. A Family of Online Boosting Algorithms. Boris Babenko (1), Ming-Hsuan Yang (2), Serge Belongie (1). 1. University of California, San Diego; 2. University of California, Merced. OLCV, Kyoto, Japan

  2. Motivation • Extending online boosting beyond supervised learning • Some algorithms exist (e.g. MIL, Semi-Supervised), but we would like a single framework [Oza ‘01, Grabner et al. ‘06, Grabner et al. ‘08, Babenko et al. ‘09]

  3. Boosting Review • Goal: learn a strong classifier H(x) as a combination of weak classifiers h(x; λ), where h is a weak classifier and λ is the learned parameter vector (see the form below)
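For reference, a minimal sketch of one standard additive form for the strong classifier, assuming M parameterized weak classifiers:

```latex
H(x) = \sum_{m=1}^{M} h(x; \lambda_m), \qquad \lambda = (\lambda_1, \dots, \lambda_M)
```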

  4. Greedy Optimization • Have some loss function • Have the strong classifier built so far • Find the next weak classifier that most reduces the loss (sketched below):
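A sketch of the greedy step under standard stage-wise boosting, assuming training pairs (x_i, y_i), a per-example loss ℓ, and H_{m-1} denoting the strong classifier built so far:

```latex
h_m = \arg\min_{h} \sum_{i} \ell\big(y_i,\; H_{m-1}(x_i) + h(x_i)\big)
```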

  5. Gradient Descent Review • Find some parameter vector that minimizes the loss
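The standard gradient descent update with step size η (generic notation, not necessarily the slide's):

```latex
\lambda^{(t+1)} = \lambda^{(t)} - \eta \, \nabla_{\lambda} \mathcal{L}\big(\lambda^{(t)}\big)
```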

  6. Stochastic Gradient Descent • If the loss over the entire training data can be split into a sum of per-example losses, we can use the following update:
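A minimal Python sketch of this per-example update; the helper name loss_grad and its signature are illustrative assumptions, not from the slides:

```python
import numpy as np

def sgd(params, data, loss_grad, lr=0.01, n_epochs=1):
    """Stochastic gradient descent: one step per training example."""
    for _ in range(n_epochs):
        for x, y in data:
            # gradient of the per-example loss at the current parameters
            params = params - lr * loss_grad(params, x, y)
    return params
```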

  7. Batch Stochastic Boosting (BSB) • Recall, we want to solve for the next weak classifier • What if we use stochastic gradient descent to find its parameters?

  8. Batch Stochastic Boosting (BSB)

  9. Batch Stochastic Boosting (BSB)
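Putting slides 7-9 together, one plausible minimal sketch of BSB in Python: grow the ensemble one weak classifier at a time and fit each one's parameters with stochastic gradient descent on the boosting loss. The helpers weak_output, weak_grad, and dloss are assumptions made for illustration:

```python
import numpy as np

def batch_stochastic_boosting(data, M, dim, weak_output, weak_grad, dloss,
                              lr=0.01, n_epochs=5):
    """Greedily add M weak classifiers; fit each one's parameters with SGD.

    weak_output(lam, x): value of the weak classifier h(x; lam)
    weak_grad(lam, x):   gradient of h(x; lam) w.r.t. lam
    dloss(y, score):     derivative of the per-example loss w.r.t. the score H(x)
    """
    ensemble = []  # fitted parameter vectors, one per weak classifier

    def strong(x):
        return sum(weak_output(lam, x) for lam in ensemble)

    for _ in range(M):
        lam = np.zeros(dim)  # parameters of the next weak classifier
        for _ in range(n_epochs):
            for x, y in data:
                score = strong(x) + weak_output(lam, x)
                # chain rule: d loss / d lam = (d loss / d H) * (d h / d lam)
                lam = lam - lr * dloss(y, score) * weak_grad(lam, x)
        ensemble.append(lam)
    return ensemble
```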

  10. Online Stochastic Boosting (OSB)

  11. Online Boosting Algorithms • For any differentiable loss function, we can derive a corresponding online boosting algorithm…

  12. Online Stochastic Boosting (OSB)
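Correspondingly, a minimal sketch of one online step for slides 10-12, assuming each incoming example triggers a gradient step on every weak classifier's parameters through the full strong classifier (the actual OSB listing may order or weight these updates differently):

```python
def osb_update(ensemble, x, y, weak_output, weak_grad, dloss, lr=0.01):
    """One online step: update every weak classifier's parameters on example (x, y).

    ensemble: list of parameter vectors lam_1, ..., lam_M (updated in place).
    """
    score = sum(weak_output(lam, x) for lam in ensemble)  # current strong score H(x)
    g = dloss(y, score)  # derivative of the loss w.r.t. the strong score
    for m, lam in enumerate(ensemble):
        # chain rule: d loss / d lam_m = (d loss / d H) * (d h(x; lam_m) / d lam_m)
        ensemble[m] = lam - lr * g * weak_grad(lam, x)
    return ensemble
```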

  13. Online Boosting for Regression • Loss: • Update rule:
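A hedged reconstruction of the missing equations, assuming squared error (η is the step size):

```latex
\ell\big(y, H(x)\big) = \tfrac{1}{2}\big(y - H(x)\big)^2, \qquad
\lambda_m \leftarrow \lambda_m + \eta\,\big(y - H(x)\big)\,\nabla_{\lambda_m} h(x; \lambda_m)
```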

  14. Multiple Instance Learning (MIL) Review • Training data: bags of instances and bag labels • Bag is positive if at least one member is positive

  15. Online Boosting for Multiple Instance Learning (MIL) • Loss: the Noisy-OR negative log-likelihood over bags [Viola et al. ‘05] (sketched below)
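A sketch of the Noisy-OR loss of Viola et al. ‘05, assuming bags x_i = {x_{i1}, x_{i2}, ...} with labels y_i and instance probabilities given by a sigmoid of the strong score:

```latex
p_{ij} = \sigma\big(H(x_{ij})\big), \qquad
p_i = 1 - \prod_{j}\big(1 - p_{ij}\big), \qquad
\mathcal{L} = -\sum_i \Big[\, y_i \log p_i + (1 - y_i)\log(1 - p_i) \,\Big]
```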

  16. Online Boosting for Multiple Instance Learning (MIL) • Update rule:
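Applying the chain rule to the loss above gives a generic per-bag update (a sketch; the slide's exact form may differ):

```latex
\lambda_m \leftarrow \lambda_m - \eta \sum_{j}
\frac{\partial \mathcal{L}}{\partial H(x_{ij})}\,
\nabla_{\lambda_m} h\big(x_{ij}; \lambda_m\big)
```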

  17. Results • So far, only empirical results • Compare: • OSB • BSB • standard batch boosting algorithm • linear & non-linear models trained with stochastic gradient descent (BSB with M=1)

  18. Binary Classification [LeCun et al. ‘98, Kanade et al. ‘00, Huang et al. ‘07]

  19. Regression [UCI Repository, Ranganathan et al. ‘08]

  20. Multiple Instance Learning [LeCun et al. ‘97, Andrews et al. ‘02]

  21. Comparing to Previous Work • Friedman’s “Gradient Boosting” framework = gradient descent in function space • OSB = gradient descent in parameter space • Similar to Neural Net methods (e.g. Ash et al. ‘89)

  22. Discussion • Advantages: • Easy to derive new Online Boosting algorithms for various problems / loss functions • Easy to implement • Disadvantages: • No theoretical guarantees yet • Restricted class of weak learners

  23. Thanks! • Research supported by: • NSF CAREER Grant #0448615 • NSF IGERT Grant DGE-0333451 • ONR MURI Grant #N00014-08-1-0638
