
Lecture 5-6 Object Detection – Boosting

Lecture 5-6 Object Detection – Boosting. Tae-Kyun Kim. Face Detection Demo. Robust real-time object detector, Viola and Jones, CVPR 01, implemented by Intel OpenCV. Multiclass object detection [Torralba et al PAMI 07].


Presentation Transcript


  1. Lecture 5-6 Object Detection – Boosting Tae-Kyun Kim

  2. Face Detection Demo Robust real-time object detector, Viola and Jones, CVPR 01. Implemented in Intel OpenCV.

  3. Multiclass object detection [Torralba et al PAMI 07] A boosting algorithm, originally for binary-class problems, has been extended to multi-class problems.

  4. Object Detection Input is a single image, given without any prior knowledge.

  5. Object Detection Output is a set of tight bounding boxes (positions and scales) of instances of a target object class (e.g. pedestrian).

  6. Object Detection We scan every scale and pixel location of an image.

  7. Number of windows It ends up with a huge number of candidate sub-windows: (# of scales) × (# of pixel locations). Number of windows: 747,666.
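The count on the slide (747,666) depends on the image size, base window size, scale factor, and stride, none of which survive in this transcript. A sketch of the enumeration with illustrative parameter values (not the ones behind the slide's number) could look like:

```python
def count_windows(img_w, img_h, base=24, scale_step=1.25, shift=1.0):
    """Enumerate scan windows over all scales and pixel locations.
    base: smallest window side; scale_step: factor between scales;
    shift: stride in units of the base window, scaled with the window.
    All parameter values here are illustrative assumptions."""
    total = 0
    size = base
    while size <= min(img_w, img_h):
        step = max(1, round(shift * size / base))  # stride grows with scale
        nx = (img_w - size) // step + 1            # horizontal positions
        ny = (img_h - size) // step + 1            # vertical positions
        total += nx * ny
        size = round(size * scale_step)            # next coarser scale
    return total

print(count_windows(640, 480))  # a large number of candidate sub-windows
```

Even at VGA resolution the count is in the hundreds of thousands, which is what makes per-window cost the bottleneck.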

  8. Time per window How much time are we given to process a single scanning window? Each window yields a feature vector (SIFT or raw pixels) of dimension D. Number of feature vectors to classify: 747,666.

  9. Time per window To finish the task in 1 sec, the time available per window (or feature vector) is 1/747,666 ≈ 0.00000134 sec. That is far too little for a neural network or a nonlinear SVM.
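The per-window budget on the slide is just the reciprocal of the window count, as a quick sanity check confirms:

```python
num_windows = 747_666               # window count from the slide
budget = 1.0 / num_windows          # seconds per window for a 1-second scan
print(f"{budget:.8f} sec")          # prints "0.00000134 sec"
```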

  10. Examples of face detection From Viola, Jones, 2001

  11. More traditionally… the search space is narrowed down by integrating visual cues [Darrell et al IJCV 00]: face pattern detection output (left), connected components recovered from stereo range data (middle), and flesh hue regions from skin hue classification (right).

  12. Since about 2001 (Viola & Jones 01)… “Boosting Simple Features” has been the dominant approach. • AdaBoost classification • Weak classifiers: Haar-basis-like functions (45,396 (>>T) in the total feature pool) • Strong classifier: a weighted vote of the selected weak classifiers.

  13. Introduction to Boosting Classifiers- AdaBoost (Adaptive Boosting)

  14. Sorry for the inconsistent notation…

  15. Boosting

  16. Boosting

  17. Principally, Boosting • iteratively reweights training samples, • assigning higher weights to previously misclassified samples. (Figure: the decision boundary after 1, 2, 3, 4, 5, and 50 rounds.)

  18. In the previous example, we consider all horizontal or vertical lines. For each line, we define two weak learners: one labelling one side +1 and the other side −1, and one with the labels swapped.

  19. AdaBoost M: the number of weak classifiers to choose. Initialise the sample weights w_n = 1/N. For m = 1, …, M: among all weak classifiers, choose the h_m that minimises the weighted error ε_m = Σ_n w_n [h_m(x_n) ≠ y_n]; set α_m = ½ ln((1 − ε_m)/ε_m); then update w_n ← w_n exp(−α_m y_n h_m(x_n)) and renormalise. The strong classifier is H(x) = sign(Σ_m α_m h_m(x)).
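Slide 19's procedure can be sketched in NumPy as discrete AdaBoost over decision stumps (a threshold on a single feature with either polarity, matching the "two weak learners per line" idea). The function names are mine, not from the lecture:

```python
import numpy as np

def train_adaboost(X, y, M):
    """Discrete AdaBoost with decision stumps; y in {-1, +1}."""
    n, d = X.shape
    w = np.full(n, 1.0 / n)          # uniform initial weights
    learners = []                     # (feature, threshold, polarity, alpha)
    for _ in range(M):
        best = None
        for j in range(d):            # search all candidate stumps
            for thr in np.unique(X[:, j]):
                for pol in (1, -1):
                    pred = np.where(pol * (X[:, j] - thr) >= 0, 1, -1)
                    err = w[pred != y].sum()   # weighted error eps_m
                    if best is None or err < best[0]:
                        best = (err, j, thr, pol, pred)
        err, j, thr, pol, pred = best
        err = np.clip(err, 1e-12, 1 - 1e-12)
        alpha = 0.5 * np.log((1 - err) / err)  # alpha_m
        w *= np.exp(-alpha * y * pred)         # up-weight the mistakes
        w /= w.sum()                           # renormalise
        learners.append((j, thr, pol, alpha))
    return learners

def predict(learners, X):
    """Strong classifier H(x) = sign(sum_m alpha_m h_m(x))."""
    score = np.zeros(len(X))
    for j, thr, pol, alpha in learners:
        score += alpha * np.where(pol * (X[:, j] - thr) >= 0, 1, -1)
    return np.sign(score)
```

The exhaustive stump search is exactly where the Haar-feature pool enters in Viola-Jones: each round scans the whole pool for the lowest weighted error.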

  20. Boosting

  21. Boosting as an optimisation framework

  22. Minimising Exponential Error
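Only the slide title survives in this transcript; the standard exponential-error view of AdaBoost, which the slide presumably walked through, is:

```latex
E \;=\; \sum_{n=1}^{N} \exp\bigl(-y_n f(x_n)\bigr),
\qquad
f(x) \;=\; \sum_{m=1}^{M} \alpha_m h_m(x),
\qquad y_n,\ h_m(x) \in \{-1,+1\}.
```

Minimising \(E\) over \(\alpha_m\) with the earlier terms held fixed yields \(\alpha_m = \tfrac12 \ln\bigl((1-\varepsilon_m)/\varepsilon_m\bigr)\), and the sample weights \(w_n \propto \exp(-y_n f_{m-1}(x_n))\) reproduce the reweighting rule of slide 19.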

  23. Existence of weak learners • Definition of a baseline learner • Given data weights w_n, the baseline classifier outputs the weighted-majority class label for all x. • Its weighted error is therefore at most ½. • Each weak learner in Boosting is demanded to beat this baseline, ε_m < ½ → the error of the composite hypothesis goes to zero as the boosting rounds increase [Duffy et al 00].

  24. Robust real-time object detector

  25. Boosting Simple Features [Viola and Jones CVPR 01] • AdaBoost classification • Weak classifiers: Haar-basis-like functions (45,396 in the total feature pool), evaluated on a 20×20 scan window • Strong classifier: a weighted vote of the selected weak classifiers.

  26. Learning (concept illustration) Face and non-face images are resized to 20×20 pixels (D = 400). Output: the selected weak learners.

  27. Evaluation (testing) For a given threshold, we apply the boosting classifier to every scan window. Non-maxima suppression is then performed to merge overlapping detections. From Viola, Jones, 2001.
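Slide 27's suppression step merges overlapping detections. A common greedy variant (not necessarily the exact merging rule Viola and Jones used) keeps the highest-scoring box and discards any box that overlaps it too much:

```python
def iou(a, b):
    """Intersection-over-union of boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union if union else 0.0

def nms(boxes, scores, thresh=0.5):
    """Greedy non-maxima suppression; returns kept box indices."""
    order = sorted(range(len(boxes)), key=lambda i: -scores[i])
    keep = []
    while order:
        i = order.pop(0)          # best remaining detection
        keep.append(i)
        order = [j for j in order if iou(boxes[i], boxes[j]) < thresh]
    return keep
```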

  28. How to accelerate boosting training and evaluation

  29. Integral Image • A value at (x,y) is the sum of the pixel values above and to the left of (x,y). • The integral image can be computed in one pass over the original image.

  30. Boosting Simple Features [Viola and Jones CVPR 01] • Integral image • The sum of the original image values within a rectangle can be computed from the four corner values of the integral image: Sum = A − B − C + D. • This enables fast evaluation of Haar-basis-like features.
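Slides 29-30 can be sketched directly: a one-pass integral image via cumulative sums, and a constant-time rectangle sum from four corner lookups (written out with explicit coordinates rather than the slide's A, B, C, D labels):

```python
import numpy as np

def integral_image(img):
    """ii(x, y) = sum of pixel values above and to the left of (x, y),
    inclusive. One pass over the image via cumulative sums."""
    return img.cumsum(axis=0).cumsum(axis=1)

def rect_sum(ii, top, left, h, w):
    """Sum of img[top:top+h, left:left+w] using 4 lookups in ii."""
    total = ii[top + h - 1, left + w - 1]
    if top > 0:
        total -= ii[top - 1, left + w - 1]      # strip above
    if left > 0:
        total -= ii[top + h - 1, left - 1]      # strip to the left
    if top > 0 and left > 0:
        total += ii[top - 1, left - 1]          # added back (subtracted twice)
    return total
```

A Haar-like feature is then just a few `rect_sum` calls (white rectangles minus black rectangles), so its cost is independent of the window scale.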

  31. Evaluation (testing) (Figure: the integral image value ii(x, y) at location (x, y).) From Viola, Jones, 2001.

  32. Boosting as a Tree-structured Classifier

  33. Boosting (very shallow network) • The strong classifier H, as boosted decision stumps, has a flat structure. • Cf. decision “ferns” have been shown to outperform “trees” [Zisserman et al, 07] [Fua et al, 07]. (Figure: a flat ensemble of stumps, each outputting c0 or c1.)

  34. Boosting – continued • Good generalisation is achieved by the flat structure. • It provides fast evaluation. • It does sequential optimisation. • Boosting Cascade [Viola & Jones 04], Boosting chain [Xiao et al]: • a very imbalanced tree structure; • it speeds up evaluation by rejecting easy negative samples at early stages; • it is hard to design (e.g. the number of weak classifiers per stage, T = 2, 5, 10, 20, 50, 100, …).

  35. A cascade of classifiers • The detection system requires a good detection rate and an extremely low false positive rate. • For a K-stage cascade, the overall false positive rate and detection rate are F = ∏ f_i and D = ∏ d_i, where f_i and d_i are the false positive and detection rates of the i-th classifier on the examples that get through to it. • The expected number of features evaluated per window is N = n_0 + Σ_i n_i ∏_{j<i} p_j, where n_i is the number of features in the i-th classifier and p_j is the proportion of windows that the j-th classifier passes on.
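The cascade arithmetic of slide 35 is simple enough to compute directly; a minimal sketch (the stage counts and rates below are made-up illustrations, not values from the lecture):

```python
import math

def cascade_rates(f, d):
    """Overall false positive rate F = prod(f_i) and
    detection rate D = prod(d_i) for per-stage rates f, d."""
    return math.prod(f), math.prod(d)

def expected_features(n, p):
    """Expected number of features evaluated per window:
    N = n[0] + sum_i n[i] * prod_{j<i} p[j],
    where p[j] is the fraction of windows stage j passes on."""
    total = n[0]
    reach = 1.0                 # fraction of windows reaching stage i
    for i in range(1, len(n)):
        reach *= p[i - 1]
        total += n[i] * reach
    return total
```

With, say, ten stages each passing 30% of windows at 99% detection, the cascade reaches a tiny overall false positive rate while most windows only ever see the first cheap stage.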

  36. Demo video: Fast evaluation

  37. Object Detection by a Cascade of Classifiers It speeds up object detection by coarse-to-fine search. Pictures from Romdhani et al. ICCV01
