

  1. Statistical Template-Based Object Detection
  A Statistical Method for 3D Object Detection Applied to Faces and Cars, Henry Schneiderman and Takeo Kanade
  Rapid Object Detection using a Boosted Cascade of Simple Features, Paul Viola and Michael Jones
  Presenter: Derek Hoiem, CS 598, Spring 2009, Feb 24, 2009
  Some slides/figures from www.cs.cmu.edu/~efros/courses/AP06/presentations/Schneiderman-Kanade%20Viola-Jones%20presentation.ppt

  2. Goal: Detect all instances of an object category

  3. Influential Works in Detection
  • Sung-Poggio (1994, 1998): ~1260 citations. Basic idea of statistical template detection (I think), bootstrapping to get “face-like” negative examples, multiple whole-face prototypes (in 1994)
  • Rowley-Baluja-Kanade (1996-1998): ~2700 citations. “Parts” at fixed position, non-maxima suppression, simple cascade, rotation, pretty good accuracy, fast
  • Schneiderman-Kanade (1998-2000, 2004): ~1150 citations. Careful feature engineering, excellent results, cascade
  • Viola-Jones (2001, 2004): ~4400 citations. Haar-like features, Adaboost as feature selection, very fast, easy to implement
  • Dalal-Triggs (2005): ~400 citations. Careful feature engineering, excellent results, HOG feature, online code
  • Felzenszwalb-McAllester-Ramanan (2008): ~8 citations. Excellent template/parts-based blend

  4. Sliding window detection
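
A minimal sketch of this loop, assuming a single scale and a generic per-window scorer; score_window, the 24x24 window, the 2-pixel stride, and the threshold are all placeholder choices, not values fixed by either paper:

    def sliding_window_detect(image, score_window, win_h=24, win_w=24,
                              step=2, threshold=7.5):
        """Scan a fixed-size window over every position at one scale.

        score_window(patch) -> float is a hypothetical stand-in for
        either paper's classifier; window size, stride, and threshold
        are placeholder values for illustration.
        """
        H, W = image.shape
        detections = []
        for y in range(0, H - win_h + 1, step):
            for x in range(0, W - win_w + 1, step):
                patch = image[y:y + win_h, x:x + win_w]
                s = score_window(patch)
                if s > threshold:
                    detections.append((x, y, win_w, win_h, s))
        return detections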

  5. What the Detector Sees

  6. Statistical Template
  • Object model = log linear model of parts at fixed positions
  • [Figure: two example windows. One sums part scores +3 +2 −2 −1 −2.5 = −0.5, below the threshold 7.5, so it is labeled non-object; the other window’s part scores sum to 10.5 > 7.5, so it is labeled object]
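
To make the scoring rule concrete, a tiny sketch using the figure’s first set of numbers; in the real detectors the per-part scores are learned log likelihood ratios, not hand-set constants:

    # Sum per-part log likelihood ratio scores and threshold the total.
    # The numbers are the slide figure's illustrative values.
    part_scores = [3.0, 2.0, -2.0, -1.0, -2.5]
    total = sum(part_scores)                           # -0.5
    label = "object" if total > 7.5 else "non-object"
    print(total, label)                                # -0.5 non-object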

  7. Design challenges
  • Part design
    • How to model appearance
    • Which “parts” to include
    • How to set part likelihoods
  • How to make it fast
  • How to deal with different viewpoints
  • Implementation details
    • Window size
    • Aspect ratio
    • Translation/scale step size
    • Non-maxima suppression

  8. Schneiderman and Kanade

  9. Parts model
  • Part = group of wavelet coefficients that are statistically dependent

  10. Parts: groups of wavelet coefficients
  • Fixed parts within/across subbands
  • 17 types of parts
  • Discretize each wavelet coefficient to 3 values
  • E.g., a part with 8 coefficients has 3^8 = 6561 possible values
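
A sketch of one way such a part value could be computed, assuming each coefficient is quantized to 3 levels by two thresholds and the group is read as a base-3 number; the thresholds here are made up, and the paper’s actual discretization differs:

    import numpy as np

    def part_index(coeffs, t_low=-0.1, t_high=0.1):
        """Map a group of wavelet coefficients to one of 3^len(coeffs) codes.

        Each coefficient becomes 0, 1, or 2 (below t_low, between, above
        t_high); the group is then read as a base-3 number, so a part
        with 8 coefficients takes one of 3^8 = 6561 values. The two
        thresholds are hypothetical placeholders.
        """
        q = np.digitize(coeffs, [t_low, t_high])   # 0, 1, or 2 per coefficient
        index = 0
        for v in q:
            index = index * 3 + int(v)
        return index

    print(part_index([0.5, -0.3, 0.0, 0.2, -0.5, 0.05, 0.4, -0.2]))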

  11. Part Likelihood
  • Class-conditional likelihood ratio
  • Estimate P(part|object) and P(part|non-object) by counting over examples
  • Adaboost tunes weights discriminatively
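
A minimal sketch of the counting step, assuming part values arrive as integer codes (e.g., 0..6560 from the previous slide); the Laplace smoothing constant is an assumption added so empty histogram bins don’t give infinite ratios:

    import numpy as np

    def train_part_log_ratios(pos_codes, neg_codes, n_codes=6561, alpha=1.0):
        """Estimate log P(part|object) / P(part|non-object) by counting.

        pos_codes/neg_codes: quantized part values, one per training
        window. alpha is Laplace smoothing (an assumption; the paper's
        exact smoothing may differ).
        """
        pos_hist = np.bincount(np.asarray(pos_codes), minlength=n_codes) + alpha
        neg_hist = np.bincount(np.asarray(neg_codes), minlength=n_codes) + alpha
        pos_p = pos_hist / pos_hist.sum()
        neg_p = neg_hist / neg_hist.sum()
        return np.log(pos_p / neg_p)    # lookup table: code -> part score

A window’s overall score is then just the sum of table lookups over its parts, as on the Testing slide below.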

  12. Training
  • Create training data
    • Get positive and negative patches
    • Pre-process (optional), compute wavelet coefficients, discretize
    • Compute parts values
  • Learn statistics
    • Compute ratios of histograms by counting for positive and negative examples
    • Reweight examples using Adaboost, recount, etc. More on this later
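
The bootstrapping idea from the Influential Works slide (used again in the Viola-Jones details below) in sketch form; detector and sample_windows are hypothetical callables standing in for the real pipeline:

    def collect_hard_negatives(detector, negative_images, sample_windows):
        """Bootstrapping: keep windows from object-free images that the
        current detector wrongly accepts; these "hard" false positives
        become the negative set for the next training round.

        detector(patch) -> bool and sample_windows(image) -> iterable of
        patches are placeholder interfaces, not either paper's API.
        """
        hard = []
        for im in negative_images:
            for patch in sample_windows(im):
                if detector(patch):    # false positive on an object-free image
                    hard.append(patch)
        return hard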

  13. Training multiple viewpoints
  • Train a new detector for each viewpoint

  14. Testing
  • Processing:
    • Lighting correction (optional)
    • Compute wavelet coefficients, quantize
  • Slide window over each position/scale (2-pixel translation step, 2^(1/4) scale step)
  • Compute part values
  • Lookup likelihood ratios
  • Sum over parts
  • Threshold
  • Use faster classifier to prune patches (cascade)… more on this later
  • Non-maximum suppression
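
A small sketch of the scale enumeration implied by the 2^(1/4) step, assuming a fixed-size window and treating “scan a window of size win*s” as equivalent to scanning an image rescaled by 1/s:

    def scale_pyramid(H, W, win=24, factor=2 ** 0.25):
        """Enumerate the scales a fixed-size window is scanned at.

        Stops once the scaled window no longer fits in the image. The
        window size is a placeholder; the 2^(1/4) factor matches the
        step quoted on the slide.
        """
        scales = []
        s = 1.0
        while min(H, W) / s >= win:
            scales.append(s)
            s *= factor
        return scales

    print(scale_pyramid(480, 640))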

  15. Results: faces
  • 208 images with 441 faces, 347 in profile

  16. Results: cars

  17. Results: faces today

  18. Viola and Jones
  • Fast detection through two mechanisms

  19. Integral Images
  • “Haar-like features”: differences of sums of intensity
  • Millions, computed at various positions and scales within the detection window
  • [Figure: two-rectangle and three-rectangle features, etc., built from adjacent rectangles weighted +1 and −1]

  20. Integral Images
  • ii = cumsum(cumsum(Im, 1), 2)
  • ii(x,y) = sum of the values in the grey region (everything above and to the left of (x,y))
  • How to compute B − A? How to compute A + D − B − C?
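
The slide’s cumsum construction and the four-lookup rectangle sum, as runnable numpy; the boundary guards for rectangles touching the image edge are one common convention, not necessarily the papers’ exact indexing:

    import numpy as np

    def integral_image(im):
        # ii(x, y) = sum of all pixels above and to the left, inclusive;
        # the same cumsum-of-cumsum as on the slide.
        return np.cumsum(np.cumsum(im, axis=0), axis=1)

    def rect_sum(ii, y0, x0, y1, x1):
        """Sum of im[y0:y1+1, x0:x1+1] from four lookups (the A+D-B-C trick)."""
        total = ii[y1, x1]
        if y0 > 0:
            total -= ii[y0 - 1, x1]
        if x0 > 0:
            total -= ii[y1, x0 - 1]
        if y0 > 0 and x0 > 0:
            total += ii[y0 - 1, x0 - 1]
        return total

    im = np.arange(16.0).reshape(4, 4)
    ii = integral_image(im)
    assert rect_sum(ii, 1, 1, 2, 2) == im[1:3, 1:3].sum()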

  21. Adaboost as feature selection
  • Create a large pool of parts (180K)
  • “Weak learner” = feature + threshold + parity
  • Choose the weak learner that minimizes error on the weighted training set
  • Reweight
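
A compact sketch of this selection loop, assuming all feature values are precomputed into a matrix; the brute-force threshold search stands in for the paper’s optimized sorted search:

    import numpy as np

    def adaboost_select(F, y, n_rounds=10):
        """AdaBoost where each weak learner is one feature + threshold + parity.

        F: (n_examples, n_features) array of precomputed feature values.
        y: labels in {-1, +1}. Each round picks the single weak learner
        with lowest weighted error, then reweights the examples.
        """
        n, m = F.shape
        w = np.full(n, 1.0 / n)                 # example weights
        strong = []
        for _ in range(n_rounds):
            best = None                          # (err, j, theta, parity, pred)
            for j in range(m):
                for theta in np.unique(F[:, j]):
                    for parity in (1, -1):
                        pred = np.where(parity * (F[:, j] - theta) > 0, 1, -1)
                        err = w[pred != y].sum()
                        if best is None or err < best[0]:
                            best = (err, j, theta, parity, pred)
            err, j, theta, parity, pred = best
            alpha = 0.5 * np.log((1 - err) / max(err, 1e-10))
            w *= np.exp(-alpha * y * pred)       # upweight mistakes
            w /= w.sum()
            strong.append((alpha, j, theta, parity))
        return strong                            # H(x) = sum of alpha * h(x)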

  22. Sidebar: Adaboost

  23. Adaboost

  24. Adaboost: “RealBoost”
  • Important special case: h_t partitions the input space, and each cell of the partition outputs the weighted log-odds of the classes, folding the weight α_t into the weak learner itself
  • Figure from Friedman et al. 1999

  25. Adaboost: Immune to Overfitting?
  • [Figure: train and test error vs. number of boosting rounds; test error can keep falling after train error levels off]

  26. Interpretations of Adaboost
  • Additive logistic regression (Friedman et al. 2000, who propose LogitBoost)
    • Collins et al. 2002 make this more explicit
  • Margin maximization (Schapire et al. 1998)
    • Rätsch and Warmuth 2002 do this more explicitly

  27. Adaboost: Margin Maximizer
  • [Figure: train and test error together with the margin distribution over boosting rounds]

  28. Interpretations of Adaboost
  • Rosset, Zhu, and Hastie 2004
    • Early stopping is a form of L1-regularization
    • In many cases, converges to the “L1-optimal” separating hyperplane
    • “An interesting fundamental similarity between boosting and kernel support vector machines emerges, as both can be described as methods for regularized optimization in high-dimensional predictor space, utilizing a computational trick to make the calculation practical, and converging to margin-maximizing solutions.”

  29. Back to recognition

  30. Cascade for Fast Detection
  • Choose thresholds for a low false negative rate
  • Fast classifiers early in the cascade
  • Slow classifiers later, but most examples don’t get there
  • [Diagram: examples flow through Stage 1 (H1(x) > t1?), Stage 2 (H2(x) > t2?), …, Stage N (HN(x) > tN?); each “yes” advances to the next stage, the last “yes” is a Pass, and any “no” is an immediate Reject]
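
A sketch of cascade evaluation, assuming each trained stage reduces to a (score function, threshold) pair; how stages are trained and how thresholds are picked is omitted:

    def cascade_classify(x, stages):
        """Attentional cascade: stages = [(score_fn, threshold), ...].

        Each stage's threshold is tuned for a very low false negative
        rate, so a sub-threshold score safely rejects the window and the
        later, more expensive stages never run on it.
        """
        for score_fn, t in stages:
            if score_fn(x) <= t:
                return False    # early reject: the common, cheap path
        return True             # survived every stage: report a detection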

  31. Viola-Jones details
  • 38 stages with 1, 10, 25, 50, … features
    • 6061 features used in total, out of 180K candidates
    • 10 features evaluated on average per window
  • Examples
    • 4916 positive examples
    • 10000 negative examples collected after each stage
  • Scanning
    • Scale the detector rather than the image
    • Scale step = 1.25; translation step from 1·s to 1.5·s (s = current scale)
  • Non-max suppression: average coordinates of overlapping boxes
  • Train 3 classifiers and take a vote
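
A hedged sketch of the averaging non-max suppression, assuming greedy single-link grouping with an IoU cutoff (the slide only says overlapping boxes are averaged; the grouping rule and the cutoff here are assumptions):

    import numpy as np

    def overlap(a, b):
        # Intersection-over-union of two (x, y, w, h) boxes.
        ax0, ay0, aw, ah = a
        bx0, by0, bw, bh = b
        ix = max(0.0, min(ax0 + aw, bx0 + bw) - max(ax0, bx0))
        iy = max(0.0, min(ay0 + ah, by0 + bh) - max(ay0, by0))
        inter = ix * iy
        return inter / (aw * ah + bw * bh - inter)

    def merge_detections(boxes, min_iou=0.3):
        """Group overlapping boxes, then replace each group by its mean box."""
        groups = []
        for b in boxes:
            for g in groups:
                if any(overlap(b, m) > min_iou for m in g):
                    g.append(b)
                    break
            else:
                groups.append([b])
        return [tuple(np.mean(g, axis=0)) for g in groups]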

  32. Viola-Jones Results
  • MIT + CMU face dataset

  33. Schneiderman later results
  • [Figure: results comparison of Schneiderman 2004, Viola-Jones 2001, Roth et al. 1999, and Schneiderman-Kanade 2000]

  34. Speed
  • Schneiderman-Kanade: 1 minute for 3 viewpoints
  • Viola-Jones: 15 fps for frontal

  35. Important Ideas and Tricks
  • Excellent results require careful feature engineering
  • Speed = fast features (integral image) + cascade
  • Adaboost for feature selection
  • Bootstrapping to deal with many, many negative examples

  36. Occlusions?
  • A problem
  • Objects occluded by > 50% considered “don’t care”
  • PASCAL VOC changed this

  37. Strengths and Weaknesses of Statistical Template Approach
  Strengths
  • Works very well for non-deformable objects: faces, cars, upright pedestrians
  • Fast detection
  Weaknesses
  • Not so well for highly deformable objects
  • Not robust to occlusion
  • Requires lots of training data

  38. SK vs. VJ
  Schneiderman-Kanade
  • Wavelet features
  • Log linear model via boosted histogram ratios
  • Bootstrap training
  • Two-stage cascade
  • NMS: remove overlapping weaker boxes
  • Slow but very accurate
  Viola-Jones
  • Haar-like features (similar to Haar wavelets)
  • Log linear model via boosted stumps
  • Bootstrap training
  • Multi-stage cascade, integrated into training
  • NMS: average coordinates of overlapping boxes
  • Less accurate but very fast

  39. BB discussion http://www.nicenet.org/ICA/class/conf_topic_show.cfm?topic_id=643240
