
Face Detection using the Spectral Histogram representation



Presentation Transcript


  1. Face Detection using the Spectral Histogram representation. By: Christopher Waring, Xiuwen Liu, Department of Computer Science, Florida State University. Presented by: Tal Blum (blum+@cs.cmu.edu)

  2. Sources • The presentation is based on a few resources by the authors: • Exploration of the Spectral Histogram for Face Detection – M.Sc thesis by Christopher Waring (2002) • Spectral Histogram Based Face Detection – IEEE (2003) • Rotation Invariant Face Detection Using Spectral Histograms & SVM – CVPR submission • Independent Spectral Representation of images for Recognition – Optical Society of America (2003)

  3. Overview • Spectral Histogram • Overview of Gibbs Sampling + Simulated Annealing • Method for Lighting Normalization • Data used • 3 Algorithms • SH + Neural Networks • SH + SVM • Rotation Invariant SH + SVM • Experimental Results • Conclusions & Discussion

  4. Two Approaches to Object Detection • Curse of dimensionality • Features should be (Vasconcelos): independent and have low Bayes error • 2 main approaches in object detection: • Complicated features with many interactions • require many data points • use synthetic variations that mimic the real variations • estimation error might be high • Assuming a model or parameter structure • small set of features or small number of values • this is the case for Spectral Histograms • the Bayes error might be high (Vasconcelos) • estimation error is low

  5. Why Spectral Histograms? • Translation invariant, therefore insensitive to incorrect alignment • (Surprisingly) seem to separate objects from non-objects well • Good performance with a very small feature set • Good performance with a large degree of rotation invariance • Do not rely at all on any global spatial information • Combining variant and invariant features will play a more important role

  6. What is a Spectral Histogram?
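This slide is an image in the original deck, so it is worth spelling the representation out: a spectral histogram is the concatenation of the marginal histograms of a window's responses to a bank of linear filters. The sketch below is a minimal illustration; the bin count, value range, and toy two-filter bank are assumptions, not the authors' settings.

```python
import numpy as np
from scipy.signal import convolve2d

def spectral_histogram(image, filters, n_bins=11, vrange=(-255.0, 255.0)):
    """Concatenate the normalized marginal histograms of all filter responses."""
    feats = []
    for f in filters:
        response = convolve2d(image, f, mode="same", boundary="symm")
        hist, _ = np.histogram(response, bins=n_bins, range=vrange)
        feats.append(hist / max(hist.sum(), 1))        # normalize each marginal
    return np.concatenate(feats)

# Toy usage: a 21x21 window with an intensity filter and an x-gradient filter.
window = np.random.rand(21, 21) * 255
bank = [np.array([[1.0]]), np.array([[-1.0, 1.0]])]
print(spectral_histogram(window, bank).shape)          # -> (2 * n_bins,)
```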

  7. Types of Filters • 3 types of filters: • Gradient filters • Gabor filters • Laplacian of Gaussian filters • The exact composition of the filter bank is different for each algorithm.
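For concreteness, here is a hedged sketch of how the three filter families could be built; the supports, scales, and normalizations are assumptions rather than the exact filters used by any of the three algorithms.

```python
import numpy as np

def gradient_filters():
    """First/second difference (gradient) filters Dx, Dy, Dxx, Dyy."""
    dx = np.array([[-1.0, 1.0]])
    dy = dx.T
    dxx = np.array([[1.0, -2.0, 1.0]])
    dyy = dxx.T
    return [dx, dy, dxx, dyy]

def log_filter(sigma, size=None):
    """Laplacian-of-Gaussian filter with scale sigma (support size assumed)."""
    size = size or int(6 * sigma) | 1                 # odd support
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    r2 = xx**2 + yy**2
    return (r2 - 2 * sigma**2) / sigma**4 * np.exp(-r2 / (2 * sigma**2))

def gabor_filter(wavelength, theta_deg, sigma=None, size=None):
    """Even (cosine) Gabor filter with period `wavelength` and orientation theta."""
    sigma = sigma or 0.5 * wavelength                 # envelope width is an assumption
    size = size or int(6 * sigma) | 1
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    t = np.deg2rad(theta_deg)
    xr = xx * np.cos(t) + yy * np.sin(t)
    envelope = np.exp(-(xx**2 + yy**2) / (2 * sigma**2))
    return envelope * np.cos(2 * np.pi * xr / wavelength)
```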

  8. Gibbs Sampling + Simulated Annealing • We want to sample from the set of images whose spectral histogram matches the observed one • We can use the induced Gibbs distribution over images • Algorithm: • Repeat • Randomly pick a pixel location • Change the pixel value according to the conditional distribution q • Until the histogram of every filter is within tolerance of the target
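A rough sketch of this loop, under stated assumptions: the energy is taken as the L1 distance between the current and target spectral histograms, and the candidate pixel values, cooling schedule, and stopping tolerance are illustrative. Histograms are recomputed from scratch at every step here; a real implementation would update them incrementally.

```python
import numpy as np
from scipy.signal import convolve2d

def filter_hists(img, filters, n_bins=11, vrange=(-255.0, 255.0)):
    """One normalized marginal histogram per filter response (bins/range assumed)."""
    out = []
    for f in filters:
        r = convolve2d(img, f, mode="same", boundary="symm")
        h, _ = np.histogram(r, bins=n_bins, range=vrange)
        out.append(h / max(h.sum(), 1))
    return out

def synthesize(target, filters, shape=(21, 21), values=np.arange(0, 256, 16),
               t0=1.0, cooling=0.999, eps=0.02, max_iter=20000, seed=0):
    """Anneal a random image until every filter's histogram is within eps of target."""
    rng = np.random.default_rng(seed)
    img = rng.choice(values, size=shape).astype(float)
    temp = t0
    for _ in range(max_iter):
        y, x = rng.integers(shape[0]), rng.integers(shape[1])
        # Energy of each candidate value = L1 distance between histograms (assumed).
        energies = []
        for v in values:
            img[y, x] = v
            h = filter_hists(img, filters)
            energies.append(sum(np.abs(a - b).sum() for a, b in zip(h, target)))
        energies = np.array(energies)
        p = np.exp(-(energies - energies.min()) / temp)
        img[y, x] = rng.choice(values, p=p / p.sum())   # Gibbs update at one pixel
        temp *= cooling                                  # simulated annealing
        h = filter_hists(img, filters)
        if all(np.abs(a - b).sum() < eps for a, b in zip(h, target)):
            break                                        # every filter is close enough
    return img
```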

  9. Face Synthesis using Gibbs Sampling + Simulated Annealing • A measure of the quality of the representation

  10. Comparison - PCA vs. Spectral Histogram

  11. Reconstruction vs. Sampling (panel labels: Reconstruction, Sampling)

  12. Spectral Histograms of several images

  13. Lighting correction • They use 21x21 image windows • A 3x3 minimal-brightness plane is computed, one minimum from each 7x7 block • A 21x21 correction plane is computed by bilinear interpolation • Histogram normalization is applied
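A hedged sketch of this correction on a single 21x21 window. The slide does not say whether the interpolated plane is subtracted or divided out, or which normalization is used; subtraction and a simple rank-based equalization are assumed below.

```python
import numpy as np

def correct_lighting(window):
    """window: 21x21 grey-level array."""
    assert window.shape == (21, 21)
    # 3x3 minimal-brightness plane: the minimum of each non-overlapping 7x7 block.
    mins = window.reshape(3, 7, 3, 7).min(axis=(1, 3)).astype(float)
    # Bilinearly interpolate the 3x3 plane up to a 21x21 correction plane.
    centers = np.array([3.0, 10.0, 17.0])              # 7x7 block centres
    fine = np.arange(21, dtype=float)
    cols = np.stack([np.interp(fine, centers, mins[:, j]) for j in range(3)], axis=1)
    plane = np.stack([np.interp(fine, centers, cols[i, :]) for i in range(21)], axis=0)
    corrected = window - plane                          # subtraction is an assumption
    # Histogram normalization: rank-based equalization back to [0, 255].
    ranks = corrected.ravel().argsort().argsort()
    return (ranks / (ranks.size - 1) * 255.0).reshape(window.shape)
```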

  14. Lighting correction

  15. Detection & Post Processing • Detection is done on a 3-level Gaussian pyramid, each scale downsampled by a factor of 1.1 • Detections within 3 pixels of each other are merged • A detection is marked as final if it is found at at least two consecutive levels • A detection counts as correct if at least half of the face lies within the detection window
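A hedged sketch of the merging rules, assuming detections arrive as (x, y, level) triples already mapped back to base-image coordinates; the greedy grouping strategy is an assumption.

```python
def merge_detections(detections, radius=3):
    """detections: list of (x, y, level) with coordinates in the base image."""
    groups = []                                   # each: {"pts": [...], "levels": set()}
    for x, y, level in detections:
        placed = False
        for g in groups:
            cx = sum(px for px, _ in g["pts"]) / len(g["pts"])
            cy = sum(py for _, py in g["pts"]) / len(g["pts"])
            if abs(x - cx) <= radius and abs(y - cy) <= radius:
                g["pts"].append((x, y))
                g["levels"].add(level)
                placed = True
                break
        if not placed:
            groups.append({"pts": [(x, y)], "levels": {level}})
    final = []
    for g in groups:
        levels = sorted(g["levels"])
        # Keep only detections seen on at least two consecutive pyramid levels.
        if any(b - a == 1 for a, b in zip(levels, levels[1:])):
            cx = sum(px for px, _ in g["pts"]) / len(g["pts"])
            cy = sum(py for _, py in g["pts"]) / len(g["pts"])
            final.append((cx, cy))
    return final
```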

  16. Adaptive Threshold

  17. Algorithm I – using a Neural Network • A neural network was used as the classifier • Training with back-propagation • Data processing: • 1500 face images & 8000 non-face images • Bootstrapping (Sung & Poggio) was used to limit the number of non-faces, leaving 800 non-faces • Uses 8 filters with 80 bins in each
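A hedged sketch of Sung & Poggio-style bootstrapping as referred to here: retrain, scan the non-face pool, and feed the classifier's false positives back in as hard negatives. The `make_classifier` factory is hypothetical (e.g. a back-propagation-trained MLP such as scikit-learn's MLPClassifier), and the round count and initial sample size are assumptions; note the slide's variant ends with 800 non-faces rather than an ever-growing set.

```python
import numpy as np

def bootstrap_nonfaces(face_feats, nonface_pool, make_classifier, rounds=3, seed=0):
    """face_feats / nonface_pool: arrays of spectral-histogram feature vectors."""
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(nonface_pool), size=min(200, len(nonface_pool)), replace=False)
    negatives = nonface_pool[idx]                 # small initial negative set
    for _ in range(rounds):
        X = np.vstack([face_feats, negatives])
        y = np.concatenate([np.ones(len(face_feats)), np.zeros(len(negatives))])
        clf = make_classifier()                   # hypothetical factory, e.g. an MLP
        clf.fit(X, y)
        # Non-faces the current classifier calls "face" become new hard negatives.
        false_pos = nonface_pool[clf.predict(nonface_pool) == 1]
        if len(false_pos):
            negatives = np.vstack([negatives, false_pos])
    return clf, negatives
```

For example, `make_classifier=lambda: MLPClassifier(hidden_layer_sizes=(30,))` from scikit-learn could stand in for the slide's back-propagation network.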

  18. Alg. I – Filter Selection • Candidate filters (81 in total): • 7 LoG filters • 4 gradient filters: Dx, Dy, Dxx, Dyy • 70 Gabor filters with: • T = 2, 4, 6, 8, 10, 12, 14 • orientation = 0, 40, 80, 120, 160, 200, 280, 320 (degrees) • Selected filters (8 out of 81): • 4 LoG filters • 3 gradient filters: Dx, Dxx & Dyy • 1 Gabor filter with T = 2

  19. Spectral Histograms of several images

  20. Algorithm I – Results on CMU test set I

  21. Algorithm I – Results on CMU test set II

  22. Algorithm II – using an SVM • SVM instead of a neural network • They use more filters: • 34 filters (instead of 7) • 359 bins (instead of 80) • 4500 randomly rotated face images & the 8000 non-face images from before

  23. Algorithm II (SVM) – Filters • The filters were hand-picked • Filters: • the intensity filter • 4 gradient filters: Dx, Dy, Dxx & Dyy • 5 LoG filters • 24 Gabor filters • Local & global constraints • Using histograms as features
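A hedged sketch of the classification stage, with scikit-learn's SVC standing in for the authors' SVM; the kernel and its parameters are assumptions, since the slide does not state them. Each row of the training matrix is one 359-bin spectral histogram.

```python
import numpy as np
from sklearn.svm import SVC

def train_face_svm(face_hists, nonface_hists):
    """Each row is one spectral-histogram feature vector (359 bins in Algorithm II)."""
    X = np.vstack([face_hists, nonface_hists])
    y = np.concatenate([np.ones(len(face_hists)), np.zeros(len(nonface_hists))])
    clf = SVC(kernel="rbf", C=10.0, gamma="scale")    # kernel/params are assumptions
    clf.fit(X, y)
    return clf

def window_score(clf, hist):
    # Signed distance to the separating hyperplane; thresholded downstream.
    return clf.decision_function(hist.reshape(1, -1))[0]
```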

  24. Spectral Histograms of several images

  25. Algorithm II (SVM) Results

  26. Old Results

  27. Algorithm III – using an SVM + rotation-invariant features • Same features as in Alg. II • The features enable 180 degrees of rotation invariance • Rotate the image 180 degrees and switch histograms, achieving 360 degrees of invariance
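A hedged sketch of the 360-degree trick: score the window's spectral histogram and the histogram of the window rotated by 180 degrees, and keep the better score. Rotating the raw window and recomputing is used here as a slower stand-in for the slide's histogram switch; `clf` is an SVM as in the Algorithm II sketch, and `spectral_histogram` is the helper from the earlier sketch.

```python
import numpy as np

def score_any_rotation(clf, window, spectral_histogram, filters):
    """Score a window in both orientations so that 180-degree-invariant
    features cover the full 360 degrees of rotation."""
    h_upright = spectral_histogram(window, filters)
    h_flipped = spectral_histogram(np.rot90(window, 2), filters)   # 180-degree turn
    scores = clf.decision_function(np.vstack([h_upright, h_flipped]))
    return scores.max()
```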

  28. Rotating 180 degrees

  29. Combining the two classifiers

  30. Results – Upright test sets

  31. Results – Rotated test sets

  32. Rotation Invariance Results

  33. More pictures

  34. Conclusions • A system which is rotation & translation invariant • Achieves very high accuracy on frontal faces and rotated frontal faces • The system is not real-time, but it is possible to implement the convolutions in hardware • Uses a limited amount of data • Accuracy as a function of efficiency

  35. Conclusions (2) • Faces are identifiable through local spatial dependencies, while the global structure can be modeled as histograms • The problem with spatial methods is the estimation of the parameters • The SH representation is independent of the choice of classifier • SVM outperforms neural networks • The problems and errors of this system are considerably different from those of other systems

  36. Conclusions (3) • Localization in space and scale is not as good as in other methods • Translation-invariant features enable a coarser sampling of the image • Use adaptive thresholding • Use several scales to improve performance • SH can be used for sampling of objects
