Computer Vision


Presentation Transcript


  1. Computer Vision, Spring 2012, 15-385 / 15-685. Instructor: S. Narasimhan, WH 5409. T-R 10:30am – 11:50am. Lecture #23

  2. Classification and SVM. Credits: Guru Krishnan and Shree Nayar; Eric P. Xing

  3. Classification (Supervised Learning). Data: X = {X1, X2, …, Xn}. Labels: Y = {y1, y2, …, yn}, with yi ∈ {-1, 1}. Classifier: a function f: X → Y. Test: for a new sample Xt, is it -1 or 1 (face or not)?
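
A minimal sketch of this setup in Python (the numbers and class names are purely illustrative): training data X, labels y in {-1, +1}, and a classifier f mapping a feature vector to a label. The nearest-mean rule below is just a placeholder for "some classifier f", not the method used in the lecture.

```python
import numpy as np

# Supervised classification setup: data X = {X1, ..., Xn},
# labels y_i in {-1, +1}, and a classifier f: X -> Y (toy numbers only).
X = np.array([[1.0, 2.0], [1.5, 1.8], [2.0, 2.2],    # e.g. "face" samples, label +1
              [6.0, 5.0], [5.5, 6.1], [6.2, 5.8]])   # e.g. "non-face" samples, label -1
y = np.array([1, 1, 1, -1, -1, -1])

def f(x_test):
    """Placeholder classifier: pick the class whose mean is closer."""
    mu_pos = X[y == 1].mean(axis=0)
    mu_neg = X[y == -1].mean(axis=0)
    d_pos = np.linalg.norm(x_test - mu_pos)
    d_neg = np.linalg.norm(x_test - mu_neg)
    return 1 if d_pos < d_neg else -1

print(f(np.array([1.7, 2.0])))   # -> 1
print(f(np.array([5.8, 5.5])))   # -> -1
```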

  4. Classification and Computer Vision: handwritten characters, faces, scene classification, object recognition.

  5. Classification and Computer Vision: pedestrian detection (Yes / No).

  6. Procedure of Classification. (1) Gather positive / negative training data (images labeled +1 or -1). (2) Feature extraction: map each image to a feature vector in Rn (very challenging in reality…).

  7. Feature Extraction: image → Rn. Typical features: texture filters, SIFT, color histograms, Haar filters, raw pixels. Binary codes are also available!
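
As one concrete example of feature extraction (my own illustration, not taken from the slides), the sketch below maps an RGB image to a normalized color-histogram vector in R^(3·bins):

```python
import numpy as np

def color_histogram(image, bins=8):
    """Sketch of a color-histogram feature (assumes an HxWx3 uint8 image).

    Each channel is binned separately; the counts are concatenated and
    normalized, giving a feature vector in R^(3*bins).
    """
    feats = []
    for c in range(3):                       # R, G, B channels
        hist, _ = np.histogram(image[..., c], bins=bins, range=(0, 256))
        feats.append(hist)
    f = np.concatenate(feats).astype(float)
    return f / f.sum()                       # normalize so image size does not matter

# Example: a random 32x32 "image" mapped to a 24-D feature vector.
img = np.random.randint(0, 256, size=(32, 32, 3), dtype=np.uint8)
print(color_histogram(img).shape)            # (24,)
```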

  8. Feature Space. Let the n-dimensional feature vector F be a point in an n-D space Rn (the feature space), with axes f1, f2, …, fn. The training data of faces and of non-faces form clusters of points in this space. (3) Learn a classifier!

  9. Nearest Neighbor Classifier. The nearest training samples decide the result of the classifier. A test image is mapped into the feature space together with the face and non-face training data.

  10. Nearest Neighbor Classifier. The nearest training samples decide the result of the classifier: here the nearest sample belongs to the face training data, so the test image is classified as Face.

  11. Nearest Neighbor Classifier. The nearest training samples decide the result of the classifier: here the nearest sample belongs to the non-face training data, so the test image is classified as Not Face.

  12. Nearest Neighbor Classifier. The larger the training set, the more robust the NN classifier; a sparse training set can lead to false positives.

  13. Nearest Neighbor Classifier. The larger the training set, the slower the NN classifier.
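
A minimal 1-NN sketch, assuming Euclidean distance in feature space; it also makes the speed issue visible, since every query scans the whole training set:

```python
import numpy as np

def nearest_neighbor_classify(X_train, y_train, x_test):
    """1-NN: the label of the closest training sample decides the result.

    X_train: (n, d) training feature vectors; y_train: (n,) labels in {-1, +1}.
    Each query computes all n distances, so the classifier slows down
    as the training set grows.
    """
    dists = np.linalg.norm(X_train - x_test, axis=1)   # Euclidean distances
    return y_train[np.argmin(dists)]                   # label of the nearest sample

# Toy example: +1 = "face" features, -1 = "non-face" features (illustrative only).
X_train = np.array([[1.0, 1.0], [1.2, 0.9], [5.0, 5.0], [5.5, 4.8]])
y_train = np.array([1, 1, -1, -1])
print(nearest_neighbor_classify(X_train, y_train, np.array([1.1, 1.1])))   # -> 1
```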

  14. Decision Boundary. A simple decision boundary separating the face and non-face classes will suffice.

  15. Decision Boundary. Find a decision boundary in feature space: the hyperplane W^T F + b = 0. Faces lie on the side where W^T F + b > 0, non-faces on the side where W^T F + b < 0.
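
A tiny sketch of applying such a boundary: the sign of W^T F + b decides the class. The weights and bias below are made-up numbers, not a trained model:

```python
import numpy as np

def linear_decision(F, w, b):
    """Evaluate the decision boundary w^T F + b for a feature vector F.

    Returns +1 (face side, w^T F + b > 0) or -1 (non-face side).
    """
    return 1 if w @ F + b > 0 else -1

w = np.array([1.0, -0.5])    # hypothetical weights
b = -0.25                    # hypothetical bias
print(linear_decision(np.array([2.0, 1.0]), w, b))   # -> +1 (w^T F + b = 1.25)
print(linear_decision(np.array([0.0, 2.0]), w, b))   # -> -1 (w^T F + b = -1.25)
```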

  16. Decision Boundary. How to find the optimal decision boundary between the face class and the non-face class?

  17. Evaluating a Decision Boundary. Margin (or safe zone): the width by which the boundary could be increased before hitting a data point.

  18. Evaluating a Decision Boundary. Two candidate boundaries have different margins (Margin I and Margin II) and can classify the same test point differently (Decision I: Face, Decision II: Non-Face). Choose the decision boundary with the maximum margin!

  19. Support Vector Machine (SVM). A classifier optimized to maximize the margin. Support vectors: the closest data samples to the boundary. The decision boundary and the margin depend only on the support vectors.
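
For illustration, here is a hard-margin-style linear SVM fit with scikit-learn (the library is my choice, not part of the lecture); note that only the support vectors define the resulting boundary:

```python
import numpy as np
from sklearn import svm

# Toy 2-D data: +1 class near the origin, -1 class farther away.
X = np.array([[1.0, 1.0], [1.5, 0.8], [1.2, 1.3],     # +1 class
              [4.0, 4.5], [4.5, 4.0], [3.8, 4.2]])    # -1 class
y = np.array([1, 1, 1, -1, -1, -1])

clf = svm.SVC(kernel="linear", C=1e6)   # large C approximates a hard margin
clf.fit(X, y)

print(clf.support_vectors_)             # only these samples define the boundary
print(clf.predict([[2.0, 2.0]]))        # classify a new point -> [1]
```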

  20. Finding Decision Boundary. The distance from a point x to the plane w^T x + b = 0 is d = |w^T x + b| / ||w|| (Hessian normal form). The margin planes are w^T x + b = c and w^T x + b = -c; for a support vector x_s on a margin plane, w^T x_s + b = c, so ρ = c / ||w|| and the full margin is 2c / ||w||. Find the boundary by maximizing the margin: max_w 2c / ||w||, subject to w^T x_i + b ≥ c for all x_i with y_i = 1 and w^T x_i + b ≤ -c for all x_i with y_i = -1.

  21. Finding Decision Boundary. Maximizing the margin 2c / ||w|| is the same as minimizing ||w||. Setting c = 1 and folding the two constraints into one using the labels y_i gives the learning problem: min_w ||w|| subject to y_i (w^T x_i + b) ≥ 1 for all x_i. Learning the classifier = compute w (and b) by solving this QP.
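
A minimal sketch of this QP on toy data, solved with a generic constrained optimizer (SciPy's SLSQP) rather than a dedicated QP solver, which is what one would use in practice:

```python
import numpy as np
from scipy.optimize import minimize

# Hard-margin SVM primal: min 1/2 ||w||^2  s.t.  y_i (w^T x_i + b) >= 1.
X = np.array([[1.0, 1.0], [1.5, 0.8], [4.0, 4.5], [4.5, 4.0]])
y = np.array([1, 1, -1, -1])

def objective(p):                       # p = [w1, w2, b]
    w = p[:2]
    return 0.5 * w @ w

constraints = [{"type": "ineq",
                "fun": lambda p, i=i: y[i] * (p[:2] @ X[i] + p[2]) - 1.0}
               for i in range(len(y))]

# Start from a feasible (though far from optimal) guess.
res = minimize(objective, x0=np.array([-1.0, -1.0, 5.0]),
               constraints=constraints, method="SLSQP")
w, b = res.x[:2], res.x[2]
print(w, b)                             # margin width is 2 / ||w||
print(np.sign(X @ w + b))               # should reproduce the training labels
```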

  22. Kernel Trick. How about these points in 1-D, with one class clustered around 0 and the other class on both sides of it? They are not linearly separable: no single threshold works. Think about a quadratic mapping Φ(x) = x^2: after the mapping they are linearly separable. Yes!
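
A quick numeric check of the idea (the toy 1-D configuration below, with the inner class around 0, is an assumption matching the figure): after Φ(x) = x^2, a single threshold separates the classes.

```python
import numpy as np

# 1-D points that no single threshold can separate: the +1 class sits near 0,
# the -1 class sits on both sides of it.
x = np.array([-3.0, -2.5, -0.5, 0.0, 0.5, 2.5, 3.0])
y = np.array([-1, -1, 1, 1, 1, -1, -1])

# Quadratic mapping phi(x) = x^2: in the mapped space a simple threshold works.
phi = x ** 2
threshold = 2.0                          # any value strictly between 0.25 and 6.25 works
pred = np.where(phi < threshold, 1, -1)
print(np.array_equal(pred, y))           # True: linearly separable after the mapping
```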

  23. Kernel Trick. A mapping Φ(x) carries the data into a space where it becomes linearly separable, but such a mapping can be mathematically hard to describe explicitly!
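
This is where the kernel trick helps: a kernel SVM never builds Φ(x) explicitly; it only evaluates inner products in the mapped space through a kernel function K(xi, xj). A small sketch using scikit-learn's polynomial-kernel SVC on the 1-D data from the previous slide (the library and the specific kernel choice are illustrative assumptions, not part of the lecture):

```python
import numpy as np
from sklearn import svm

# The non-linearly-separable 1-D data from the previous slide.
X = np.array([[-3.0], [-2.5], [-0.5], [0.0], [0.5], [2.5], [3.0]])
y = np.array([-1, -1, 1, 1, 1, -1, -1])

# A degree-2 polynomial kernel implicitly includes the quadratic feature x^2,
# so the classifier separates the data without ever forming phi(x) explicitly.
clf = svm.SVC(kernel="poly", degree=2, coef0=1.0, C=1e6)
clf.fit(X, y)
print(clf.predict(X))                    # should recover the training labels
```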
