
Object Recognizing

This presentation discusses the use of fragments and features in object recognition and classification. It covers selecting informative object components, fragment-based classification, the variability of detected airplanes, classifiers such as SVM, and the use of HoG descriptors and part models.


Presentation Transcript


  1. Object Recognizing

  2. Object Classes

  3. Individual Recognition

  4. Object parts (labels on a car image): window, mirror, door knob, headlight, back wheel, bumper, front wheel

  5. Class Non-class

  6. Class Non-class

  7. Unsupervised Training Data

  8. Features and Classifiers: same features with different classifiers; same classifier with different features

  9. Generic Features: simple (wavelets) and complex (Geons)

  10. Class-specific Features: Common Building Blocks

  11. Mutual information: I(C;F) = H(C) – H(C|F). (Figure: the class entropy H(C), compared with H(C) when F=0 and H(C) when F=1; their weighted average is H(C|F).)

  12. Mutual information I(C;F). Example: Class: 1 1 0 1 0 1 0 0; Feature: 1 0 0 1 1 1 0 0. I(C;F) = H(C) – H(C|F). A worked computation follows below.
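The following is a minimal sketch (Python with NumPy; the entropy helper is our own addition, not from the slides) that evaluates I(C;F) for exactly the binary example above.

```python
import numpy as np

C = np.array([1, 1, 0, 1, 0, 1, 0, 0])  # class labels from the slide
F = np.array([1, 0, 0, 1, 1, 1, 0, 0])  # feature present/absent, from the slide

def entropy(x):
    """Shannon entropy (in bits) of a discrete label vector."""
    _, counts = np.unique(x, return_counts=True)
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p))

# H(C|F): class entropy averaged over the feature values,
# weighted by how often each value occurs.
h_cond = sum((F == v).mean() * entropy(C[F == v]) for v in np.unique(F))

print(f"I(C;F) = {entropy(C) - h_cond:.3f} bits")  # about 0.189 bits
```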

  13. Optimal classification features • Theoretically: maximizing delivered information minimizes classification error • In practice: informative object components can be identified in training images

  14. Selecting Fragments

  15. Adding a new fragment (max-min selection). The added information of a candidate fragment Fi relative to an already selected fragment Fk is ΔMI(Fi, Fk) = MI(Fi, Fk; class) – MI(Fk; class). Select: maxi mink ΔMI(Fi, Fk) (min over the existing fragments, max over the entire candidate pool); a sketch of the loop follows below.
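Below is a minimal sketch of one max-min step, assuming fragments are represented by binary detection vectors over the training images; the helper names (entropy, mi, pair_mi, select_next) are hypothetical, not from the slides.

```python
import numpy as np

def entropy(x):
    """Shannon entropy (bits) of a discrete label vector."""
    _, counts = np.unique(x, return_counts=True)
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p))

def mi(f, c):
    """I(F;C) = H(C) - H(C|F) for discrete vectors f, c."""
    h_cond = sum((f == v).mean() * entropy(c[f == v]) for v in np.unique(f))
    return entropy(c) - h_cond

def pair_mi(f1, f2, c):
    """MI of the joint feature (F1, F2) with the class."""
    joint = 2 * f1 + f2  # encode the binary pair as one variable in {0,...,3}
    return mi(joint, c)

def select_next(candidates, selected, labels):
    """One max-min step: the gain of candidate Fi over selected Fk is
    delta_MI = MI(Fi, Fk; class) - MI(Fk; class); pick the candidate whose
    worst-case gain is largest. Assumes `selected` is non-empty (the first
    fragment would be chosen by plain MI)."""
    def worst_gain(fi):
        return min(pair_mi(fi, fk, labels) - mi(fk, labels) for fk in selected)
    return max(candidates, key=worst_gain)
```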

  16. Horse-class features and car-class features: pictorial features learned from examples

  17. Star model. Detected fragments 'vote' for the center location; find the location with the maximal vote. With variations, this is a popular state-of-the-art scheme; a voting sketch follows below.
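A minimal sketch of the voting step, assuming each detected fragment carries a position, a learned offset to the object center, and a detection score (this data format is our assumption for illustration):

```python
import numpy as np

def vote_for_center(detections, image_shape):
    """detections: list of ((x, y), (dx, dy), score) tuples, where (dx, dy)
    is the fragment's learned offset from its position to the object center.
    Returns the center location with the maximal accumulated vote."""
    acc = np.zeros(image_shape)                  # voting accumulator
    for (x, y), (dx, dy), score in detections:
        cx, cy = int(x + dx), int(y + dy)        # candidate center
        if 0 <= cx < image_shape[0] and 0 <= cy < image_shape[1]:
            acc[cx, cy] += score                 # weighted vote
    return np.unravel_index(np.argmax(acc), acc.shape)
```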

  18. Fragment-based Classification: Ullman & Sali 1999; Agarwal & Roth 2002; Fergus, Perona & Zisserman 2003

  19. Variability of detected airplanes

  20. Recognition Features in the Brain

  21. Class-fragments and Activation. Malach et al., 2008

  22. EEG

  23. ERP responses for fragments grouped by mutual-information level (MI 1 through MI 5). Harel, Ullman, Epshtein & Bentin, Vision Research 2007

  24. Bag of words

  25. Bag of visual words: a large collection of image patches

  26. Generate a dictionary using K-means clustering

  27. Each class has its own histogram of visual words. Limited or no geometry. Simple and popular, but no longer state-of-the-art. A sketch of the pipeline follows below.
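The following is a minimal sketch of the dictionary and histogram steps from the last three slides, using scikit-learn's KMeans; `patches` is assumed to be an (n_patches, patch_dim) array of descriptors collected from training images.

```python
import numpy as np
from sklearn.cluster import KMeans

def build_dictionary(patches, n_words=500):
    """Cluster patch descriptors; the cluster centers are the visual 'words'."""
    return KMeans(n_clusters=n_words, n_init=10).fit(patches)

def word_histogram(image_patches, dictionary):
    """Represent one image as a normalized histogram of its visual words."""
    words = dictionary.predict(image_patches)
    hist = np.bincount(words, minlength=dictionary.n_clusters)
    return hist / hist.sum()
```

Per-class histograms are then obtained by averaging (or training a classifier on) the word histograms of that class's training images; note that this representation discards the geometry of where the patches occur.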

  28. Classifiers

  29. SVM – linear separation in feature space

  30. Optimal separation: SVM. Find a separating plane such that the closest points are as far away as possible. Advantages of SVM: optimal separation; extensions to the non-separable case; kernel SVM.

  31. The margin. Separating line: w ∙ x + b = 0. Far lines: w ∙ x + b = ±1. Their distance satisfies w ∙ Δx = +1, so the separation is |Δx| = 1/|w| and the margin is 2/|w|.

  32. Max-margin classification. The examples are vectors xi; the labels yi are +1 for class, –1 for non-class. Maximize the margin 2/|w| subject to yi(w ∙ xi + b) ≥ 1 for all i (equivalently, and as usually used, minimize ½|w|² under the same constraints). How do we solve such a constrained optimization?

  33. Using Lagrange multipliers: minimize LP = ½|w|² – Σi αi [yi(w ∙ xi + b) – 1], with αi ≥ 0 the Lagrange multipliers.

  34. Minimizing the Lagrangian. Minimize LP: set the derivatives w.r.t. w and b to 0, giving w = Σi αi yi xi and Σi αi yi = 0; the derivative w.r.t. αi recovers the constraints. Dual formulation: maximize the Lagrangian w.r.t. the αi under the above two conditions.

  35. Solved in the 'dual' formulation. Substituting w = Σi αi yi xi into LP (w drops out of the expression) gives LD = Σi αi – ½ Σi,j αi αj yi yj ⟨xi ∙ xj⟩, to be maximized w.r.t. the αi under the conditions αi ≥ 0 and Σi αi yi = 0.

  36. Dual formulation. A mathematically equivalent formulation: maximize the Lagrangian with respect to the αi. After manipulations, a concise matrix form: maximize Σi αi – ½ αᵀHα subject to αi ≥ 0 and Σi αi yi = 0, with H as defined on the next slide.

  37. SVM in simple matrix form. We first find the α; from it we can find w, b, and the support vectors. The matrix H is a simple 'data matrix': Hij = yi yj ⟨xi ∙ xj⟩. Final classification: w ∙ x + b = Σi αi yi ⟨xi ∙ x⟩ + b, because w = Σi αi yi xi. Only the ⟨xi ∙ x⟩ terms with support vectors (αi > 0) enter the sum. A sketch follows below.
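A minimal sketch, assuming scikit-learn's SVC (which solves this same dual problem) and made-up 2-D toy data, showing how w, b, and the support vectors are recovered from the α:

```python
import numpy as np
from sklearn.svm import SVC

X = np.array([[2.0, 2.0], [2.5, 3.0], [3.0, 2.5],    # class +1
              [0.0, 0.0], [0.5, 1.0], [1.0, 0.0]])   # class -1
y = np.array([1, 1, 1, -1, -1, -1])

clf = SVC(kernel="linear", C=1e6).fit(X, y)   # large C approximates a hard margin

# dual_coef_ holds alpha_i * y_i for the support vectors only,
# so w = sum_i alpha_i y_i x_i, exactly as on the slide.
w = clf.dual_coef_ @ clf.support_vectors_
b = clf.intercept_

print("support vectors:", clf.support_vectors_)
print("w =", w.ravel(), "b =", b)
print("margin = 2/|w| =", 2 / np.linalg.norm(w))
```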

  38. DPM. Felzenszwalb, McAllester & Ramanan, CVPR 2008: A Discriminatively Trained, Multiscale, Deformable Part Model. There are many implementation details; we describe only the main points.

  39. HoG descriptor

  40. HoG Descriptor. Dalal, N. & Triggs, B., Histograms of Oriented Gradients for Human Detection (CVPR 2005)

  41. Using patches with HoG descriptors and classification by SVM. Person model: HoG.

  42. Object model using HoG. A bicycle and its 'root filter'; the root filter is a patch of HoG descriptors. The image is partitioned into 8x8-pixel cells, and in each cell we compute a histogram of gradient orientations. A sketch follows below.
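A minimal sketch of computing such a descriptor with scikit-image's hog function; the image path and the color input are assumptions for illustration.

```python
from skimage import io, color
from skimage.feature import hog

image = color.rgb2gray(io.imread("bicycle.png"))   # hypothetical color input

descriptor = hog(
    image,
    orientations=9,           # 9 orientation bins per histogram
    pixels_per_cell=(8, 8),   # 8x8-pixel cells, as on the slide
    cells_per_block=(2, 2),   # blocks used for local contrast normalization
)
print("HoG length:", descriptor.shape[0])
```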

  43. Dealing with scale: multi-scale analysis. The filter is searched over a pyramid of HoG descriptors to handle the unknown scale; see the sketch below.
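A minimal sketch of building such a pyramid, assuming scikit-image and a grayscale input; the level count and scale step are illustrative choices, not values from the slides.

```python
from skimage.transform import rescale
from skimage.feature import hog

def hog_pyramid(image, n_levels=5, scale_step=0.8):
    """Return HoG feature maps for successively smaller copies of the image,
    so a fixed-size root filter can match objects of unknown scale."""
    levels = []
    for i in range(n_levels):
        scaled = rescale(image, scale_step ** i)          # shrink the image
        levels.append(hog(scaled, pixels_per_cell=(8, 8),
                          cells_per_block=(2, 2),
                          feature_vector=False))          # keep the spatial layout
    return levels
```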

  44. Adding Parts. A part is Pi = (Fi, vi, si, ai, bi): Fi is the filter for the i-th part; vi is the center of a box of possible positions for part i relative to the root position; si is the size of this box; ai and bi are two-dimensional vectors specifying the coefficients of a quadratic function that scores each possible placement of the i-th part. That is, ai and bi are two numbers each, and the penalty for a deviation (Δx, Δy) from the expected location is a1Δx + a2Δy + b1Δx² + b2Δy². A numeric sketch follows below.
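A minimal sketch of this quadratic placement penalty; the coefficient values are made up for illustration.

```python
def placement_penalty(dx, dy, a, b):
    """Penalty for placing a part at deviation (dx, dy) from its expected
    location: a1*dx + a2*dy + b1*dx**2 + b2*dy**2, with per-part
    coefficients a = (a1, a2) and b = (b1, b2)."""
    return a[0] * dx + a[1] * dy + b[0] * dx**2 + b[1] * dy**2

# Example: a part displaced by (2, -1) cells from its expected position.
print(placement_penalty(2, -1, a=(0.1, 0.1), b=(0.5, 0.5)))
```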

  45. Bicycle model: root, parts, spatial map. Person model.
