
Object Recognition with Features Inspired by Visual Cortex T. Serre, L. Wolf, T. Poggio


Presentation Transcript


  1. Object Recognition with Features Inspired by Visual Cortex, T. Serre, L. Wolf, T. Poggio. Presented by Andrew C. Gallagher, Jan. 25, 2007

  2. Overview • Motivation • Biological Model • The Features • Results • Conclusions

  3. Motivation • The human visual system (HVS) is proof that robust object recognition is possible. So, let's try to build a recognition system that is modeled after the HVS.

  4. Recognition Background • Template-based methods lack robustness to object transformations. • Histogram-based descriptors (e.g., SIFT) have so much flexibility that discriminative power can be degraded. • This paper introduces new features (inspired by the human visual system) that exhibit a better trade-off between invariance and selectivity.

  5. Biological Model

  6. Biological Model

  7. Biological Model

  8. The Feature Algorithm • The primary visual cortex (V1) contains simple and complex cells, which the model's S1 and C1 layers mimic. • S1: Apply a battery of Gabor filters. • C1: Take a maximum over scales.

  9. S1 Simple Cells • S1: Apply a battery of Gabor Filters, varying in size and orientation.
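
A minimal sketch of an S1-style Gabor filter battery in Python with NumPy/SciPy. The filter sizes, wavelengths, and aspect ratio below are illustrative placeholders (the paper tabulates its own set of 16 filter sizes), and the names gabor_kernel and s1_layer are made up for this sketch.

```python
import numpy as np
from scipy.signal import convolve2d

def gabor_kernel(size, theta, wavelength, sigma, gamma=0.3):
    """Real (cosine) Gabor kernel of a given size and orientation."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    # Rotate coordinates to the filter orientation.
    x0 = x * np.cos(theta) + y * np.sin(theta)
    y0 = -x * np.sin(theta) + y * np.cos(theta)
    g = (np.exp(-(x0 ** 2 + (gamma * y0) ** 2) / (2 * sigma ** 2))
         * np.cos(2 * np.pi * x0 / wavelength))
    return g - g.mean()  # zero mean, so flat image regions give no response

def s1_layer(image, sizes=(7, 9, 11, 13), orientations=4):
    """S1: one response map per (filter size, orientation) pair."""
    responses = {}
    for size in sizes:
        sigma, wavelength = 0.4 * size, 0.8 * size  # illustrative scaling only
        for k in range(orientations):
            theta = k * np.pi / orientations
            kern = gabor_kernel(size, theta, wavelength, sigma)
            responses[(size, k)] = np.abs(convolve2d(image, kern, mode="same"))
    return responses
```

Each (size, orientation) pair yields one response map; these maps are the input to the C1 pooling on the next slide.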

  10. C1 Complex Cells • C1: For each of the four orientations, take a maximum over the scales within each band and over a local spatial neighborhood, providing robustness to scale and translation.
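
Continuing the same sketch, a rough version of the C1 pooling: an element-wise max over the filter sizes that make up a band (computed per orientation), followed by a local spatial max and subsampling. The band groupings and the pooling grid size here are assumptions, not the paper's exact values.

```python
import numpy as np
from scipy.ndimage import maximum_filter

def c1_layer(s1, bands=((7, 9), (11, 13)), orientations=4, pool=8):
    """C1: max-pool S1 maps over scale (within each band) and local position."""
    c1 = {}
    for b, band_sizes in enumerate(bands):
        for k in range(orientations):
            # Max over the filter sizes belonging to this band, per orientation.
            over_scales = np.maximum.reduce([s1[(s, k)] for s in band_sizes])
            # Local spatial max followed by subsampling gives translation tolerance.
            pooled = maximum_filter(over_scales, size=pool)[::pool // 2, ::pool // 2]
            c1[(b, k)] = pooled
    return c1
```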

  11. S2 and C2 • A set of patches (n x n x 4 orientations) is created by random sampling, with n ∈ {4, 8, 12, 16}. Roughly 1000 patches are selected at random (unsupervised) from C1 maps of training images. • In S2, the stored patches are compared against the C1 layers. The resulting map has one plane per band for each patch, and no longer has separate orientation planes, since each S2 value summarizes similarity across the orientations. Essentially, S2 measures the Euclidean distance between a C1 image patch and a stored prototype patch P. • In C2, the maximum over all positions and scales of the S2 map is taken, yielding one value per prototype.
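
A simplified sketch of the S2/C2 computation under the same assumptions. The S2 response is modeled here as a Gaussian of the Euclidean distance the slide mentions, so that the C2 max picks out the best-matching position; the brute-force loops, the sigma value, and the function name are purely illustrative.

```python
import numpy as np

def s2_c2_features(c1, prototypes, orientations=4, sigma=1.0):
    """C2: one value per stored prototype patch.

    c1         -- dict {(band, orientation): 2-D map} from the C1 stage
    prototypes -- list of arrays of shape (n, n, orientations), sampled
                  beforehand from C1 maps of training images
    """
    n_bands = len({b for (b, _) in c1})
    c2 = np.full(len(prototypes), -np.inf)
    for p, proto in enumerate(prototypes):
        n = proto.shape[0]
        for b in range(n_bands):
            # Stack the orientation planes of this band: shape (h, w, orientations).
            stack = np.stack([c1[(b, k)] for k in range(orientations)], axis=-1)
            h, w, _ = stack.shape
            for i in range(h - n + 1):
                for j in range(w - n + 1):
                    # S2: Gaussian of the Euclidean distance to the prototype.
                    d2 = np.sum((stack[i:i + n, j:j + n, :] - proto) ** 2)
                    s2 = np.exp(-d2 / (2 * sigma ** 2))
                    # C2: keep the best S2 response over positions and bands.
                    c2[p] = max(c2[p], s2)
    return c2
```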

  12. The Features

  13. The Features

  14. Another Feature Summary

  15. Experimental Results • Tested on images that either contain a single instance of the target object or do not contain it at all. • The system must decide whether the object is present. • Datasets: MIT-CBCL, Caltech.
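
The slides do not show the classifier itself; the paper reports results with standard classifiers (such as a linear SVM or boosting) trained on the C2 feature vectors. A minimal sketch of the presence/absence decision using scikit-learn's LinearSVC; the helper name and the regularization constant are assumptions.

```python
import numpy as np
from sklearn.svm import LinearSVC

def train_presence_classifier(pos_c2, neg_c2):
    """Fit a linear SVM on C2 feature vectors (one row per image).

    pos_c2 / neg_c2 -- 2-D arrays of C2 vectors for images that do / do not
    contain the target object.
    """
    X = np.vstack([pos_c2, neg_c2])
    y = np.concatenate([np.ones(len(pos_c2)), np.zeros(len(neg_c2))])
    return LinearSVC(C=1.0).fit(X, y)

# The returned classifier's predict() answers present / not-present per test image.
```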

  16. Experimental Results • Results on 50 positive and 50 negative examples.

  17. More Experimental Results

  18. Learned Features

  19. Conclusions • Biologically motivated features are extracted and then used for classification. • Based on a feedforward model of object recognition in the cortex. • Performance on the tested datasets compares favorably with the benchmark systems.
