
Convolutional Restricted Boltzmann Machines for Feature Learning


Presentation Transcript


  1. Convolutional Restricted Boltzmann Machines for Feature Learning Mohammad Norouzi Advisor: Dr. Greg Mori CS @ Simon Fraser University 27 Nov 2009

  2. CRBMs for Feature Learning Mohammad Norouzi Advisor: Dr. Greg Mori CS @ Simon Fraser University 27 Nov 2009

  3. Problems Human detection Handwritten digit classification

  4. Sliding Window Approach

  5. Sliding Window Approach (Cont’d) Decision Boundary [INRIA Person Dataset]

  6. Success or failure of an object recognition algorithm hinges on the features used [Diagram: Input → Feature representation (our focus; learned) → Classifier → Label: Human / Background, or 0 / 1 / 2 / 3 / …]

  7. Local Feature Detector Hierarchies [Diagram: going up the hierarchy, features become larger, more complicated, and less frequent]

  8. Generative & Layerwise Learning [Diagram: layers of unknown hidden units (?) learned one layer at a time with a generative CRBM]

  9. Visual Features: Filtering [Diagram: a filter kernel (feature) is convolved with the image to produce a filter response]
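As a rough illustration of the filtering operation on this slide, the sketch below computes a filter response by convolving a gray-scale image with a single kernel; the image size, kernel size, and helper name are placeholders, not taken from the slides:

```python
import numpy as np
from scipy.signal import convolve2d

def filter_response(image, kernel):
    """Convolve a gray-scale image with one filter kernel; 'valid' crops the borders."""
    return convolve2d(image, kernel, mode="valid")

image = np.random.rand(64, 64)               # stand-in gray-scale window
kernel = np.random.randn(7, 7)               # stand-in 7x7 feature (filter kernel)
response = filter_response(image, kernel)    # shape (58, 58)
```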

  10. Our approach to feature learning is generative: binary hidden variables (the CRBM model) [diagram]

  11. Related Work

  12. Related Work • Convolutional Neural Networks (CNN) [LeCun et al. '98; Ranzato et al., CVPR'07] (discriminative): filtering layers are bundled with a classifier, and all layers are learned together using error backpropagation; does not perform well on natural images • Biologically plausible models [Serre et al., PAMI'07; Mutch and Lowe, CVPR'06] (no learning): a hand-crafted first layer and randomly selected prototypes for the second layer

  13. Related Work (cont'd) • Deep Belief Nets [Hinton et al., NC'2006] (generative & unsupervised) • A two-layer partially observed MRF, called the RBM, is the building block • Learning is performed unsupervised, layer-by-layer from the bottom layer upwards • Our contributions: we incorporate spatial locality into RBMs and adapt the learning algorithm accordingly • We add more complicated components such as pooling and sparsity into deep belief nets

  14. Why Generative & Unsupervised • Discriminative learning of deep and large neural networks has not been successful • Requires large training sets • Easily overfits for large models • First-layer gradients are relatively small • Alternative hybrid approach • Learn a large set of first-layer features generatively • Switch to a discriminative model to select the discriminative features from those that were learned • Discriminative fine-tuning is helpful

  15. Details

  16. CRBM • The image is the visible layer, and the hidden layer corresponds to filter responses • An energy-based probabilistic model whose energy involves the dot product of vectorized matrices
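The energy function itself appeared as an equation image on the slide and is not recoverable from the transcript; a common form for a CRBM with K filters, binary hidden maps h^k, and Gaussian visible units (an assumption following the convolutional RBM literature) is:

```latex
E(\mathbf{v}, \mathbf{h}) = \frac{1}{2}\sum_{i,j} v_{ij}^{2}
  - \sum_{k=1}^{K} \mathbf{h}^{k} \bullet (\tilde{W}^{k} * \mathbf{v})
  - \sum_{k=1}^{K} b_{k} \sum_{i,j} h^{k}_{ij}
  - c \sum_{i,j} v_{ij}
```

Here * denotes 2-D convolution, \tilde{W}^{k} is the k-th filter flipped in both dimensions, and \bullet is the dot product of vectorized matrices mentioned on the slide.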

  17. Training CRBMs • Maximum likelihood learning of CRBMs is difficult • Contrastive Divergence (CD) learning is applicable • For CD learning we need to compute the conditionals p(h | v) and p(v | h) [diagram: Gibbs chain from the data to a sample]
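A minimal sketch of one CD-1 update for a single-filter CRBM with binary hidden units and real-valued visibles is given below; the parameter names, learning rate, and update form are illustrative assumptions rather than the exact procedure from the slides:

```python
import numpy as np
from scipy.signal import correlate2d, convolve2d

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cd1_step(v0, W, b_h, b_v, lr=0.01, rng=np.random):
    # Positive phase: hidden probabilities given the data (filter response + bias).
    p_h0 = sigmoid(correlate2d(v0, W, mode="valid") + b_h)
    h0 = (rng.rand(*p_h0.shape) < p_h0).astype(float)     # sample hidden states
    # Negative phase: reconstruct the visible layer, then re-infer the hiddens.
    v1 = convolve2d(h0, W, mode="full") + b_v
    p_h1 = sigmoid(correlate2d(v1, W, mode="valid") + b_h)
    # CD-1 gradient: data statistics minus reconstruction statistics.
    dW = correlate2d(v0, p_h0, mode="valid") - correlate2d(v1, p_h1, mode="valid")
    return (W + lr * dW,
            b_h + lr * (p_h0.mean() - p_h1.mean()),
            b_v + lr * (v0.mean() - v1.mean()))
```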

  18. CRBM (Backward) • Nearby hidden variables cooperate in reconstruction • The conditional probabilities take the form shown in the equations below
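The equations were shown as images on the slide; under the same assumptions as above (binary hidden units, Gaussian visibles with unit variance), the standard CRBM conditionals are:

```latex
p(h^{k}_{ij} = 1 \mid \mathbf{v}) = \sigma\big((\tilde{W}^{k} * \mathbf{v})_{ij} + b_{k}\big)
\qquad
p(v_{ij} \mid \mathbf{h}) = \mathcal{N}\Big(\textstyle\sum_{k} (W^{k} * \mathbf{h}^{k})_{ij} + c,\ 1\Big)
```

The reconstruction of each visible unit sums over all filters and all overlapping hidden units, which is the sense in which nearby hidden variables cooperate in reconstruction.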

  19. Learning the Hierarchy • The structure is trained bottom-up and layerwise • The CRBM model is used to train the filtering layers • Filtering layers are followed by down-sampling (pooling) layers that apply a non-linearity and reduce the dimensionality [Diagram: Filtering (CRBM) → Pooling → Filtering (CRBM) → Pooling → Classifier]
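The down-sampling step can be illustrated with a simple non-overlapping max-pooling routine; the pool size and array shapes below are placeholders, not the authors' settings:

```python
import numpy as np

def max_pool(responses, pool=2):
    """Reduce spatial resolution by taking the max over each pool x pool block."""
    h, w = responses.shape
    h, w = h - h % pool, w - w % pool        # crop to a multiple of the pool size
    blocks = responses[:h, :w].reshape(h // pool, pool, w // pool, pool)
    return blocks.max(axis=(1, 3))

responses = np.random.rand(58, 58)   # e.g. the response map of one 7x7 filter
pooled = max_pool(responses)         # shape (29, 29)
```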

  20. [Figure: an input image, the 1st-layer filters and their responses, and the 2nd-layer filters and their responses]

  21. Experiments

  22. Evaluation • MNIST digit dataset: training set of 60,000 images of digits of size 28×28; test set of 10,000 images • INRIA person dataset: training set of 2,416 person windows of size 128×64 pixels and 4.5×10^6 negative windows; test set of 1,132 positive and 2×10^6 negative windows

  23. First-layer filters • INRIA positive set (gray-scale images): 15 filters of size 7×7 • MNIST unlabeled digits: 15 filters of size 5×5

  24. Second-Layer Features (MNIST) • The filters are hard to visualize directly • Instead we show patches that respond strongly to each filter

  25. Second Layer Features (INRIA)

  26. MNIST Results • MNIST error rate when the model is trained on the full training set

  27.–32. Results [Figures: the top five false-positive detections, ranked 1st through 5th]

  33. INRIA Results • Adding our large-scale features significantly improves the performance of the HOG baseline

  34. Conclusion • We extended the RBM model to the Convolutional RBM, which is useful for domains with spatial locality • We used CRBMs to train local hierarchical feature detectors generatively, one layer at a time • This method obtained results comparable to the state of the art in digit classification and human detection

  35. Thank You 

  36. Hierarchical Feature Detector

  37. Contrastive Divergence Learning

  38. Training CRBMs (Cont'd) • The problem of reconstructing the border region becomes severe when the number of Gibbs sampling steps is greater than 1 • Partition the visible units into middle and border regions • Instead of maximizing the likelihood, we (approximately) maximize the conditional likelihood of the middle region given the border

  39. Enforcing Feature Sparsity • The CRBM's representation is K (number of filters) times overcomplete • After a few CD learning iterations, V is perfectly reconstructed • Enforce sparsity to tackle this problem • Hidden bias terms were frozen at large negative values • Having a single non-sparse hidden unit improves the learned features • Might be related to the ergodicity condition
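A rough sketch of the bias-freezing idea described above is given below; the specific bias value and the choice of which unit stays non-sparse are assumed placeholders, not the authors' settings:

```python
import numpy as np

K = 15                        # number of filters / hidden groups
b_h = np.full(K, -4.0)        # frozen, strongly negative hidden biases (sparse firing)
b_h[0] = 0.0                  # a single non-sparse hidden unit, as suggested on the slide
# During CD learning, the gradient update for b_h is simply skipped.
```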

  40. Probabilistic Meaning of Max [Diagram: max-pooling over blocks of hidden units]

  41. The Classifier Layer • We used an SVM as our final classifier • RBF kernel for MNIST • Linear kernel for INRIA • For INRIA we combined our 4th-layer outputs with HOG features • We experimentally observed that relaxing the sparsity of the CRBM's hidden units yields better results • This lets the discriminative model set the thresholds itself
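A hedged sketch of this classifier stage with scikit-learn is shown below; the feature arrays and parameter settings are placeholders, not the authors' configuration:

```python
import numpy as np
from sklearn.svm import SVC, LinearSVC

X_train = np.random.rand(100, 500)        # stand-in: top-layer feature vectors
y_train = np.random.randint(0, 10, 100)   # stand-in: class labels

digit_clf = SVC(kernel="rbf")             # RBF-kernel SVM, as used for MNIST
digit_clf.fit(X_train, y_train)

person_clf = LinearSVC()                  # linear SVM, as used for INRIA
# For INRIA, X_train would concatenate 4th-layer outputs with HOG features.
person_clf.fit(X_train, (y_train > 4).astype(int))   # stand-in binary labels
```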

  42. Why are HOG features added? • Because part-like features are very sparse • Having a template of the human figure helps a lot

  43. RBM • A two-layer pairwise MRF with a full set of hidden-visible connections • The RBM is an energy-based model • Hidden random variables are binary; visible variables can be binary or continuous • Inference is straightforward: the conditionals p(h | v) and p(v | h) factorize (see below) • Contrastive Divergence learning is used for training [Diagram: visible units v connected to hidden units h by weights w]
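For reference, the standard RBM energy and factorized conditionals for binary units (the slide's equation images are not in the transcript) are:

```latex
E(\mathbf{v}, \mathbf{h}) = -\mathbf{v}^{\top} W \mathbf{h} - \mathbf{b}^{\top}\mathbf{v} - \mathbf{c}^{\top}\mathbf{h}
\qquad
p(h_{j} = 1 \mid \mathbf{v}) = \sigma\big(c_{j} + \mathbf{v}^{\top} W_{\cdot j}\big)
\qquad
p(v_{i} = 1 \mid \mathbf{h}) = \sigma\big(b_{i} + W_{i \cdot}\, \mathbf{h}\big)
```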

  44. Why Unsupervised Bottom-Up • Discriminative learning of deep structures has not been successful • Requires large training sets • Easily overfits for large models • First-layer gradients are relatively small • Alternative hybrid approach • Learn a large set of first-layer features generatively • Later, switch to a discriminative model to select the discriminative features from those learned • Fine-tune the features discriminatively

  45. INRIA Results (Cont'd) • Miss rate at different FPPW (false positives per window) rates • FPPI (false positives per image) is a better indicator of performance • More experiments on the size of features and the number of layers are desired
