
Lecture: Deep Convolutional Neural Networks


Presentation Transcript


  1. Lecture: Deep Convolutional Neural Networks Shubhang Desai Stanford Vision and Learning Lab

  2. Today’s agenda • Deep convolutional networks • History of CNNs • CNN dev • Architecture search

  3. Previously… 32x32x10 Conv Block Classification Output Feature Extractor Prediction Classifier Input Image Loss Function Input Label Loss Value

  4. Previously… 32x32x10 Conv Block Classification Output Feature Extractor Prediction Classifier Input Image 1) Minimize this… Loss Function Input Label Loss Value

  5. Previously… 32x32x10 Conv Block Classification Output Feature Extractor Prediction Classifier Input Image 2) By modifying this… 1) Minimize this… Loss Function Input Label Loss Value

  6. Previously… 32x32x10 Conv Block Classification Output Feature Extractor Prediction Classifier Input Image 2) By modifying this… 1) Minimize this… Loss Function Input Label Loss Value 3) Using gradient descent!
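To make the picture on this slide concrete, here is a minimal PyTorch sketch of the same pipeline: a single conv block as the feature extractor, a linear classifier, a loss computed against the input label, and one gradient-descent update. The 32x32 RGB input, 10 classes, and all layer sizes are illustrative assumptions, not taken from the lecture.

```python
import torch
import torch.nn as nn

# Feature extractor: the single "Conv Block" (sizes are assumptions).
feature_extractor = nn.Sequential(
    nn.Conv2d(in_channels=3, out_channels=10, kernel_size=3, padding=1),
    nn.ReLU(),
)                                             # output: 10 x 32 x 32
classifier = nn.Linear(10 * 32 * 32, 10)      # maps features to class scores
loss_fn = nn.CrossEntropyLoss()               # 1) the thing we minimize

params = list(feature_extractor.parameters()) + list(classifier.parameters())
optimizer = torch.optim.SGD(params, lr=1e-2)  # 3) plain gradient descent

image = torch.randn(1, 3, 32, 32)             # stand-in input image
label = torch.tensor([4])                     # stand-in input label

features = feature_extractor(image)
prediction = classifier(features.flatten(start_dim=1))
loss = loss_fn(prediction, label)             # loss value

optimizer.zero_grad()
loss.backward()                               # 2) gradients w.r.t. the weights
optimizer.step()                              # one gradient-descent update
```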

  7. Previously… Why only one convolution? 32x32x10 Conv Block Classification Output Feature Extractor Prediction Classifier Input Image 2) By modifying this… 1) Minimize this… Loss Function Input Label Loss Value 3) Using gradient descent!

  8. Convolutions Convolutions = Insights More Convolutions = More Insights?

  9. Recall Hubel and Wiesel…

  10. Recall Hubel and Wiesel… The thing has edges… The edges can be grouped into triangles and ovals… The triangles are ears, the oval is a body… It’s a mouse toy!

  11. Recall Hubel and Wiesel… The thing has edges… The edges can be grouped into triangles and ovals… The triangles are ears, the oval is a body… It’s a mouse toy!

  12. Convolutions Across Channels Image Filter Output

  13. Convolutions Across Channels Image Filter Output

  14. Convolutions Across Channels more output channels = more filters = more features we can learn! Image Filter Output
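As a rough illustration of this point (not code from the slides), the sketch below shows that each filter spans every input channel, and that asking for more output channels simply means learning more filters. The specific channel counts and kernel size are arbitrary assumptions.

```python
import torch
import torch.nn as nn

# 3 input channels (RGB), 16 output channels = 16 learnable filters.
conv = nn.Conv2d(in_channels=3, out_channels=16, kernel_size=5)

# Each of the 16 filters spans all 3 input channels:
print(conv.weight.shape)   # torch.Size([16, 3, 5, 5])

image = torch.randn(1, 3, 32, 32)
output = conv(image)
print(output.shape)        # torch.Size([1, 16, 28, 28]) -> one feature map per filter
```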

  15. Convolutions Across Channels Conv Block

  16. Stacking Convolutions Conv Block Conv Block Conv Block Conv Block Output Output Output Input Output

  17. Stacking Convolutions Conv Block Conv Block Conv Block Conv Block CONVOLUTIONAL NEURAL NETWORK! Output Output Output Input Output

  18. Convolutional Neural Networks (ConvNets) • Neural networks that stack multiple convolutional layers to produce output • Often end in fully-connected layers as the “classifier”
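A hedged sketch of what such a stack might look like in PyTorch: several conv blocks followed by a fully-connected classifier. The depths, channel counts, and pooling choices are illustrative assumptions, not the architecture from the lecture.

```python
import torch.nn as nn

# A small ConvNet for a 32x32 RGB input and 10 classes (sizes are assumptions).
convnet = nn.Sequential(
    # Conv Block 1
    nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),   # 16 x 16 x 16
    # Conv Block 2
    nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # 32 x 8 x 8
    # Conv Block 3
    nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # 64 x 4 x 4
    # Fully-connected "classifier" on top of the learned features
    nn.Flatten(),
    nn.Linear(64 * 4 * 4, 10),
)
```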

  19. History of ConvNets LeNet – 1998

  20. History of ConvNets AlexNet – 2012

  21. History of ConvNets NiN – 2013

  22. History of ConvNets Inception Network – 2015

  23. Why Do They Work So Well?

  24. Why Do They Work So Well?

  25. Why Do They Work So Well?

  26. Why Do They Work So Well?

  27. Why Do They Work So Well? This is the neural network’s “receptive field”: the region of the input it’s able to see!
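The receptive field grows as convolutions are stacked: each layer lets a unit see a slightly wider patch of the input. The small helper below (my own sketch using the standard recurrence, not code from the lecture) makes that growth explicit.

```python
def receptive_field(layers):
    """Receptive field of one output unit after a stack of conv/pool layers.

    `layers` is a list of (kernel_size, stride) pairs. Each layer adds
    (kernel_size - 1) times the product of the strides of earlier layers.
    """
    rf, jump = 1, 1            # start from a single input pixel
    for k, s in layers:
        rf += (k - 1) * jump   # how much this layer widens the view
        jump *= s              # effective stride measured in input pixels
    return rf

# Three stacked 3x3 convs (stride 1) see a 7x7 patch of the input.
print(receptive_field([(3, 1), (3, 1), (3, 1)]))   # -> 7
```

Note that three stacked 3x3 convolutions cover the same 7x7 patch as a single 7x7 convolution, but with fewer parameters and more nonlinearities in between, which is part of why deeper stacks work so well.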

  28. Great Applications of ConvNets “Staffordshire Bull Terrier” “Ranjay Krishna”

  29. What is CNN Dev? • Define the objective • What is the input/output? • What is the loss/objective function? • Create the architecture • How many conv layers? • What size are the convolutions? • How many fully-connected layers? • Define hyperparameters • What is the learning rate? • Train and evaluate • How did we do? • How can we do better?

  30. What is CNN Dev? • Define the objective • What is the input/output? • What is the loss/objective function? • Create the architecture • How many conv layers? • What size are the convolutions? • How many fully-connected layers? • Define hyperparameters • What is the learning rate? • Train and evaluate • How did we do? • How can we do better? Can this be automated?

  31. Neural Architecture Search Automatically finds the best architecture for a given task. Before, we had to find the best featurizer for a fixed classifier; now we find the best featurizer and classifier in tandem!
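To make the idea concrete, here is a toy random-search sketch over a small space of architecture choices. Real architecture-search methods (reinforcement-learning controllers, gradient-based search, etc.) are far more sophisticated; `train_and_evaluate` is a hypothetical placeholder for a full training and validation run.

```python
import random

def sample_architecture():
    # Randomly pick values for a few architectural knobs (choices are illustrative).
    return {
        "num_conv_layers": random.choice([2, 3, 4, 5]),
        "kernel_size":     random.choice([3, 5]),
        "channels":        random.choice([16, 32, 64]),
        "num_fc_layers":   random.choice([1, 2]),
    }

def train_and_evaluate(arch):
    # Hypothetical placeholder: a real system would build a ConvNet from
    # `arch`, train it, and return its validation accuracy.
    return random.random()

def search(num_trials=20):
    # Sample candidate architectures, score each one, keep the best.
    best_arch, best_score = None, float("-inf")
    for _ in range(num_trials):
        arch = sample_architecture()
        score = train_and_evaluate(arch)
        if score > best_score:
            best_arch, best_score = arch, score
    return best_arch

print(search())
```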

  32. In summary… We can use convolutions as a basis to build powerful visual systems. We can leverage deep learning to automatically learn the best ways to do previously difficult tasks in computer vision. Still lots of open questions! If you’re interested in machine learning and/or deep learning, take: • Machine Learning (CS 229) • Deep Learning (CS 230) • NLP with Deep Learning (CS 224n) • Convolutional Neural Networks (CS 231n)
