
Learning Invariances and Hierarchies


Presentation Transcript


  1. Learning Invariances and Hierarchies • Pierre Baldi, University of California, Irvine

  2. Two Questions • “If we solve computer vision, we have pretty much solved AI.” • A-NNs (artificial neural networks) vs B-NNs (biological neural networks) and Deep Learning.

  3. If we solve computer vision…

  4. If we solve computer vision… • If we solve computer audition…

  5. If we solve computer vision… • If we solve computer audition… • If we solve computer olfaction…

  6. If we solve computer vision… • If we solve computer audition… • If we solve computer olfaction… • If we solve computer vision, how can we build computers that can prove Fermat’s Last Theorem?

  7. Invariances • Invariances in audition. We can recognize a tune invariantly with respect to: intensity, speed, tonality, harmonization, instrumentation, style, background. • Invariances in olfaction. We can recognize an odor invariantly with respect to: concentrations, humidity, pressure, winds, mixtures, background.

  8. Non-Invariances • Invariances evolution did not care about (although we are still evolving!...) • We cannot recognize faces upside down. • We cannot recognize tunes played in reverse. • We cannot recognize stereoisomers as such: enantiomers can smell different (for example, the two mirror-image forms of carvone smell of spearmint and caraway).

  9. A-NNs vs B-NNs

  10. Origin of Invariances • Weight sharing and translational invariance. • Can we quantify approximate weight sharing? • Can we use approximate weight sharing to improve performance? • Some of the invariance comes from the architecture. • Some may come from the learning rules.
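The first bullet above is the textbook source of built-in invariance, and it is easy to see concretely: because a convolutional layer applies the same weights at every position, translating the input translates the feature map. A minimal NumPy sketch of this equivariance (my own illustration; the filter, sizes, and shift are arbitrary):

```python
# Minimal sketch (not from the slides): weight sharing in a 1-D
# convolutional layer makes the layer equivariant to translation,
# which is the architectural source of invariance mentioned above.
import numpy as np

def conv1d(x, w):
    """Valid 1-D convolution (sliding dot product, as in CNNs):
    the same shared filter w is applied at every position of x."""
    n, k = len(x), len(w)
    return np.array([np.dot(x[i:i + k], w) for i in range(n - k + 1)])

rng = np.random.default_rng(0)
x = rng.normal(size=16)   # input signal
w = rng.normal(size=3)    # shared weights (identical at every position)

shift = 4
x_shifted = np.roll(x, shift)

y = conv1d(x, w)
y_shifted = conv1d(x_shifted, w)

# Shifting the input shifts the feature map by the same amount
# (away from the boundaries): equivariance from weight sharing.
print(np.allclose(np.roll(y, shift)[shift:-shift], y_shifted[shift:-shift]))
```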

  11. Learning Invariances [Diagram: a Hopfield-style network on states in {−1, +1}^n with symmetric connections w_ij = w_ji and energy E, trained by the Hebb rule. The Hebb rule commutes with the isometry group I(H) of the hypercube H: an acyclic orientation O(H) of the hypercube is carried to the acyclic orientation I(O(H)).]
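Under my reading of the diagram (a Hopfield-style network with symmetric weights trained by the simple Hebb rule), one instance of the pictured commutation can be checked directly: flipping the sign of a fixed coordinate is an isometry of the hypercube H = {−1, +1}^n, and Hebbian training on the flipped patterns yields exactly the correspondingly transformed weights. A small NumPy check, with all notation assumed:

```python
# Sketch (assumption: the slide's diagram refers to Hopfield networks
# with symmetric weights w_ij = w_ji trained by the simple Hebb rule).
# One isometry of the hypercube H = {-1,+1}^n is flipping the sign of
# a fixed coordinate; the Hebb rule commutes with it: training on the
# flipped patterns gives the weights D W D, with D the sign-flip matrix.
import numpy as np

def hebb(patterns):
    """Hebb rule: W = sum_k x_k x_k^T, symmetric, zero diagonal."""
    W = sum(np.outer(x, x) for x in patterns)
    np.fill_diagonal(W, 0)
    return W

rng = np.random.default_rng(1)
n = 6
patterns = [rng.choice([-1, 1], size=n) for _ in range(4)]

D = np.eye(n)
D[2, 2] = -1                      # isometry of H: flip coordinate 2

W = hebb(patterns)
W_flipped = hebb([D @ x for x in patterns])

print(np.allclose(W_flipped, D @ W @ D))   # True: Hebb commutes with the isometry
```

Since the energy −(1/2) sᵀW s is preserved under the paired transformation of states and weights, the acyclic orientation that the dynamics induce on H is carried to an isometric one, which is how I read the O(H) → I(O(H)) arrow.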

  12. Deep Learning ≈ Deep Targets? Training set: (x_i, y_i) for i = 1, …, m

  13.–17. Deep Target Algorithms [five slides of figures stepping through the algorithm; only the titles survive in the transcript]
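Since only the titles of slides 13–17 survive, here is a sketch of the core deep target idea as I understand Baldi and Sadowski's framework: rather than backpropagating gradients, give each hidden layer its own targets, chosen so that the final error through the fixed layers above is reduced, and then train every layer on its targets as a shallow supervised problem. The toy task, sizes, and search scheme below are all my own assumptions:

```python
# Sketch of the deep target idea (my reconstruction, not the slides'
# code): for each example, search the (here small, binary) space of
# hidden activations for the vector that minimizes the output error
# through the frozen top layer, then train each layer on its own
# shallow supervised problem.
import itertools
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(2)
n_in, n_hid = 4, 3
W1 = rng.normal(scale=0.5, size=(n_hid, n_in))
W2 = rng.normal(scale=0.5, size=(1, n_hid))

# Deliberately easy toy task (so the per-layer problems are solvable):
# y = 1 iff the first two inputs are both positive.
X = rng.choice([-1.0, 1.0], size=(32, n_in))
Y = ((X[:, 0] > 0) & (X[:, 1] > 0)).astype(float).reshape(-1, 1)

candidates = [np.array(c, dtype=float)
              for c in itertools.product([0.0, 1.0], repeat=n_hid)]
lr = 0.5
for epoch in range(300):
    for x, y in zip(X, Y):
        # 1. Deep target for the hidden layer: best candidate activation
        #    as judged by the current (frozen) output layer.
        t = min(candidates, key=lambda c: ((y - sigmoid(W2 @ c)) ** 2).sum())
        # 2. Shallow update of the hidden layer toward its target t.
        h = sigmoid(W1 @ x)
        W1 += lr * np.outer((t - h) * h * (1 - h), x)
        # 3. Shallow update of the output layer toward its target y.
        h = sigmoid(W1 @ x)
        o = sigmoid(W2 @ h)
        W2 += lr * np.outer((y - o) * o * (1 - o), h)

preds = (sigmoid(W2 @ sigmoid(W1 @ X.T)) > 0.5).astype(float)
print("training accuracy:", float((preds == Y.T).mean()))
```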

  18. In spite of the vanishing gradient problem (and the Newton problem), nothing seems to beat backpropagation. • Is backpropagation biologically plausible?
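The vanishing gradient problem named here is visible in one line of arithmetic: the logistic sigmoid's derivative is at most 1/4, so a gradient backpropagated through a stack of sigmoid layers shrinks geometrically with depth. A quick numeric illustration (toy scalar chain of my own):

```python
# Quick illustration of the vanishing gradient problem: in a chain of
# sigmoid units, the backpropagated gradient picks up a factor
# w * sigma'(z), with sigma'(z) <= 1/4, per layer, so it decays
# geometrically with depth.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

x, w = 0.5, 1.0          # toy scalar input and (shared) unit weight
for depth in (5, 10, 20, 40):
    a, grad = x, 1.0
    for _ in range(depth):
        a = sigmoid(w * a)
        grad *= w * a * (1 - a)   # chain rule through one sigmoid layer
    print(f"depth {depth:3d}: gradient magnitude {grad:.3e}")
```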

  19. Mathematics of Dropout (Cheap Approximation to Training Full Ensemble)
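One result behind this slide title, as I understand Baldi and Sadowski's dropout analysis: for logistic units, the average over the exponentially many dropout subnetworks is well approximated by a single forward pass with the weights scaled by the keep probability p, which is the standard test-time rule. A Monte Carlo sanity check on a single unit (toy numbers of my own):

```python
# Sanity check (toy example of my own) that the dropout ensemble
# average is well approximated by one deterministic pass with weights
# scaled by the keep probability p: the "cheap approximation to
# training the full ensemble" of the slide title.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(3)
n, p = 20, 0.5                      # number of inputs, keep probability
w = rng.normal(size=n)              # unit weights
x = rng.normal(size=n)              # one input vector

# Monte Carlo over dropout masks: the true ensemble expectation E[O].
masks = (rng.random((100_000, n)) < p).astype(float)
ensemble = sigmoid(masks @ (w * x)).mean()

# Single deterministic pass with p-scaled weights.
scaled = sigmoid(p * np.dot(w, x))

print(f"ensemble E[O] = {ensemble:.4f}, scaled pass = {scaled:.4f}")
```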

  20. Two Questions • “If we solve computer vision, we have pretty much solved AI.” • A-NNs vs B-NNs and Deep Learning.
