Secrets of Neural Network Models - PowerPoint PPT Presentation

Presentation Transcript

  1. Note: These slides have been provided online for the convenience of students attending the 2003 Merck summer school, and for individuals who have explicitly been given permission by Ken Norman. Please do not distribute these slides to third parties without permission from Ken (which is easy to get… just email Ken at knorman@princeton.edu). Secrets of Neural Network Models Ken Norman Princeton University July 24, 2003

  2. The Plan, and Acknowledgements • The Plan: • I will teach you all of the secrets of neural network models in 2.5 hours • Lecture for the first half • Hands-on workshop for the second half • Acknowledgements: • Randy O’Reilly • my lab: Greg Detre, Ehren Newman, Adler Perotte, and Sean Polyn

  3. The Big Question • How does the gray glop in your head give rise to cognition? • We know a lot about the brain, and we also know a lot about cognition • The real challenge is to bridge between these two levels

  4. Complexity and Levels of Analysis • The brain is very complex: billions of neurons, trillions of synapses, all changing every nanosecond • Each neuron is a very complex entity unto itself • We need to abstract away from this complexity! • Is there some simpler, higher level for describing what the brain does during cognition?

  5. We want to draw on neurobiology for ideas about how the brain performs a particular kind of task • Our models should be consistent with what we know about how the brain performs the task • But at the same time, we want to include only aspects of neurobiology that are essential for explaining task performance

  6. Learning and Development • Neural network models provide an explicit, mechanistic account of how the brain changes as a function of experience • Goals of learning: • To acquire an internal representation (a model) of the world that allows you to predict what will happen next, and to make inferences about “unseen” aspects of the environment • The system must be robust to noise/degradation/damage • Focus of workshop: Use neural networks to explore how the brain meets these goals

  7. Outline of Lecture • What is a neural network? • Principles of learning in neural networks: • Hebbian learning: Simple learning rules that are very good at extracting the statistical structure of the environment (i.e., what things are there in the world, and how are they related to one another) • Shortcomings of Hebbian learning: It’s good at acquiring coarse category structure (prototypes) but it’s less good at learning about atypical stimuli and arbitrary associations • Error-driven learning: Very powerful rules that allow networks to learn from their mistakes

  8. Outline, Continued • The problem of interference in neocortical networks, and how the hippocampus can help alleviate this problem • Brief discussion of PFC and how networks can support active maintenance in the face of distracting information • Background information for the “hands-on” portion of the workshop

  9. Overall Philosophy • The goal is to give you a good set of intuitions for how neural networks function • I will simplify and gloss over lots of things. • Please ask questions if you don’t understand what I’m saying...

  10. What is a neural network? • Neurons measure how much input they receive from other neurons; they “fire” (send a signal) if input exceeds a threshold value • Input is a function of firing rate and connection strength • Learning in neural networks involves adjusting connection strength

  11. What is a neural network? • Key simplifications: • We reduce all of the complexity of neuronal firing to a single number, the activity of the neuron, that reflects how often the neuron is spiking • We reduce all of the complexity of synaptic connections between neurons to a single number, the synaptic weight, that reflects how strong the connection is
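The two simplifications above reduce a unit to a single number computed from weighted inputs. A minimal sketch in Python (not from the original slides; the inputs, weights, and threshold below are made up for illustration):

```python
import numpy as np

def unit_activity(inputs, weights, threshold=0.5):
    """One simplified unit: activity is 1 ("firing") if the weighted
    sum of input activities exceeds a threshold, else 0 ("silent")."""
    net_input = np.dot(inputs, weights)   # firing rates x connection strengths
    return 1.0 if net_input > threshold else 0.0

# A toy "smoke detector" unit with three input lines
inputs = np.array([1.0, 1.0, 0.0])     # e.g., heat, particles, light
weights = np.array([0.4, 0.4, 0.1])    # connection strengths (synaptic weights)
print(unit_activity(inputs, weights))  # net input 0.8 > 0.5, so it fires: 1.0
```

Learning (covered below) amounts to adjusting the `weights` vector.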

  12. Neurons are Detectors • Each neuron detects some set of conditions (e.g., a smoke detector detects smoke) • A unit’s “representation” is whatever set of conditions it detects

  13. Understanding Neural Components in Terms of the Detector Model

  14. Detector Model • Neurons feed on each other’s outputs; layers of ever more complicated detectors • Things can get very complex in terms of content, but each neuron is still carrying out the basic detector function

  15. Two-layer Attractor Networks (diagram: Hidden Layer (Internal Representation) connected to Input/Output Layer) • Model of processing in neocortex • Circles = units (neurons); lines = connections (synapses) • Unit brightness = activity; line thickness = synaptic weight • Connections are symmetric

  16. Two-layer Attractor Networks (diagram: Hidden Layer (Internal Representation) connected to Input/Output Layer, with an inhibitory interneuron labeled I) • Units within a layer compete to become active • Competition is enforced by inhibitory interneurons that sample the amount of activity in the layer and send back a proportional amount of inhibition • Inhibitory interneurons prevent epilepsy in the network • Inhibitory interneurons are not pictured in subsequent diagrams
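The competition enforced by inhibitory interneurons is often approximated in models as k-winners-take-all inhibition. A rough sketch (my illustration, not code from the deck; the function name and choice of k are arbitrary):

```python
import numpy as np

def kwta(drives, k=1):
    """Crude stand-in for the layer's inhibitory interneurons: only the
    k most strongly driven units stay active; feedback inhibition,
    scaled to the layer's activity, silences the rest."""
    act = np.asarray(drives, dtype=float)
    inhibition = np.sort(act)[-k]        # set inhibition at the k-th largest drive
    return np.where(act >= inhibition, act, 0.0)

print(kwta([0.2, 0.9, 0.4, 0.7], k=2))  # [0.  0.9 0.  0.7]
```

The more total excitation the layer receives, the higher the inhibition threshold rises, which is the "proportional" feedback described on the slide.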

  17. Two-layer Attractor Networks (diagram: Hidden Layer (Internal Representation) connected to Input/Output Layer) • These networks are capable of sustaining a stable pattern of activity on their own • “Attractor” = a fancy word for “stable pattern of activity” • Real networks are much larger than this; also, more than 1 unit is active in the hidden layer...

  18. Properties of Two-Layer Attractor Networks • I will show that these networks are capable of meeting the “learning goals” outlined earlier • Given partial information (e.g., seeing something that has wings and feathers), the networks can make a “guess” about other properties of that thing (e.g., it probably flies) • Networks show graceful degradation

  19. “Pattern Completion” in two-layer networks (diagram: units for wings, beak, feathers, flies)

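The pattern completion shown on these slides can be sketched with a tiny Hopfield-style attractor network with symmetric weights. This is an illustrative toy of my own, not the simulator used in the workshop:

```python
import numpy as np

# One stored pattern: "bird" = (wings, beak, feathers, flies)
pattern = np.array([1, 1, 1, 1])
W = np.outer(pattern, pattern).astype(float)  # symmetric connections
np.fill_diagonal(W, 0)                        # no self-connections

def settle(state, steps=5):
    """Let the network settle: each unit repeatedly recomputes its
    activity from its net input until the pattern is stable."""
    s = state.copy()
    for _ in range(steps):
        s = (W @ s > 0).astype(int)           # threshold each unit
    return s

cue = np.array([1, 0, 1, 0])                  # partial cue: wings + feathers
print(settle(cue))                            # [1 1 1 1]: the full "bird" pattern
```

Feeding in the partial cue recovers the whole stored pattern, which is the network's "guess" that the winged, feathered thing also has a beak and flies. The same settling process underlies robustness to damage: deleting a unit still leaves enough mutual excitation to reactivate the rest.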

  23. Networks are Robust to Damage, Noise (diagram: units for wings, beak, feathers, flies)

  24. Networks are Robust to Damage, Noise (diagram: wings, feathers, flies; the “beak” unit has been removed)


  28. Learning: Overview • Learning = changing connection weights • Learning rules: How to adjust weights based on local information (presynaptic and postsynaptic activity) to produce appropriate network behavior • Hebbian learning: building a statistical model of the world, without an explicit teacher... • Error-driven learning: rules that detect undesirable states and change weights to eliminate these undesirable states...

  29. Building a Statistical Model of the World • The world is inhabited by things with relatively stable sets of features • We want to wire detectors in our brains to detect these things. How can we do this? • Answer: Leverage correlation • The features of a particular thing tend to appear together, and to disappear together; a thing is nothing more than a correlated cluster of features • Learning mechanisms that are sensitive to correlation will end up representing useful things

  30. Hebbian Learning • How does the brain learn about correlations? • Donald Hebb proposed the following mechanism: • When the pre-synaptic neuron and post-synaptic neuron are active at the same time, strengthen the connection between them • “neurons that fire together, wire together”

  31. Hebbian Learning


  34. Hebbian Learning • Proposed by Donald Hebb • When the pre-synaptic (sending) neuron and post-synaptic (receiving) neuron are active at the same time, strengthen the connection between them • “neurons that fire together, wire together” • When two neurons are connected, and one is active but the other is not, reduce the connections between them • “neurons that fire apart, unwire”
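The two clauses of the rule on this slide translate directly into a weight update for a single synapse. A sketch (the function name and learning rate are my own choices, not from the deck):

```python
def hebb_update(w, pre, post, lr=0.1):
    """One Hebbian update for a single synapse; pre and post are 0/1
    activities of the sending and receiving neurons."""
    if pre and post:          # fire together -> wire together
        return w + lr
    if pre != post:           # fire apart -> unwire
        return w - lr
    return w                  # both silent -> no change

w = 0.5
w = hebb_update(w, pre=1, post=1)  # connection strengthened
w = hebb_update(w, pre=1, post=0)  # connection weakened back down
print(w)
```

Note that the rule is local: it uses only the activities on the two sides of the synapse, which is what makes it biologically plausible.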


  37. Biology of Hebbian Learning: NMDA-Mediated Long-Term Potentiation

  38. Biology of Hebbian Learning: • Long-Term Depression • When the postsynaptic neuron is depolarized, but presynaptic activity is relatively weak, you get weakening of the synapse

  39. What Does Hebbian Learning Do? • Hebbian learning tunes units to represent correlated sets of input features. • Here is why: • Say that a unit has 1,000 inputs • In this case, turning on and off a single input feature won’t have a big effect on the unit’s activity • In contrast, turning on and off a large cluster of 900 input features will have a big effect on the unit’s activity


  42. Hebbian Learning • Because small clusters of inputs do not reliably activate the receiving unit, the receiving unit does not learn much about these inputs


  46. Hebbian Learning Big clusters of inputs reliably activate the receiving unit, so the network learns more about big (vs. small) clusters (the “gang effect”).


  48. What Does Hebbian Learning Do? • Hebbian learning finds the thing in the world that most reliably activates the unit, and tunes the unit to like that thing even more!
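The “gang effect” can be demonstrated with one receiving unit trained by a Hebbian rule that moves active weights toward the current input (the Δw = ε·y·(x - w) form). The cluster sizes, threshold, and learning rate below are illustrative choices of mine:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20
w = np.full(n, 0.5)            # one receiving unit with 20 input synapses

big = np.arange(0, 15)         # a "gang" of 15 features that co-occur
small = np.arange(15, 20)      # a small cluster of 5 features

for _ in range(100):
    x = np.zeros(n)
    if rng.random() < 0.5:
        x[big] = 1.0           # the big cluster appears...
    else:
        x[small] = 1.0         # ...or the small cluster does
    y = 1.0 if w @ x > 3.0 else 0.0  # unit fires only with enough net input
    w += 0.01 * y * (x - w)          # when it fires, weights move toward x

# The big cluster reliably fires the unit, so its weights grow;
# the small cluster never crosses threshold, so nothing is learned about it.
print(w[big].mean(), w[small].mean())
```

After training, the unit is tuned ever more strongly to the thing that most reliably activates it, exactly as the slide describes.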

  49. Hebbian Learning (diagram: input features scaly, slithers, wings, beak, feathers, flies)
