
Computational Cognitive Neuroscience Lab






Presentation Transcript


  1. Computational Cognitive Neuroscience Lab Today: Model Learning

  2. Computational Cognitive Neuroscience Lab • Today: • Homework is due Friday, Feb 17 • Chapter 4 homework is shorter than the last one! • Undergrads omit 4.4, 4.5, 4.7c, 4.7d

  3. Hebbian Learning • “Neurons that fire together, wire together” • Correlation between sending and receiving activity strengthens the connection between them • “Don’t fire together, unwire” • Anti-correlation between sending and receiving activity weakens the connection
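
A minimal numerical sketch of that rule, assuming one receiving unit with activity y and a vector of sending activities x (the function and variable names here are illustrative, not from the textbook's simulator):

```python
import numpy as np

def hebbian_update(w, x, y, lrate=0.1):
    """Basic Hebbian rule: dw = lrate * x * y.

    w : weights from the sending units onto one receiving unit
    x : sending (presynaptic) activities
    y : receiving (postsynaptic) activity
    Correlated activity (x and y both high) strengthens w; with signed
    activities, anti-correlated x and y push w downward instead.
    """
    return w + lrate * x * y

w = np.zeros(3)
x = np.array([1.0, 0.0, -1.0])   # sending activity
y = 1.0                          # receiving activity
w = hebbian_update(w, x, y)
print(w)                         # [ 0.1  0.  -0.1]
```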

  4. LTP/D via NMDA receptors • NMDA receptors allow calcium to enter the (postsynaptic) cell • NMDA receptors are blocked by Mg++ ions, which are expelled when the membrane potential increases • Glutamate (excitatory) binds to the unblocked NMDA receptor, causing a structural change that allows Ca++ to pass through

  5. Calcium and Synapses • Calcium initiates multiple chemical pathways, depending on the level of calcium • Low Ca++ --> long-term depression (LTD) • High Ca++ --> long-term potentiation (LTP) • LTP/D effects: new postsynaptic receptors, increased dendritic spine size, or increased presynaptic release processes (via retrograde messenger)
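
A toy version of the slide's low-vs-high calcium rule; the threshold and step size are made-up illustrative values, not measured quantities:

```python
def synaptic_change(ca, theta=0.5, step=0.1):
    """Toy rule from the slide: low Ca++ -> LTD, high Ca++ -> LTP.

    theta and step are illustrative placeholders, not measured values.
    """
    return +step if ca >= theta else -step   # weaken below threshold, strengthen above

for ca in (0.2, 0.8):
    print(ca, synaptic_change(ca))   # 0.2 -> -0.1 (LTD), 0.8 -> +0.1 (LTP)
```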

  6. Fixing Hebbian learning • Pure Hebbian learning drives the weights to grow without bound! • Oja’s normalization (savg_corr) • When to learn? • Conditional PCA--learn only when you see something interesting • A single unit hogs everything? • kWTA and Contrast enhancement --> specialization
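
As a sketch of the normalization idea, here is the classic Oja rule, whose subtractive term keeps the weight vector bounded; the textbook's savg_corr correction differs in detail, so this only conveys the general flavor:

```python
import numpy as np

def oja_update(w, x, lrate=0.1):
    """Oja's rule: dw = lrate * y * (x - y * w).

    The subtractive y*w term normalizes the weights, fixing the runaway
    growth of the pure Hebbian rule.
    """
    y = w @ x
    return w + lrate * y * (x - y * w)

rng = np.random.default_rng(0)
w = rng.normal(size=2)
for _ in range(1000):
    x = rng.multivariate_normal([0, 0], [[3, 1], [1, 1]])
    w = oja_update(w, x, lrate=0.01)
print(w, np.linalg.norm(w))   # w heads toward the first principal component, |w| stays near 1
```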

  7. Principal Components Analysis (PCA) • Principal, as in primary, not principle, as in some idea • PCA seeks a linear combination of variables such that maximum variance is extracted from the variables. It then removes this variance and seeks a second linear combination which explains the maximum proportion of the remaining variance, and so on until you run out of variance.

  8. PCA continued • This is like linear regression, except you take the whole collection of variables (a vector) and correlate it with itself to make a matrix--in effect, the variables are regressed on themselves • The line of best fit through this regression is the first principal component!
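
A brief NumPy sketch of the procedure on made-up data: correlate the variables with themselves (the covariance matrix), take the leading eigenvector as the first principal component, remove that variance, and repeat:

```python
import numpy as np

rng = np.random.default_rng(1)
# made-up data: 200 samples of 3 correlated variables, mean-centered
X = rng.normal(size=(200, 3)) @ np.array([[2.0, 0.5, 0.1],
                                          [0.0, 1.0, 0.3],
                                          [0.0, 0.0, 0.2]])
X = X - X.mean(axis=0)

residual = X.copy()
for _ in range(3):
    cov = residual.T @ residual / len(residual)      # variables correlated with themselves
    vals, vecs = np.linalg.eigh(cov)
    pc, var = vecs[:, -1], vals[-1]                  # direction of maximum remaining variance
    print("component:", np.round(pc, 2), " variance explained:", round(var, 2))
    residual = residual - np.outer(residual @ pc, pc)  # remove that variance, then repeat
```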

  9. PCA cartoon

  10. Conditional PCA • “Perform PCA only when a particular input is received” • The “condition” is whatever determines when a receiving unit is active • Competition means hidden units will specialize for particular inputs • So hidden units only learn when their favorite input is available
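
A small sketch of the CPCA update, dw = lrate * y * (x - w), on a made-up pair of input patterns; when the receiving activity y is 0 the unit learns nothing, so it only absorbs the statistics of its preferred input:

```python
import numpy as np

def cpca_update(w, x, y, lrate=0.1):
    """Conditional PCA rule: dw = lrate * y * (x - w).

    Learning is gated by the receiving activity y: when y = 0 nothing
    changes, so each hidden unit learns only about the inputs that drive it.
    """
    return w + lrate * y * (x - w)

rng = np.random.default_rng(2)
w = np.full(4, 0.5)
for _ in range(500):
    if rng.random() < 0.5:
        x, y = np.array([1.0, 1.0, 0.0, 0.0]), 1.0   # the unit's preferred input: it wins
    else:
        x, y = np.array([0.0, 0.0, 1.0, 1.0]), 0.0   # other input: unit inactive, no learning
    w = cpca_update(w, x, y, lrate=0.05)
print(np.round(w, 2))   # ~[1. 1. 0. 0.]: the weights mirror the preferred pattern
```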

  11. Self-organizing learning • kWTA determines which hidden units are active for a given input • CPCA ensures those hidden units learn only about a single aspect of that input • Contrast enhancement -- drive high weights higher, low weights lower • Contrast enhancement helps units specialize (and share)
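
A toy sketch of both mechanisms: the kWTA step here is a simple hard top-k selection, and the sigmoidal contrast-enhancement function only captures the flavor of the simulator's wt_gain parameter, not its exact functional form:

```python
import numpy as np

def kwta(net_input, k):
    """Keep only the k most strongly driven units active (winners-take-all)."""
    act = np.zeros_like(net_input)
    winners = np.argsort(net_input)[-k:]
    act[winners] = 1.0
    return act

def contrast_enhance(w, gain=4.0):
    """Illustrative sigmoid: weights above 0.5 are pushed toward 1,
    weights below 0.5 toward 0 (same flavor as wt_gain, not the exact form)."""
    return w**gain / (w**gain + (1.0 - w)**gain)

net = np.array([0.2, 0.9, 0.4, 0.7])
print(kwta(net, k=2))                                              # [0. 1. 0. 1.]
print(np.round(contrast_enhance(np.array([0.3, 0.5, 0.8])), 2))    # ~[0.03 0.5  1.  ]
```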

  12. Bias-variance dilemma • High bias--actual experience does not change the model much, so the biases had better be good! • Low bias--experience (random error included) largely determines what is learned, so the learned model can come out differently each time: high model variance
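
A quick sketch of the dilemma on made-up data: fit a rigid (high-bias) and a flexible (low-bias) model to many independent samples of "experience" and measure how much the resulting predictions vary:

```python
import numpy as np

rng = np.random.default_rng(3)

def sample_data(n=20):
    """One noisy sample of 'experience' from the same underlying world."""
    x = np.linspace(0, 1, n)
    return x, np.sin(2 * np.pi * x) + rng.normal(scale=0.3, size=n)

for degree, label in [(1, "high bias (line)"), (9, "low bias (degree-9 poly)")]:
    fits = [np.polyfit(*sample_data(), deg=degree) for _ in range(50)]
    spread = np.std([np.polyval(c, 0.5) for c in fits])
    print(label, "- prediction spread at x=0.5:", round(spread, 3))
    # the flexible model tracks the random error in each sample, so its
    # predictions vary more from one sample of experience to the next
```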

  13. Architecture as Bias • Inhibition drives competition, and competition determines which units are active, and the unit activity determines learning • Thus, deciding which units share inhibitory connections (are in the same layer) will affect the learning • This architecture is the learning bias!

  14. Fidelity and Simplicity of representations • Information must be lost in the world-to-brain transformation (p118) • There is a tradeoff between the amount of information lost and the complexity of the representation • The fidelity / simplicity tradeoff is set by • Conditional PCA (first principal component only) • Competition (k value) • Contrast enhancement (savg_corr, wt_gain)
