
The search for organizing principles of brain function



1. The search for organizing principles of brain function
• Needed at multiple levels: synapse => cell => brain area (cortical maps) => hierarchy of areas
• Self-organization: Hebbian learning => feature-analyzing cells => cortical maps
• Information theory, a neural optimization principle, and applications
• Prediction, control, and the “local cortical circuit” (LCC)

2. Self-organization
• Pattern formation (Turing, 1952) from simple local rules (e.g., Hebb, 1949)
• Hebb rule: When the firing of cell A contributes to that of cell B, increase the efficiency (synaptic strength) with which A excites B to fire. (Sketched in code below.)
• An early puzzle: How does a layer of orientation-selective cells (Hubel & Wiesel, 1960s-70s) form?
• An early example of the power of Hebb learning: Hebb rule + short connections + locally-correlated random electrical activity can => orientation-selective cells & their patterning in a layer (RL)
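As a rough illustration (not from the slides), here is a minimal Hebbian update in Python: each weight grows in proportion to presynaptic times postsynaptic activity, with per-cell normalization to keep growth bounded. The layer sizes, learning rate, and random-input stand-in are all assumptions for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

n_in, n_out = 16, 4        # illustrative layer sizes (assumption)
eta = 0.01                 # learning rate (assumption)
W = rng.normal(scale=0.1, size=(n_out, n_in))

for _ in range(1000):
    x = rng.normal(size=n_in)   # stand-in for locally-correlated input activity
    y = W @ x                   # postsynaptic responses of cells B
    # Hebb rule: strengthen W[b, a] in proportion to pre * post activity
    W += eta * np.outer(y, x)
    # Normalize each cell's weight vector so Hebbian growth stays bounded
    W /= np.linalg.norm(W, axis=1, keepdims=True)
```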

3. Self-organization in cortical models
• Orientation map (below; R Linsker, 1986)
• Movie: J Sirosh, R Miikkulainen, & JA Bednar (UT Austin), 1996 [courtesy JA Bednar]
  http://www.cs.utexas.edu/~nn/web-pubs/htmlbook96/sirosh/or_quad.mpg
• Click for movie: or_quad.mov

4. Some higher-level properties that can result from Hebbian learning
• Feature-analyzing (selective) cells.
• “Infomax” principle (RL): Create a layer of cells whose outputs convey maximum (Shannon) information about its inputs, subject to biological constraints & costs (types of allowed processing, wiring length, energy cost, etc.). An optimal encoding principle.
• Various uses of infomax:
  - Models of neural learning & development
  - Qualitative (RL, others) and quantitative (Atick et al.) experimental agreement
  - Infomax-based ICA (independent component analysis) (Bell & Sejnowski, 1995): reconstructs N statistically independent sources, given N linear combinations of them. (Sketched in code below.)
  - Nonlinear infomax is one way to generate “sparse representations.” Sparse coding was used to reconstruct 3 speech sources given only the composite signal at each of 2 receivers (RL, 2001).
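A minimal sketch of infomax ICA on a square linear mixture, using the natural-gradient form of the Bell & Sejnowski update (a standard variant, not necessarily the exact form the slide refers to). The sources, mixing matrix, batch size, and step size are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Two statistically independent (super-Gaussian) sources, linearly mixed
n, T = 2, 5000
S = rng.laplace(size=(n, T))        # independent sources (illustrative)
A = rng.normal(size=(n, n))         # unknown mixing matrix
X = A @ S                           # observed N linear combinations

W = np.eye(n)                       # unmixing matrix to be learned
eta = 0.001
for _ in range(200):
    for t in range(0, T, 50):       # small batches
        x = X[:, t:t + 50]
        u = W @ x
        y = 1.0 / (1.0 + np.exp(-u))   # logistic nonlinearity
        # Natural-gradient infomax update (Bell & Sejnowski, 1995)
        W += eta * (np.eye(n) + (1 - 2 * y) @ u.T / x.shape[1]) @ W

recovered = W @ X   # estimates of the sources, up to permutation & scaling
```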

5. Sparse representation of mixture of sources
[Figure: spectrogram of the mixture; axes: frequency vs. time]
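To illustrate what “sparse representation” means here: a short, assumed example mapping a signal to the time-frequency plane with an STFT, where most of the energy lands in a small fraction of the cells. The test signal, sample rate, and window length are stand-ins, not the slides' data.

```python
import numpy as np
from scipy.signal import stft

fs = 16000                                   # sample rate (assumption)
t = np.arange(fs) / fs
# Stand-in for a speech mixture: two tonal components
mixture = np.sin(2 * np.pi * 440 * t) + np.sin(2 * np.pi * 1000 * t)

# STFT: the mixture becomes a (freq x time) array in which most energy
# is concentrated in a few cells -- a sparse representation
freqs, times, Z = stft(mixture, fs=fs, nperseg=512)
power = np.abs(Z) ** 2

power_sorted = np.sort(power.ravel())[::-1]
cells_for_99pct = np.searchsorted(np.cumsum(power_sorted),
                                  0.99 * power_sorted.sum())
print(f"fraction of TF cells holding 99% of energy: "
      f"{cells_for_99pct / power.size:.4f}")
```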

6. Labeling using a source signature
[Figure: labeled spectrogram; axes: frequency vs. time]
• Can obtain source signature from:
  - Relative transfer function (attenuation & phase shift at each frequency) from source to the two receivers (used here; sketched in code below).
  - Other methods: pitch tracking; phoneme properties; de-mixing two overlapping sources using two received mixtures; etc. (None used here.)
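A minimal sketch (an assumed implementation, not the slides' code) of estimating a relative transfer function between two receivers: the per-frequency attenuation and phase shift, computed as an averaged cross-spectrum divided by an auto-spectrum.

```python
import numpy as np
from scipy.signal import stft

def relative_transfer_function(x1, x2, fs, nperseg=512):
    """Per-frequency ratio of receiver 2 to receiver 1 for one source:
    |H(f)| gives the attenuation, angle(H(f)) the phase shift."""
    _, _, X1 = stft(x1, fs=fs, nperseg=nperseg)
    _, _, X2 = stft(x2, fs=fs, nperseg=nperseg)
    # Cross-spectrum over auto-spectrum, averaged across time frames
    return ((X2 * np.conj(X1)).mean(axis=1)
            / ((np.abs(X1) ** 2).mean(axis=1) + 1e-12))
```

In use, each source's estimated H(f) serves as its signature: a time-frequency cell is labeled with the source whose signature best matches the observed inter-receiver ratio in that cell.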

7. Masking & reconstruction
[Figure: masked spectrogram; axes: frequency vs. time]
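A minimal sketch of the masking-and-reconstruction step under the usual assumptions: a binary time-frequency mask keeps the cells labeled as belonging to one source, zeros the rest, and the inverse STFT resynthesizes that source. The mask itself would come from the labeling step on the previous slide.

```python
import numpy as np
from scipy.signal import stft, istft

def mask_and_reconstruct(mixture, label_mask, fs, nperseg=512):
    """Zero out TF cells not assigned to the target source, then invert."""
    _, _, Z = stft(mixture, fs=fs, nperseg=nperseg)
    Z_masked = Z * label_mask            # binary (freq x time) mask, one source
    _, source_est = istft(Z_masked, fs=fs, nperseg=nperseg)
    return source_est
```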

8. Acoustic separation demo
• Mixture of 3 stereo speech sources
• Source 1: reconstruction & original
• Source 2: reconstruction & original
• Source 3: reconstruction & original

9. The “local cortical circuit” (LCC)
• Substantial uniformity of cell organization & connectivity across neocortical areas (Mountcastle)
• Core functions of the LCC “module”? A recurrent neural net that can combine “bottom-up” data and “top-down” expectations. LCC role in: forming generalizations? stabilizing feature analysis within each cortical processing area? Bayesian inference?
• It has long been clear that prediction, estimation, inference, & goal-directed motor control are important functions of mammalian brains.
• Recent work (RL): a neural-net algorithm for optimal Kalman estimation (prediction) and control. The algorithm implies a set of constraints on the neural-net circuitry & signal flows. This architecture turns out to be similar to the LCC. (A reference Kalman step is sketched below.)
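The slides do not spell out the neural-net algorithm itself; for reference, here is a minimal sketch of the standard Kalman predict/update cycle it builds on, with the predict step playing the “top-down expectation” role and the innovation term the “bottom-up data” role. All matrices are supplied by the caller.

```python
import numpy as np

def kalman_step(x, P, z, A, C, Q, R):
    """One predict/update cycle of the standard Kalman filter.

    x, P : prior state estimate and its covariance
    z    : new observation
    A, C : state-transition and observation matrices
    Q, R : process and observation noise covariances
    """
    # Predict ("top-down" expectation of the next state)
    x_pred = A @ x
    P_pred = A @ P @ A.T + Q
    # Update: weigh the prediction against "bottom-up" data via the Kalman gain
    S = C @ P_pred @ C.T + R
    K = P_pred @ C.T @ np.linalg.inv(S)
    x_new = x_pred + K @ (z - C @ x_pred)    # innovation = data - expectation
    P_new = (np.eye(len(x)) - K @ C) @ P_pred
    return x_new, P_new
```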

10. Some other important unsolved problems
• “Fast learning”: animals vs. neural nets
• Learning causal relations: deterministic or statistical? Learning powerful invariances and the “right” representations. Is statistical learning over-emphasized?
• Principles governing the processing, segregation, & integration of information streams (e.g., color, form, “what” & “where”)?
• Common ground between perception & human concept formation: learning similarity metrics that are useful for forming generalizations & for behavior.
• How is information coded? (Firing rates, spike timing, place coding, synchrony & phase-locking, ...?)
• What representations are really used by the brain? Some surprises, e.g., “change blindness” (R Rensink demo).
• The “binding problem”; self-awareness & consciousness
• Tools: how to probe circuit dynamics (of multiple interconnected cells) at fine spatial & temporal resolution?
