
Left over from units..

Explore the plasticity of general cell excitability and the laminar structure of the cortex in neuronal networks. Understand the importance of excitation, inhibition, and constraint satisfaction in network function.



Presentation Transcript


  1. Left over from units.. • Bias weight • General cell excitability is plastic, regardless of particular synaptic inputs (weights). See the review by Mozzachiodi & Byrne, 2010, TINS, on non-synaptic plasticity.

  2. Networks • Layers and layers of detectors...

  3. Networks • Biology of networks: the cortex • Excitation: • Unidirectional (transformations) • Bidirectional (pattern completion, amplification) • Inhibition: Controlling bidirectional excitation. • Constraint Satisfaction: Putting it all together.

  4. Cortex: Neurons • Two separate populations: • Excitatory (glutamate): Pyramidal, Spiny stellate. • Inhibitory (GABA): Chandelier, Basket.

  5. More recent images.. • Animation: file:///Users/frankmj/teach/cogsim/ctxhippo.mpg

  6. Excitatory vs Inhibitory Neurons • Excitatory neurons both project locally and make long-range projections between different cortical areas • Inhibitory neurons primarily project within small, localized regions of cortex • Excitatory neurons carry the information flow (long-range projections) • Inhibitory neurons are responsible for (locally) regulating the activation of excitatory neurons

  7. Laminar Structure of Cortex

  8. Layers • Layer = a bunch of neurons with similar connectivity • Localized to a particular region (physically contiguous) • All cells within a layer receive input from approximately the same places (i.e., from a common collection of layers) • All cells within a layer send their outputs to approximately the same places (i.e., to a common collection of layers)

  9. Laminar Structure of Cortex • Three Functional Layers

  10. Laminar Structure of Cortex • Three Functional Layers • Hidden layer transforms input-output mappings

  11. Laminar Structure of Cortex • Three Functional Layers • Hidden layer transforms input-output mappings • More hidden layers → richer transformations

  12. Laminar Structure of Cortex • Three Functional Layers • Hidden layer transforms input-output mappings • More hidden layers → richer transformations → less reflex-like... smarter? more "free"?

  13. Networks • Biology: Cortical layers and neurons • Excitation: • Unidirectional (transformations) • Similarity and Distributed Representations • Bidirectional (pattern completion, amplification) • Inhibition: Controlling bidirectional excitation. • Constraint Satisfaction: Putting it all together.

  14. Networks • Biology: Cortical layers and neurons • Excitation: • Unidirectional (transformations) • Similarity and Distributed Representations • Bidirectional (pattern completion, amplification) • Inhibition: Controlling bidirectional excitation. • Constraint Satisfaction: Putting it all together.

  15. Excitation (Unidirectional): Transformations

  16. Excitation (Unidirectional): Transformations • Detectors work in parallel to transform the input activity pattern into a hidden activity pattern.

  17. Excitation (Unidirectional): Transformations • Emphasizes some distinctions, collapses across others.

  18. Excitation (Unidirectional): Transformations • Emphasizes some distinctions, collapses across others. • A function of what the detectors detect (and what they ignore).
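
A minimal sketch of this idea (not the course's transform.proj; the weights and threshold are illustrative assumptions): each hidden "detector" computes a weighted sum over the input pattern and fires only if it crosses a threshold, so the hidden pattern emphasizes some input distinctions and collapses others.

```python
import numpy as np

rng = np.random.default_rng(0)

n_input, n_hidden = 9, 4                     # e.g., a 3x3 input grid, 4 detectors
W = rng.uniform(0, 1, (n_hidden, n_input))   # detector weights (assumed random here)
theta = 2.0                                  # firing threshold (assumed)

def transform(input_pattern):
    """Map an input activity pattern to a hidden activity pattern."""
    net = W @ input_pattern                  # each detector's weighted sum of inputs
    return (net > theta).astype(float)       # thresholding emphasizes/collapses distinctions

x = np.array([1, 0, 1, 0, 1, 0, 1, 0, 1], dtype=float)  # a 3x3 binary pattern, flattened
print(transform(x))                          # hidden pattern: which detectors fired
```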

  19. Emphasizing/Collapsing Distinctions • [Figure: digits network with Hidden units labeled 0-9 and Input units ordered 2 7 8 3 1 6 5 0 9 4]

  20. Emphasizing/Collapsing Distinctions • [Figure: digits network with Hidden units labeled 0-9 and Input units ordered 2 7 8 3 1 6 5 0 9 4] • Other (more interesting) examples?...

  21. [transform.proj] • Digit detectors: • tested with noisy digits • tested with letters

  22. Networks • Biology: Cortical layers and neurons • Excitation: • Unidirectional (transformations) • Similarity and Distributed Representations • Bidirectional (pattern completion, amplification) • Inhibition: Controlling bidirectional excitation. • Constraint Satisfaction: Putting it all together.

  23. Neural Pattern Similarity • Motivating points • Networks deal with observations (data: cats, tigers, sounds, events, ...) as patterns of spikes/rates over many neurons • If the data are continuous, there is zero probability of seeing exactly any given instance again. Why? (Relatedly, we would need infinitely many rules.) • Even if the data are not actually continuous, they are often practically continuous. Newton's laws vs QM, etc. • How many binary patterns are there for this 3x3 grid? • 2^9 = 512, which is basically infinity. For a 5x5 grid? • 2^25 = 33,554,432, which I cannot count to.

  24. Neural Pattern Similarity • Euclidean distance: motivating points • So we usually can't have rules dealing with individual things. • To class: But then how could we ever tell a network how we want it to act on infinitely many things? • We need to lump things together. Things which are lumped together based on their properties are ______. • How do we do that? How similar are two things? • Trivial similarity (identity): if it's not my toothbrush, it's not similar enough. • Overlap and near overlap (X-Y example) • The p-adic metric (google it) if you are wily, plus infinitely many more

  25. Euclidean distance: motivating points • Overlap and near overlap • Add "Y": four squares "missing", two squares "added". • List the squares, determine the differences (x_i − y_i), and add them. If we want these not to cancel, square them: Σ_i (x_i − y_i)²
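
A sketch of the X/Y overlap example on a 5x5 grid (these particular grids are my guess; only the "four missing, two added" counts and the distance formula come from the slide). Note how the raw differences partially cancel, while squaring counts every changed square:

```python
import numpy as np

X = np.array([[1, 0, 0, 0, 1],
              [0, 1, 0, 1, 0],
              [0, 0, 1, 0, 0],
              [0, 1, 0, 1, 0],
              [1, 0, 0, 0, 1]], dtype=float)   # letter "X" (assumed layout)

Y = np.array([[1, 0, 0, 0, 1],
              [0, 1, 0, 1, 0],
              [0, 0, 1, 0, 0],
              [0, 0, 1, 0, 0],
              [0, 0, 1, 0, 0]], dtype=float)   # letter "Y" (assumed layout)

diff = X - Y                  # (x_i - y_i): +1 = square "missing", -1 = square "added"
print(np.sum(diff))           # 2.0: the 4 missing and 2 added squares partially cancel
print(np.sum(diff ** 2))      # 6.0: squaring counts all 6 changed squares
```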

  26. Cluster Plots • Cluster plots provide a means of visualizing similarity relationships between patterns of activity in a network • Cluster plots are constructed based on the distances between patterns of activity • Euclidean distance = sum (across all units) of the squared difference in activation: d = Σ_i (x_i − y_i)² • Example...
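
A minimal sketch of how a cluster plot can be built from pattern distances, using generic scipy hierarchical clustering rather than the transform.proj implementation (the activity patterns and their labels are hypothetical):

```python
import numpy as np
import matplotlib.pyplot as plt
from scipy.cluster.hierarchy import linkage, dendrogram
from scipy.spatial.distance import pdist

patterns = np.array([
    [1, 1, 0, 0, 0],   # hypothetical hidden pattern for digit "0"
    [1, 0, 1, 0, 0],   # "1"
    [1, 0, 1, 1, 0],   # "noisy 1": one unit off from "1"
    [0, 0, 0, 1, 1],   # "7": shares little with the others
], dtype=float)

# pairwise distances, using the slide's sum-of-squared-differences measure
d = pdist(patterns, metric="sqeuclidean")

# agglomerative clustering; the resulting dendrogram is the "cluster plot",
# so "1" and "noisy 1" merge first (smallest distance)
Z = linkage(d, method="average")
dendrogram(Z, labels=["0", "1", "noisy 1", "7"])
plt.show()
```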

  27. [transform.proj] • Cluster plots (digits, noisy digits, hidden).

  28. Emphasizing/Collapsing Distinctions: Categorization

  29. Emphasizing/Collapsing Distinctions: Categorization • Emphasize distinctions: digits separated, even though they have perceptual overlap.

  30. Emphasizing/Collapsing Distinctions: Categorization • Emphasize distinctions: digits separated, even though they have perceptual overlap. • Collapse distinctions: noisy digits categorized as the same, even though they have perceptual differences.

  31. Detectors are Dedicated, Content-Specific

  32. Localist vs. Distributed Representations

  33. Localist vs. Distributed Representations • Localist = 1 unit responds to 1 thing (e.g., digits, grandmother cell).

  34. Localist vs. Distributed Representations • Localist = 1 unit responds to 1 thing (e.g., digits, grandmother cell). • Distributed = many units respond to 1 thing, one unit responds to many things.

  35. Localist vs. Distributed Representations • Localist = 1 unit responds to 1 thing (e.g., digits, grandmother cell). • Distributed = many units respond to 1 thing, one unit responds to many things. • With distributed representations, units correspond to stimulus features • as opposed to complete stimuli

  36. Digits With Distributed Representations • [Figure: Input and Hidden layers of the digits network]

  37. Advantages of Distributed Representations • Efficiency: Fewer Units Required • Same question as earlier: Q: How many binary patterns can you encode with these two units?

  38. Advantages of Distributed Representations • Efficiency: Fewer Units Required • The digits network can represent 10 digits using 5 "feature" units • Each digit is a unique combination of the 5 features, e.g.: • "0" = feature 3 • "1" = features 1, 4 • "2" = features 1, 2 • "3" = features 1, 2, 5 • "4" = features 3, 4 • "5" = features 1, 2, 3 • There are 32 unique ways to combine 5 features (2^n) • There are > 1 million unique ways to combine 20 features
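
A quick check of the counting argument. The feature assignments for digits 0-5 come from the slide; the codes for 6-9 are hypothetical fill-ins (any unused combinations would do):

```python
n_features = 5
print(2 ** n_features)    # 32 unique binary combinations of 5 features
print(2 ** 20)            # 1,048,576 (> 1 million) combinations of 20 features

codes = {
    "0": {3},             # from the slide
    "1": {1, 4},
    "2": {1, 2},
    "3": {1, 2, 5},
    "4": {3, 4},
    "5": {1, 2, 3},
    "6": {2, 4},          # hypothetical fill-ins for the remaining digits
    "7": {1, 5},
    "8": {2, 3},
    "9": {4, 5},
}
# every digit gets a distinct combination, so 5 units suffice for 10 digits
assert len(set(map(frozenset, codes.values()))) == 10
```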

  39. Advantages of Distributed Representations • Similarity and Generalization:

  40. Advantages of Distributed Representations • Similarity and Generalization: • If you represent stimuli in terms of their constituent features, stimuli with similar features get assigned similar representations • This allows you to generalize to novel stimuli based on their similarity to previously encountered stimuli
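
A minimal sketch of generalization via feature overlap (the stimuli, their feature vectors, and the nearest-neighbor readout are assumptions, not the course's model): stimuli sharing features get similar representations, so a novel stimulus can inherit behavior from its closest known neighbor.

```python
import numpy as np

known = {
    "cat":   np.array([1, 1, 0, 1, 0], dtype=float),  # hypothetical feature vectors
    "tiger": np.array([1, 1, 1, 1, 0], dtype=float),
    "car":   np.array([0, 0, 1, 0, 1], dtype=float),
}

novel = np.array([1, 1, 1, 0, 0], dtype=float)  # a never-before-seen stimulus

# squared Euclidean distance, as in the cluster-plot slide
for name, feats in known.items():
    d = np.sum((novel - feats) ** 2)
    print(name, d)    # smallest distance ("tiger") -> generalize from that stimulus
```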
