
University Studies 15A: Consciousness I



Presentation Transcript


  1. University Studies 15A: Consciousness I. Neural Network Modeling (Round 2)

  2. Let us begin again with the problem that neuroscientists confronted with the 100-step rule. That is, they knew that whatever the brain was doing, it was using massive parallelism to produce responses in no more than 100 sequential steps of neurons firing. So, we have a brain:

  3. If we unfold and flatten the neocortex, we get two sheets of interconnected, six-layered neuronal assemblies, joined by the corpus callosum. The work of the brain is done by neuronal clusters becoming activated and activating additional neuronal clusters in turn.

  4. Experiential Level: Seeing, hearing, remembering, deciding, acting Brain Level: One set of neurons becomes activated, activating another set, which in turn activates yet another set. This continues through 100 steps of transmitted activation. For example:

  5. Information about light comes in from the retina to the primary visual cortex:

  6. The primary visual cortex passes the activation on to higher processing layers:

  7. Each layer processes the activation by aggregating large numbers of simple patterns into a smaller number of more complex patterns. Remember that we started with just angled line segments in the “simple cells” of V1. What we “see” is a complex reconstruction built from many layers of input that had been divided into separate streams and reassembled. And keep those feedback connections in mind.

  8. To construct what we “see,” activation passes from the visual cortex to layers of neurons that synthesize the input from different sensory sources:

  9. The neural clusters embodying the Visual System then continue further and connect to those that embody the Semantic Systems and visual object recognition. Despite the many processing stages, there are fewer than one hundred layers in all. Because the brain processes all the activation information in parallel, activation passes quickly from layer to layer.

  10. So, when you see this picture, your visual system very quickly uses the feedback connections from higher memory of objects and draws on your knowledge of Dalmatians to fill in the missing information.

  11. Experiential Level: Seeing, hearing, remembering, deciding, acting Brain Level: One set of neurons becomes activated, activating another set, which in turn activates yet another set. This continues through 100 steps of transmitted activation. How the Trick is Done: Neural Networks

  12. Each level of processing in the brain is a “cell assembly,” that is, a layer of neurons.

  13. This is a cell assembly of neurons in a fly’s retina.

  14. This picture shows the connection of one layer of neurons to the next.

  15. The connection of neurons of one layer to those of the next is through the synaptic junction. Various factors control the strength of the connection between neurons at the synaptic junction: the number of vesicles of neurotransmitter in the “sending” neuron, structural changes in the junction gap, and the number of receptors on the “receiving” neuron. All of these can change.

  16. Back to the Neuron and our Schematic Model of one

  17. Our Schematic Representation of Layers of Cell Assemblies: [Diagram: Layer A neurons a(1), a(2), …, a(i), …, a(n) connect to Layer B neurons b(1), …, b(j), …, b(m), which produce outputs O1, …, Oj, …, Om.] The strength of the synaptic connection between neurons in the two layers, a(i) and b(j), is represented by w(i,j).

  18. This set of weights defines a weighting matrix of dimension (m, n), with one row for each neuron in Layer B and one column for each neuron in Layer A: W(m,n) = [w(i,j)], where w(i,j) occupies row j, column i.

  19. Experiential Level: Seeing, hearing, remembering, deciding, acting. Brain Level: One set of neurons becomes activated, activating another set, which in turn activates yet another set. This continues through 100 steps of transmitted activation. How the Trick is Done: Neural Networks. At each step, the activation of Layer B derives from the sum of the synaptic inputs from Layer A, each multiplied by the strength of its synaptic junction. Or: b = W ∙ a, where a and b are the vectors of activations in Layers A and B. Since all the neurons in Layer B receive input at the same time, the activation of the entire layer is calculated simultaneously.
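
To make the arithmetic concrete, here is a minimal sketch in Python with NumPy (not the course's own software); the layer sizes, activations, and weights are illustrative assumptions.

```python
import numpy as np

# Illustrative sizes: Layer A has n = 4 neurons, Layer B has m = 3.
a = np.array([1.0, 0.0, 0.5, 1.0])      # activations of Layer A
W = np.array([[0.2, 0.8, 0.0, 0.1],     # W[j, i] = w(i,j): strength of the
              [0.5, 0.1, 0.9, 0.0],     # synapse from a(i) to b(j)
              [0.3, 0.3, 0.3, 0.3]])

# One matrix product computes every b(j) at once, mirroring the way all
# the neurons in Layer B receive their input simultaneously.
b = W @ a
print(b)                                # activations of Layer B
```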

  20. Experiential Level: Learning and memory. Brain Level: Neurons in one cell assembly change the strength of their connections to neurons in the next cell assembly by changing the structure of the synaptic connection. How we represent memory and learning in Neural Network models: Memory: all memory resides in the Weighting Matrices that represent the structure of synaptic connections in the system. Activation-based Learning: changing weights in the Weighting Matrices, using Hebb’s Rule: ∆w(i,j) = a(i) b(j).
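
As a hedged sketch, Hebb's Rule becomes a one-line weight update in NumPy; the learning-rate parameter lr is an illustrative addition, not something the slides specify.

```python
import numpy as np

def hebbian_update(W, a, b, lr=0.1):
    # Hebb's Rule: each weight grows with the product of the activations
    # it connects, delta w(i,j) = lr * a(i) * b(j).
    return W + lr * np.outer(b, a)      # rows index Layer B, columns Layer A

# One activation step followed by one learning step.
a = np.array([1.0, 0.0, 0.5])            # Layer A activations (n = 3)
W = np.random.uniform(0.0, 1.0, (2, 3))  # two Layer-B neurons
b = W @ a                                # activation rule: b = W . a
W = hebbian_update(W, a, b)              # neurons that fire together wire together
```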

  21. At every level, as the brain passes activation from layer to layer, the layers adjust their patterns of synaptic strength.

  22. At every level, the boxes representing functional units in the brain actually have their own internal structures of cell assemblies, and these also have their own changing patterns of synaptic connections, their own W.

  23. Experiential Level: “Things” in memory: apples, houses, words, ideas. Learned abilities: riding a bicycle. Brain Level: They are all patterns of synaptic connections. Modeling: It is all the Ws. This has implications, because the layers in a network operate as a system rather than as independent neurons.

  24. Remember our simple set of artificial neurons: Sixteen input units are connected to two output units. Only two input units are active at a time, and they must be horizontal or vertical neighbors. Only one output unit can be active at a time (inhibition is marked by the black dots). [Figure: the network’s state after Trial 1, Trial 2, and Trial 3.]

  25. We trained this network in a simulation. [Figure: the learned connection patterns after Trial 1, Trial 2, and Trial 3.] If one used this as a “perception” unit that passed its internal state on to other layers, those other layers would know of only two “objects” activated by the input layer. How it “sees” the 16 input units varies from Trial 1 to Trial 3, but it always divides the input space into just two “things” as patterns of connection. A minimal sketch of how such a partition can be learned appears below.
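
To show how such a two-way partition can arise, here is a minimal competitive-learning sketch using a standard winner-take-all rule; the update rule, learning rate, and training pairs are assumptions, not the course's actual simulation.

```python
import numpy as np

rng = np.random.default_rng(0)

# 16 input units feed 2 output units; mutual inhibition is modeled by
# letting only the most strongly activated output unit "win" each trial.
W = rng.uniform(0.0, 1.0, size=(2, 16))
W /= W.sum(axis=1, keepdims=True)       # normalize each output unit's weights

def pair(i, j):
    """Input pattern with exactly two active (neighboring) units."""
    x = np.zeros(16)
    x[i] = x[j] = 1.0
    return x

# Neighbors on a 4x4 grid: horizontal pairs differ by 1, vertical by 4.
patterns = [pair(0, 1), pair(1, 2), pair(4, 5), pair(8, 12), pair(10, 14)]

for _ in range(50):                     # training passes
    for x in patterns:
        winner = np.argmax(W @ x)       # winner-take-all competition
        # Move the winner's weights toward the current input pattern.
        W[winner] += 0.2 * (x / x.sum() - W[winner])

# After training, every input pair maps onto one of just two "objects".
for x in patterns:
    print(np.argmax(W @ x))
```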

  26. ALL “objects,” from cars and people to concepts like cuteness or justice, are mutually defined partitions of very high-dimensional input spaces.

  27. In a word, this is your brain at work:

  28. Neural networks extract patterns and divide an input space. This can lead to odd results with implications for biological neural networks. James McClelland tested the ability of a neural network to build a classification tree based on closeness of attributes. He built a network that could handle simple property statements (one possible encoding is sketched below) like: Robin can grow, move, fly. Oak can grow. Salmon has scales, gills, skin. Robin has wings, feathers, skin. Oak has bark, branches, leaves, roots.
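
Here is one way the property statements could be encoded as training data, sketched under the assumption of one-hot input codes in the style of the Rumelhart/McClelland semantic network; all variable names are illustrative.

```python
# (item, relation) -> set of attributes, taken from the slide's statements.
facts = {
    ("robin",  "can"): {"grow", "move", "fly"},
    ("oak",    "can"): {"grow"},
    ("salmon", "has"): {"scales", "gills", "skin"},
    ("robin",  "has"): {"wings", "feathers", "skin"},
    ("oak",    "has"): {"bark", "branches", "leaves", "roots"},
}

items      = sorted({item for item, _ in facts})
relations  = sorted({rel for _, rel in facts})
attributes = sorted(set().union(*facts.values()))

def one_hot(value, vocabulary):
    return [1.0 if v == value else 0.0 for v in vocabulary]

# Each training example: item code + relation code -> multi-hot attributes.
examples = [
    (one_hot(item, items) + one_hot(rel, relations),
     [1.0 if attr in attrs else 0.0 for attr in attributes])
    for (item, rel), attrs in facts.items()
]
```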

  29. Baars and Gage discuss this and give the design:

  30. Neural Network software turns this sort of design into a computer program to simulate the network:

  31. When one runs the simulation, the result is a classification tree that groups the items well. What Baars and Gage do not discuss is the next step: McClelland fed the system facts about penguins: Penguin can swim, move, grow. Penguin has wings, feathers, skin.

  32. The results were profoundly different depending on whether the facts about penguins were interleaved with facts about the other objects or presented alone, all penguins all the time: interleaving let the network add the new facts while keeping its old knowledge, whereas penguin-only training overwrote what it had already learned. We’ll come back to this result when we discuss memory and sleep.
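
A toy illustration of the difference between the two schedules, using a one-layer network trained by the delta rule; the random "facts", sizes, and parameters are stand-ins, not McClelland's actual data or network.

```python
import numpy as np

rng = np.random.default_rng(1)

def delta_train(W, X, Y, lr=0.1, steps=500):
    """Train W on (input, target) pairs drawn at random from X, Y."""
    for _ in range(steps):
        k = rng.integers(len(X))
        err = Y[k] - W @ X[k]
        W += lr * np.outer(err, X[k])
    return W

# Random codes stand in for (item, relation) -> attribute facts.
old_X, old_Y = rng.normal(size=(4, 8)), rng.normal(size=(4, 5))
new_X, new_Y = rng.normal(size=(1, 8)), rng.normal(size=(1, 5))   # "penguin"

def old_error(W):
    return np.mean((old_Y - old_X @ W.T) ** 2)

base = delta_train(np.zeros((5, 8)), old_X, old_Y)   # learn the old facts

# Massed: all penguins, all the time -- updates are free to overwrite W.
massed = delta_train(base.copy(), new_X, new_Y)

# Interleaved: penguin facts mixed with rehearsal of the old facts.
inter = delta_train(base.copy(), np.vstack([old_X, new_X]),
                    np.vstack([old_Y, new_Y]))

print(f"old-fact error after massed training:      {old_error(massed):.3f}")
print(f"old-fact error after interleaved training: {old_error(inter):.3f}")
```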

  33. Artificial neural networks like the “penguin learner” allow researchers to model the behavior of neural systems. An important aspect of brain networks that people have explored through artificial networks is recurrency: nodes in a network loop back on themselves. Simulations show that one absolutely crucial feature of recurrent networks is the ability to complete partial patterns. The image of the Dalmatian is very incomplete, but the brain feeds knowledge of Dalmatians back to the visual system, which then produces a yet more complete view and cycles in loops until perception settles into “Dalmatian.”
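
A minimal Hopfield-style sketch of recurrent pattern completion, assuming Hebbian storage and sign-threshold units; it is an illustrative stand-in for the brain's feedback loops, not a model of the visual system itself.

```python
import numpy as np

# Two stored memories over 8 binary (+1/-1) units.
patterns = np.array([
    [1, 1, 1, 1, -1, -1, -1, -1],   # stands in for "Dalmatian"
    [1, -1, 1, -1, 1, -1, 1, -1],   # some other stored memory
])

# Hebbian storage: each pattern is written into the weights as an
# outer product; self-connections on the diagonal are removed.
W = sum(np.outer(p, p) for p in patterns).astype(float)
np.fill_diagonal(W, 0)

# Degraded cue: the last two units of the "Dalmatian" pattern are wrong.
state = patterns[0].copy()
state[6:] = 1

# Recurrent settling: feed the state back through the weights in a loop
# until it stops changing -- the network fills in the corrupted units.
for _ in range(10):
    new = np.sign(W @ state).astype(int)
    new[new == 0] = 1                # break ties toward +1
    if np.array_equal(new, state):
        break
    state = new

print(np.array_equal(state, patterns[0]))   # True: pattern completed
```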

  34. These sorts of pattern-completing, self-modifying networks appear throughout the brain. Baars and Gage stress that 90% of the connections between the thalamus and V1 go from V1 to the thalamus as re-entrant connections rather than feed-forward input. Many neural net modelers have developed systems based on re-entrant brain connectivity:

  35. To sum up: activation propagates as b = W ∙ a; learning follows Hebb’s Rule, ∆w(i,j) = a(i) b(j); and memory resides in the Ws. It’s all done with neural networks.
