
PSY105 Neural Networks 5/5


Presentation Transcript


  1. PSY105 Neural Networks 5/5 5. “Function – Computation – Mechanism”

  2. Lecture 1 recap • We can describe patterns at one level of description that emerge due to rules followed at a lower level of description. • Neural network modellers hope that we can understand behaviour by creating models of networks of artificial neurons.

  3. Lecture 2 recap • Simple model neurons • Transmit a signal of 0 or 1 (or anything in between) • Receive information from other neurons • Weight this information • Can be used to perform any computation
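
A minimal sketch of such a model neuron in Python (an illustration, not the course's actual code): it weights its inputs, sums them, and squashes the result to a signal between 0 and 1.

```python
import math

def neuron_output(inputs, weights):
    """Weighted sum of inputs, squashed by a logistic function to (0, 1)."""
    total = sum(x * w for x, w in zip(inputs, weights))
    return 1.0 / (1.0 + math.exp(-total))

# Example: two active inputs and one silent input; output is between 0 and 1.
print(neuron_output([1.0, 0.0, 1.0], [0.5, -0.3, 0.8]))  # ~0.79
```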

  4. Lecture 3 recap • Classical conditioning is a simple form of learning which can be understood as an increase in the weight (‘associative strength’) between two stimuli (one of which is associated with an ‘unconditioned response’)

  5. Lecture 4 recap • A Hebb Rule for weight change between two neurons is: • Δweight = activity 1 × activity 2 × learning rate constant • To use this rule to associate two stimuli that are separated in time, the neural activity associated with each stimulus must persist in time. • This can be implemented as an ‘eligibility trace’
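
A sketch of the rule as code (hypothetical names, assuming activities in [0, 1]): the weight change is simply the product of the two activities and a learning rate constant.

```python
def hebb_update(weight, activity_1, activity_2, learning_rate=0.1):
    """Hebb rule: weight change = activity_1 * activity_2 * learning rate."""
    return weight + learning_rate * activity_1 * activity_2

w = 0.0
for trial in range(5):          # five pairings with both neurons fully active
    w = hebb_update(w, 1.0, 1.0)
print(w)                        # 0.5: the weight grows with every co-activation
```

Note that with non-negative activities this rule can only ever increase the weight; nothing in it makes the weight fall again.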

  6. I present you with a robot which uses a simple neural network to acquire classically conditioned responses. It can, for example, learn to associate a warning stimulus with an upcoming wall and hence turn around before it reaches the wall. Describe an experiment which you would do to test the details of how the robot learns. Say what you would do, and what aspect(s) of the robot's learning the results would inform you of, and why.

  7. The problem of continuous time [Diagram: Stimulus 1 and Stimulus 2 occurring at separate times, with no overlap]

  8. Traces [Diagram: Stimulus 1 and Stimulus 2, each leaving a persisting trace]
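
One simple way to implement such a trace (a sketch assuming exponential decay): the trace jumps to its maximum when the stimulus occurs and then decays by a constant factor each time step, so activity persists after the stimulus ends.

```python
def update_trace(trace, stimulus_present, decay=0.9):
    """Eligibility trace: reset to 1.0 on a stimulus, otherwise decay."""
    trace *= decay
    if stimulus_present:
        trace = 1.0
    return trace

trace = 0.0
for t in range(6):
    trace = update_trace(trace, stimulus_present=(t == 0))
    print(t, round(trace, 3))   # 1.0, 0.9, 0.81, ... still non-zero later
```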

  9. Consequences of this implementation • Intensity of CS stimulus • Duration of CS stimulus • Intensity of UCS stimulus • Duration of UCS stimulus • Separation in time of CS and UCS • The order in which the CS and UCS occur • (cf. Rescorla-Wagner discrete time model)
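
For comparison, a sketch of the Rescorla-Wagner discrete-time update mentioned in the last bullet (standard parameter names; this is the textbook model, not the robot's rule): each CS's associative strength V moves toward λ, the maximum strength the UCS supports, in proportion to the prediction error.

```python
def rescorla_wagner_trial(V, lam=1.0, alpha=0.3, beta=1.0):
    """One trial: each CS changes by alpha * beta * (lambda - total prediction)."""
    error = lam - sum(V.values())
    return {cs: v + alpha * beta * error for cs, v in V.items()}

V = {"CS1": 0.0}
for trial in range(10):           # repeated CS1-UCS pairings
    V = rescorla_wagner_trial(V)
print(round(V["CS1"], 3))         # ~0.972: approaches the asymptote lambda = 1.0
```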

  10. We have designed an information processing system that learns associations [Graph: associative strength rising over trials] Sutton, R. S., & Barto, A. G. (1990). Time-derivative models of Pavlovian reinforcement. In M. Gabriel & J. Moore (Eds.), Learning and Computational Neuroscience: Foundations of Adaptive Networks (pp. 497-537). MIT Press.

  11. Without continued pairing [Graph: association over trials, with the point where pairing stops marked]

  12. Without continued pairing -> extinction [Graph: association declines after pairing stops]
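
A sketch of why the curve falls, assuming an error-correcting update of the Rescorla-Wagner type (the plain Hebb rule above can only grow): once pairing stops, the CS predicts a UCS that never arrives, the prediction error turns negative, and the association decays, i.e. extinction.

```python
def trial(V, ucs_present, alpha=0.3):
    """One trial: strength moves toward 1.0 if the UCS occurs, toward 0.0 if not."""
    lam = 1.0 if ucs_present else 0.0
    return V + alpha * (lam - V)

V, history = 0.0, []
for t in range(40):
    V = trial(V, ucs_present=(t < 20))   # pairing stops after trial 20
    history.append(round(V, 2))
print(history)   # rises toward 1.0, then extinguishes back toward 0.0
```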

  13. Analysis of information processing systems • Function (‘computational level’) • Computation (‘algorithmic level’) • Mechanism (‘implementational level’) Marr, D. (1982). Vision. San Francisco: W. H. Freeman.

  14. Marrian analysis of classical conditioning • Function: learn to predict events based on past experience • Computation: Stimuli evoke ‘eligibility traces’. Hebb Rule governs changes in weights [+ other assumptions that are always needed when you try to make a computational recipe] • Mechanism: At least one response neuron, one unconditioned stimulus neuron, and one neuron for each conditioned stimulus

  15. Kim, J. J., & Thompson, R. F. (1997). Cerebellar circuits and synaptic mechanisms involved in classical eyeblink conditioning. Trends in Neurosciences, 20(4), 177-181.

  16. Marrian analysis: a simple example • Function • Computation • Mechanism

  17. Theory <-> Experiments Synthesis <-> Analysis

  18. Our classical conditioning networks [Network diagram: stimulus neurons CS1, CS2 and UCS feeding response neurons, with a learnable S-S link]
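
A sketch of how such a network could be wired (assumed weights and threshold, for illustration only): the UCS drives the response directly, while the CS inputs start with zero weight and can only trigger the response once learning has strengthened them.

```python
weights = {"UCS": 1.0, "CS1": 0.0, "CS2": 0.0}   # UCS pre-wired; CS links learnable

def response(activities, threshold=0.5):
    """Response neuron fires if the weighted input crosses the threshold."""
    total = sum(weights[s] * a for s, a in activities.items())
    return 1.0 if total > threshold else 0.0

print(response({"UCS": 1.0, "CS1": 0.0, "CS2": 0.0}))  # 1.0: UCS alone responds
print(response({"UCS": 0.0, "CS1": 1.0, "CS2": 0.0}))  # 0.0 until CS1's weight is learned
```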

  19. Internal representation of the conditioned stimulus

  20. The lab : using Marrian analysis to make predictions

  21. Function • What is the purpose of learning for an animal? • Does our model behave in a sensible (‘adaptive’) way when it follows our rule? • Is the rule sufficient to explain animal learning? • Test: think of a way you would want the model/robot to behave, then test whether it does

  22. Computation • Intensity of CS stimulus • Duration of CS stimulus • Intensity of UCS stimulus • Duration of UCS stimulus • Separation in time of CS and UCS • The order in which the CS and UCS occur • (cf. Rescorla-Wagner discrete time model) • The learning rate • The rate of decay of the trace • The frequency of pairing
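
A sketch of how one of these manipulations could be run as a simulated experiment (a hypothetical setup combining the trace and Hebb rule sketches above): vary the CS-UCS separation and measure the weight learned, predicting weaker learning at longer separations.

```python
def final_weight(separation, trials=50, decay=0.9, rate=0.1):
    """Weight after repeated pairings with a given CS-UCS gap (in time steps)."""
    weight = 0.0
    for _ in range(trials):
        cs_trace = decay ** separation   # CS trace remaining when the UCS arrives
        weight += rate * cs_trace * 1.0  # Hebb update with UCS activity 1.0
    return weight

for sep in [0, 2, 5, 10]:
    print(sep, round(final_weight(sep), 3))   # learning weakens as separation grows
```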

  23. Mechanism [Network diagram: stimulus neurons CS1, CS2 and UCS, response neurons, and the S-S link]

  24. What is your prediction? What will you do to the rule or the environment? • How will you know if it has been confirmed or falsified?

  25. π ψ Ω
