
Different classes of abstract models: Supervised learning (EX: Perceptron)


Presentation Transcript


  1. Different classes of abstract models: • Supervised learning (EX: Perceptron) • Reinforcement learning • Unsupervised learning (EX: Hebb rule) • Associative memory (EX: Matrix memory)

  2. Abstraction – so what is a neuron? • Threshold unit (McCulloch-Pitts): y = Θ(∑i wi xi − θ) • Linear: y = ∑i wi xi • Sigmoid: y = 1 / (1 + e^(−∑i wi xi))
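As a concrete illustration of these three abstractions, here is a minimal sketch in Python/NumPy (the course demos use MATLAB; the input, weights and threshold below are arbitrary illustrative values):

```python
import numpy as np

def threshold_unit(x, w, theta):
    """McCulloch-Pitts unit: fires (1) if the weighted input sum reaches the threshold."""
    return 1.0 if np.dot(w, x) >= theta else 0.0

def linear_unit(x, w):
    """Linear neuron: the output is simply the weighted sum of the inputs."""
    return np.dot(w, x)

def sigmoid_unit(x, w):
    """Sigmoid neuron: the weighted sum passed through a smooth saturating nonlinearity."""
    return 1.0 / (1.0 + np.exp(-np.dot(w, x)))

x = np.array([1.0, 0.0, 1.0])          # example input pattern
w = np.array([0.5, -0.3, 0.8])         # example synaptic weights
print(threshold_unit(x, w, theta=1.0), linear_unit(x, w), sigmoid_unit(x, w))
```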

  3. THE PERCEPTRON (Classification) Threshold unit: y^μ = Θ(∑i wi xi^μ − θ), where y^μ is the output for input pattern x^μ, wi are the synaptic weights and d^μ is the desired output. [Figure: a threshold unit with inputs weighted by w1 … w5.]

  4. AND: a threshold unit with weights 1, 1 and bias −1.5. [Figure: the four binary input points with a separating line.] Linearly separable.

  5. OR: a threshold unit with weights 1, 1 and bias −0.5. [Figure: the four binary input points with a separating line.] Linearly separable.
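A quick check of these two gates, plugging the weights and biases from slides 4 and 5 into a threshold unit (a minimal Python/NumPy sketch; the convention "fire when w·x + b ≥ 0" is assumed):

```python
import numpy as np

def threshold(x, w, b):
    # Fire (1) when the weighted sum plus bias is non-negative, otherwise 0
    return int(np.dot(w, x) + b >= 0)

w = np.array([1.0, 1.0])
for x in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    x = np.array(x, dtype=float)
    print(x, "AND:", threshold(x, w, -1.5), "OR:", threshold(x, w, -0.5))
```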

  6. Perceptron learning rule: wi → wi + η (d^μ − y^μ) xi^μ. A convergence proof exists. [Figure: a threshold unit with inputs weighted by w1 … w5.]

  7. Show examples of Perceptron learning with a demo program (a sketch follows below) • Show the program itself • Talk about linear separability, define the dot product, show on computer.
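In place of the MATLAB demo, a minimal Python/NumPy sketch of the perceptron learning rule from slide 6, trained on the (linearly separable) AND problem; the learning rate, initialization and epoch limit are arbitrary choices:

```python
import numpy as np

# AND problem: each input gets a constant 1 appended so the last weight acts as a bias
X = np.array([[0, 0, 1], [0, 1, 1], [1, 0, 1], [1, 1, 1]], dtype=float)
d = np.array([0, 0, 0, 1], dtype=float)      # desired outputs

rng = np.random.default_rng(0)
w = rng.normal(size=3)                        # random initial weights
eta = 0.1                                     # learning rate

for epoch in range(100):
    errors = 0
    for x, target in zip(X, d):
        y = float(np.dot(w, x) >= 0)          # threshold unit output
        w += eta * (target - y) * x           # perceptron rule: w <- w + eta*(d - y)*x
        errors += int(y != target)
    if errors == 0:                           # converged: every pattern classified correctly
        break

print("learned weights:", w, "after", epoch + 1, "epochs")
```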

  8. Unsupervised learning – the “Hebb” rule: Δwi = η xi y, where xi are the inputs and the output y is assumed linear: y = ∑j wj xj. Results in 2D.

  9. Example of Hebb in 2D [Figure: 2D input cloud with the weight vector w] • (Note: here the inputs have a mean of zero) • Show program, tilt the axis, look at the divergence
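A sketch of such a program in Python/NumPy (the correlated, zero-mean Gaussian input cloud is an illustrative assumption): with the plain Hebb rule the weight direction settles along the principal axis of the inputs while the weight norm keeps growing, which is the divergence to look at.

```python
import numpy as np

rng = np.random.default_rng(1)
C = np.array([[1.0, 0.8],                 # illustrative input covariance ("tilted" cloud)
              [0.8, 1.0]])
X = rng.multivariate_normal(mean=[0.0, 0.0], cov=C, size=5000)   # zero-mean 2D inputs

w = rng.normal(size=2) * 0.01             # small random initial weights
eta = 0.001
for x in X:
    y = np.dot(w, x)                      # linear neuron
    w += eta * y * x                      # plain Hebb rule: dw = eta * x * y

print("weight norm:", np.linalg.norm(w))              # grows without bound over time
print("weight direction:", w / np.linalg.norm(w))     # ~ principal eigenvector of C
```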

  10. Why do we get these results? • On the board: • Solve a simple linear first-order ODE • Fixed points and their stability for a nonlinear ODE • Eigenvalues, eigenvectors

  11. In the simplest case, the change in synaptic weight w is: Δwi = η xi y, where x are the input vectors and y is the neural response. Assume for simplicity a linear neuron: y = ∑j wj xj. So we get: Δwi = η ∑j wj xj xi. Now take an average with respect to the distribution of inputs to get: ⟨Δwi⟩ = η ∑j Qij wj, with Qij = ⟨xi xj⟩.

  12. If a small change Δw occurs over a short time Δt then: dw/dt = η Q w (in matrix notation). If ⟨x⟩ = 0, Q is the covariance matrix. What then is the solution of this simple first-order linear ODE? (Show on board)
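For reference, the board solution in brief: expanding w in the eigenvectors of Q,

```latex
\frac{d\mathbf{w}}{dt} = \eta\, Q\, \mathbf{w}
\quad\Longrightarrow\quad
\mathbf{w}(t) = \sum_k a_k(0)\, e^{\eta \lambda_k t}\, \mathbf{e}_k ,
\qquad Q\,\mathbf{e}_k = \lambda_k \mathbf{e}_k ,\;\; a_k(0) = \mathbf{e}_k \cdot \mathbf{w}(0),
```

so every component grows exponentially, and the component along the eigenvector with the largest eigenvalue grows fastest – which is why the plain Hebb weights both diverge and align with the principal component of the inputs.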

  13. Show program of Hebb rule again • Show effect of saturation limits • Possible solution – normalization • Oja (PCA) rule

  14. [Plot: weight-vector trajectory in the (W1, W2) plane, axes from −1 to 1.] Show PCA program. OK – some more programming: convert the Hebb program to an Oja-rule program.
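A sketch of that conversion in Python/NumPy: the same loop as the Hebb program, with Oja's normalizing correction term added (the input distribution below is the same illustrative one as before):

```python
import numpy as np

rng = np.random.default_rng(2)
C = np.array([[1.0, 0.8],
              [0.8, 1.0]])
X = rng.multivariate_normal([0.0, 0.0], C, size=5000)

w = rng.normal(size=2) * 0.01
eta = 0.01
for x in X:
    y = np.dot(w, x)
    w += eta * y * (x - y * w)        # Oja rule: Hebb term eta*y*x minus the decay eta*y^2*w

print("weight norm:", np.linalg.norm(w))           # converges to ~1 instead of diverging
print("weight direction:", w / np.linalg.norm(w))  # first principal component (PCA)
```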

  15. So OK – simulations, MATLAB, math, etc. What does this have to do with biology, with the brain?

  16. Another unsupervised learning model: the BCM theory of synaptic plasticity (also called the BCM theory of cortical plasticity). BCM stands for Bienenstock, Cooper and Munro; it dates back to 1982. It was designed to account for experiments which demonstrated that the development of orientation-selective cells depends on rearing in a patterned environment.

  17. (Bienenstock, Cooper, Munro 1982; Intrator, Cooper 1992) BCM theory requires: • Bidirectional synaptic modification (LTP/LTD) • A sliding modification threshold • The fixed points depend on the environment, and in a patterned environment only selective fixed points are stable. [Figure: the BCM φ-curve, with LTD below the modification threshold and LTP above it.]

  18. The integral form of the average, θm(t) = (1/τ) ∫_−∞^t c^(1+p)(t′) e^(−(t−t′)/τ) dt′, is equivalent to this differential form: τ dθm/dt = c^(1+p) − θm. Note: it is essential that θm is a superlinear function of the history of c, that is, the exponent 1+p has p > 0. Note also that in the original BCM formulation (1982) the threshold was a power of the average, θm = ⟨c⟩^(1+p), rather than the average of a power, θm = ⟨c^(1+p)⟩.

  19. What is the outcome of the BCM theory? Assume a neuron with N inputs (N synapses), and an environment composed of N different input vectors. An N = 2 example: two input vectors x1 and x2. What are the stable fixed points of m in this case?

  20. (Notation: y^k = m·x^k for input pattern x^k) Note: every time a new input is presented, m changes, and so does θm. What are the fixed points? What are the stable fixed points?

  21. The integral form of the average, θm(t) = (1/τ) ∫_−∞^t c^(1+p)(t′) e^(−(t−t′)/τ) dt′, is equivalent to this differential form: τ dθm/dt = c^(1+p) − θm. Alternative form: Show matlab example (a sketch follows below):
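A sketch of such an example in Python/NumPy (standing in for the MATLAB demo), using the quadratic form φ(y, θm) = y(y − θm) and the sliding threshold τ dθm/dt = y² − θm; the two input patterns, learning rate and time constant are illustrative choices. The neuron typically ends up selective: one pattern's response sits near θm, the other's near 0.

```python
import numpy as np

rng = np.random.default_rng(3)
patterns = np.array([[1.0, 0.2],          # two linearly independent input vectors
                     [0.2, 1.0]])

m = rng.normal(size=2) * 0.1              # synaptic weights
theta = 1.0                               # sliding modification threshold
eta, tau = 0.002, 20.0

for t in range(50000):
    x = patterns[rng.integers(2)]         # present one of the two patterns at random
    y = np.dot(m, x)                      # linear response (c = y)
    m += eta * y * (y - theta) * x        # BCM rule: dm = eta * x * phi(y, theta_m)
    theta += (y**2 - theta) / tau         # sliding threshold: tau * dtheta = y^2 - theta

print("responses to the two patterns:", [float(np.dot(m, p)) for p in patterns])
print("theta:", theta)
```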

  22. Two examples with N = 5. Note: the stable fixed point is such that for one pattern j, y^j = ∑i wi xi^j = θm, while for the other patterns y^(i≠j) = 0. (Note: here c = y.)

  23. BCM Theory: Stability • One dimension • Quadratic form • Instantaneous limit

  24. BCM Theory: Selectivity • Two dimensions • Two patterns • Quadratic form • Averaged threshold • Fixed points

  25. BCM Theory: Selectivity • Learning equation • Four possible fixed points: (y1, y2) = (0, 0) (unselective), (θm, 0) (selective), (0, θm) (selective), (θm, θm) (unselective) • Threshold

  26. Summary • The BCM rule is based on two differential equations – what are they? • When there are two linearly independent inputs, what will the BCM stable fixed points be? What will θ be? • When there are K independent inputs, what are the stable fixed points? What will θ be? • Bonus project – 10 extra points for the section: write MATLAB code for a BCM neuron trained with 2 inputs in 2D. Include a 1-page write-up; you will also meet with me for about 15 minutes to explain the code and results.

  27. Associative memory: map famous images (input) to names (desired output), e.g. Albert, Marilyn, …, Harel. • Feed-forward matrix networks • Attractor networks (auto-associative)

  28. Linear matrix memory: N input neurons, M output neurons, P input–output pairs (x^μ, y^μ). 1. Set the synaptic weights by the Hebb rule: Wij = ∑_μ yi^μ xj^μ. 2. Present an input – the output is a linear operation: (output)i = ∑j Wij xj. [Figure: an output neuron receiving inputs through weights w1μ … wNμ.]

  29. 1. Hebb rule: Wij = ∑_μ yi^μ xj^μ. 2. Linear output: yi = ∑j Wij xj. Here you are on your own – write a matlab program to do this (a sketch follows below). Tip – use large N, small P, and start with orthogonal patterns.
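A sketch of that program in Python/NumPy (standing in for MATLAB), following the tip – large N, small P, orthonormal input patterns; the sizes and random pattern construction are illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(4)
N, M, P = 200, 50, 5                           # large N, small P, M output neurons

# Orthonormal input patterns: columns of the Q factor of a random N x P matrix
X = np.linalg.qr(rng.normal(size=(N, P)))[0]   # shape (N, P)
Y = rng.normal(size=(M, P))                    # desired output for each input pattern

# 1. Hebb rule: W_ij = sum_mu y_i^mu * x_j^mu
W = Y @ X.T

# 2. Linear recall: present an input pattern, the output is W x
mu = 2
recalled = W @ X[:, mu]
print("max recall error:", np.max(np.abs(recalled - Y[:, mu])))   # ~1e-15 for orthonormal inputs
```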

  30. A low-D example of a linear matrix memory, do on the board (see the derivation below). • Use the simple Hebb rule between input and desired output. • Orthogonal inputs • Non-orthogonal inputs • Give examples • Might require other rules: covariance, perceptron.
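For the board example, the key line (assuming the Hebb weights Wij = ∑_μ yi^μ xj^μ from the previous slides) is:

```latex
(W x^{\nu})_i \;=\; \sum_{\mu} y_i^{\mu}\,(x^{\mu}\!\cdot\!x^{\nu})
\;=\; \underbrace{y_i^{\nu}\,(x^{\nu}\!\cdot\!x^{\nu})}_{\text{signal}}
\;+\; \underbrace{\sum_{\mu\neq\nu} y_i^{\mu}\,(x^{\mu}\!\cdot\!x^{\nu})}_{\text{crosstalk}}
```

With orthonormal inputs the crosstalk term vanishes and recall is exact; with non-orthogonal inputs it does not, which is why other rules (covariance, perceptron) may be needed.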

  31. Formal neural networks can accomplish many tasks, for example: • Perform complex classification • Learn arbitrary functions • Account for associative memory • Some applications: robotics, character recognition, speech recognition, medical diagnostics. • This is not neuroscience, but it is motivated loosely by neuroscience and carries important information for neuroscience as well. For example: memory, learning and some aspects of development are assumed to be based on synaptic plasticity.
