
Associative Learning



  1. Associative Learning

  2. Simple Associative Network: a single hard limit neuron, $a = \mathrm{hardlim}(wp + b)$, where the input $p$ indicates whether a stimulus is present ($p = 1$) or absent ($p = 0$).

  3. Banana Associator: the sight of the banana is the unconditioned stimulus $p^0$ (fixed weight $w^0$), and the smell of the banana is the conditioned stimulus $p$ (trainable weight $w$).

  4. Unsupervised Hebb Rule: $w_{ij}(q) = w_{ij}(q-1) + \alpha\, a_i(q)\, p_j(q)$. Vector form: $\mathbf{W}(q) = \mathbf{W}(q-1) + \alpha\, \mathbf{a}(q)\, \mathbf{p}^T(q)$. Training sequence: $\mathbf{p}(1), \mathbf{p}(2), \dots, \mathbf{p}(Q)$.
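A minimal NumPy sketch of the vector-form update (the helper name `hebb_update` is mine, not from the slides):

```python
import numpy as np

def hebb_update(W, p, a, alpha=1.0):
    """Unsupervised Hebb rule, vector form: W(q) = W(q-1) + alpha * a(q) p(q)^T."""
    return W + alpha * np.outer(a, p)

# one step on a 2-input, 1-neuron layer
W = np.zeros((1, 2))
p = np.array([1.0, 0.0])
a = np.array([1.0])
W = hebb_update(W, p, a)   # W becomes [[1., 0.]]
```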

  5. Banana Recognition Example. Initial weights: $w^0 = 1$ (fixed), $w(0) = 0$. Training sequence: $\{p^0(1) = 0,\ p(1) = 1\},\ \{p^0(2) = 1,\ p(2) = 1\},\ \dots$, with $\alpha = 1$. First iteration (sight fails): $a(1) = \mathrm{hardlim}(w^0 p^0(1) + w(0)p(1) - 0.5) = \mathrm{hardlim}(1 \cdot 0 + 0 \cdot 1 - 0.5) = 0$ (no response), so $w(1) = w(0) + a(1)p(1) = 0$.

  6. Example, continued. Second iteration (sight works): $a(2) = \mathrm{hardlim}(w^0 p^0(2) + w(1)p(2) - 0.5) = \mathrm{hardlim}(1 \cdot 1 + 0 \cdot 1 - 0.5) = 1$ (banana), so $w(2) = w(1) + a(2)p(2) = 1$. Third iteration (sight fails): $a(3) = \mathrm{hardlim}(w^0 p^0(3) + w(2)p(3) - 0.5) = \mathrm{hardlim}(1 \cdot 0 + 1 \cdot 1 - 0.5) = 1$ (banana). Banana will now be detected if either sensor works.
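The three iterations can be checked with a short script; this is a sketch under the slide's assumptions ($\alpha = 1$, fixed sight weight $w^0 = 1$, threshold 0.5):

```python
def hardlim(n):
    """Hard limit transfer function: 1 if n >= 0, else 0."""
    return 1.0 if n >= 0 else 0.0

w0, w, alpha = 1.0, 0.0, 1.0
# (p0 = sight, p = smell); sight fails on iterations 1 and 3
sequence = [(0, 1), (1, 1), (0, 1)]

for q, (p0, p) in enumerate(sequence, start=1):
    a = hardlim(w0 * p0 + w * p - 0.5)
    w = w + alpha * a * p                 # unsupervised Hebb rule
    print(f"a({q}) = {a:.0f}, w({q}) = {w:.1f}")
# prints a(1)=0, a(2)=1, a(3)=1 -- smell alone now triggers the banana response
```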

  7. Problems with Hebb Rule • Weights can become arbitrarily large • There is no mechanism for weights to decrease

  8. Hebb Rule with Decay: $\mathbf{W}(q) = \mathbf{W}(q-1) + \alpha\, \mathbf{a}(q)\, \mathbf{p}^T(q) - \gamma\, \mathbf{W}(q-1)$. This keeps the weight matrix from growing without bound, which can be demonstrated by setting both $a_i$ and $p_j$ to 1: at steady state $w_{\max} = (1-\gamma)\, w_{\max} + \alpha$, so $w_{\max} = \alpha/\gamma$.
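A quick numeric check of this bound (a sketch that just iterates the scalar update with $a_i = p_j = 1$):

```python
alpha, gamma = 1.0, 0.1
w = 0.0
for q in range(100):
    # Hebb with decay, with a_i = p_j = 1 at every step
    w = w + alpha * 1 * 1 - gamma * w
print(w)   # approaches alpha / gamma = 10
```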

  9. Example: Banana Associator with Decay ($\alpha = 1$, $\gamma = 0.1$). First iteration (sight fails): $a(1) = \mathrm{hardlim}(w^0 p^0(1) + w(0)p(1) - 0.5) = \mathrm{hardlim}(1 \cdot 0 + 0 \cdot 1 - 0.5) = 0$ (no response), so $w(1) = 0$. Second iteration (sight works): $a(2) = \mathrm{hardlim}(w^0 p^0(2) + w(1)p(2) - 0.5) = \mathrm{hardlim}(1 \cdot 1 + 0 \cdot 1 - 0.5) = 1$ (banana), so $w(2) = w(1) + a(2)p(2) - \gamma\, w(1) = 1$.

  10. Example, continued. Third iteration (sight fails): $a(3) = \mathrm{hardlim}(1 \cdot 0 + 1 \cdot 1 - 0.5) = 1$ (banana). Hebb rule: $w(3) = w(2) + a(3)p(3) = 2$, and the weight keeps growing without bound. Hebb with decay: $w(3) = w(2) + a(3)p(3) - \gamma\, w(2) = 1.9$, and the weight converges toward $\alpha/\gamma = 10$.
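The two trajectories can be compared over a longer run; this sketch assumes, purely for illustration, that the smell is always present and sight works on every other iteration:

```python
def hardlim(n):
    return 1.0 if n >= 0 else 0.0

alpha, gamma, w0 = 1.0, 0.1, 1.0
w_hebb = w_decay = 0.0
for q in range(1, 31):
    p0, p = (1.0 if q % 2 == 0 else 0.0), 1.0   # sight works every other step
    a_h = hardlim(w0 * p0 + w_hebb * p - 0.5)
    a_d = hardlim(w0 * p0 + w_decay * p - 0.5)
    w_hebb  += alpha * a_h * p                       # grows without bound
    w_decay += alpha * a_d * p - gamma * w_decay     # converges toward alpha/gamma = 10
print(w_hebb, w_decay)
```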

  11. Problem of Hebb with Decay • Associations will decay away if stimuli are not occasionally presented. If $a_i = 0$, then $w_{ij}(q) = (1-\gamma)\, w_{ij}(q-1)$. With $\gamma = 0.1$, this becomes $w_{ij}(q) = 0.9\, w_{ij}(q-1)$. Therefore the weight decays by 10% at each iteration where there is no stimulus.

  12. Instar (Recognition Network)

  13. Instar Operation. The instar will be active when ${}_1\mathbf{w}^T \mathbf{p} \ge -b$, or $\lVert {}_1\mathbf{w} \rVert\, \lVert \mathbf{p} \rVert \cos\theta \ge -b$. For normalized vectors, the largest inner product occurs when the angle between the weight vector and the input vector is zero, i.e. when the input vector equals the weight vector. The rows of a weight matrix represent patterns to be recognized.

  14. Vector Recognition. If we set $b = -\lVert {}_1\mathbf{w} \rVert\, \lVert \mathbf{p} \rVert$, the instar will only be active when $\theta = 0$. If we set $b > -\lVert {}_1\mathbf{w} \rVert\, \lVert \mathbf{p} \rVert$, the instar will be active for a range of angles. As $b$ is increased, more patterns (over a wider range of $\theta$) will activate the instar.
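A small sketch of this activation test; the unit vectors below are chosen only for illustration:

```python
import numpy as np

def instar_active(w, p, b):
    """Instar fires when w^T p >= -b, i.e. ||w|| ||p|| cos(theta) >= -b."""
    return float(w @ p) >= -b

w  = np.array([1.0, 0.0, 0.0])        # normalized weight (prototype) vector
p1 = np.array([1.0, 0.0, 0.0])        # theta = 0
p2 = np.array([0.8, 0.6, 0.0])        # theta ~ 37 degrees, still unit length

print(instar_active(w, p1, b=-1.0))   # True:  only theta = 0 passes when b = -||w|| ||p||
print(instar_active(w, p2, b=-1.0))   # False: 0.8 < 1
print(instar_active(w, p2, b=-0.5))   # True:  a larger b admits a range of angles
```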

  15. Instar Rule. Hebb with decay: $w_{ij}(q) = w_{ij}(q-1) + \alpha\, a_i(q)\, p_j(q) - \gamma\, w_{ij}(q-1)$. Modify it so that learning and forgetting only occur when the neuron is active. Instar rule: $w_{ij}(q) = w_{ij}(q-1) + \alpha\, a_i(q)\, p_j(q) - \gamma\, a_i(q)\, w_{ij}(q-1)$, or, setting $\gamma = \alpha$: $w_{ij}(q) = w_{ij}(q-1) + \alpha\, a_i(q)\bigl(p_j(q) - w_{ij}(q-1)\bigr)$. Vector form: ${}_i\mathbf{w}(q) = {}_i\mathbf{w}(q-1) + \alpha\, a_i(q)\bigl(\mathbf{p}(q) - {}_i\mathbf{w}(q-1)\bigr)$.

  16. Graphical Representation. For the case where the instar is active ($a_i = 1$): ${}_i\mathbf{w}(q) = {}_i\mathbf{w}(q-1) + \alpha\bigl(\mathbf{p}(q) - {}_i\mathbf{w}(q-1)\bigr)$, or ${}_i\mathbf{w}(q) = (1-\alpha)\, {}_i\mathbf{w}(q-1) + \alpha\, \mathbf{p}(q)$, so the weight vector moves toward the input vector. For the case where the instar is inactive ($a_i = 0$): ${}_i\mathbf{w}(q) = {}_i\mathbf{w}(q-1)$.
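A sketch of the rule showing both cases (`instar_update` is my naming; $\alpha = 0.5$ chosen for illustration):

```python
import numpy as np

def instar_update(w, p, a_i, alpha=0.5):
    """Instar rule: i_w(q) = i_w(q-1) + alpha * a_i * (p - i_w(q-1))."""
    return w + alpha * a_i * (p - w)

w = np.zeros(3)
p = np.array([1.0, -1.0, -1.0])
for _ in range(5):
    w = instar_update(w, p, a_i=1.0)    # active: w moves a fraction alpha toward p
print(w)                                 # approaches p
print(instar_update(w, p, a_i=0.0))      # inactive: w is unchanged
```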

  17. Example: Orange Recognizer. $a = \mathrm{hardlim}(w^0 p^0 + \mathbf{W}\mathbf{p} - 2)$, with fixed sight-sensor weight $w^0 = 3$, initial measurement weights $\mathbf{W}(0) = [0\ \ 0\ \ 0]$, and measured orange vector $\mathbf{p} = [1\ \ {-1}\ \ {-1}]^T$.

  18. Training. First iteration ($\alpha = 1$, sight fails): $a(1) = \mathrm{hardlim}(w^0 p^0(1) + \mathbf{W}(0)\mathbf{p}(1) - 2) = \mathrm{hardlim}(3 \cdot 0 + 0 - 2) = 0$ (no response), so the weights are unchanged: $\mathbf{W}(1) = \mathbf{W}(0)$.

  19. Further Training. Second iteration (sight works): $a(2) = \mathrm{hardlim}(w^0 p^0(2) + \mathbf{W}(1)\mathbf{p}(2) - 2) = \mathrm{hardlim}(3 \cdot 1 + [0\ \ 0\ \ 0][1\ \ {-1}\ \ {-1}]^T - 2) = 1$ (orange), so the instar rule gives ${}_1\mathbf{w}(2) = {}_1\mathbf{w}(1) + a(2)\bigl(\mathbf{p}(2) - {}_1\mathbf{w}(1)\bigr) = [1\ \ {-1}\ \ {-1}]^T$. Third iteration (sight fails): $a(3) = \mathrm{hardlim}(w^0 p^0(3) + \mathbf{W}(2)\mathbf{p}(3) - 2) = \mathrm{hardlim}(3 \cdot 0 + [1\ \ {-1}\ \ {-1}][1\ \ {-1}\ \ {-1}]^T - 2) = 1$ (orange). Orange will now be detected if either set of sensors works.
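These three iterations can be reproduced with a short script, under the slide's values ($w^0 = 3$, bias $-2$, $\alpha = 1$):

```python
import numpy as np

def hardlim(n):
    return 1.0 if n >= 0 else 0.0

w0 = 3.0                       # fixed weight on the sight sensor
W = np.zeros(3)                # trainable weights on [shape, texture, weight]
p_orange = np.array([1.0, -1.0, -1.0])
alpha = 1.0
# sight sensor fails on iterations 1 and 3
for q, p0 in enumerate([0.0, 1.0, 0.0], start=1):
    a = hardlim(w0 * p0 + W @ p_orange - 2.0)
    W = W + alpha * a * (p_orange - W)        # instar rule
    print(f"a({q}) = {a:.0f}, W({q}) = {W}")
# a(1)=0; a(2)=1 (sight drives learning); a(3)=1 (measurements alone suffice)
```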

  20. Kohonen Rule: ${}_i\mathbf{w}(q) = {}_i\mathbf{w}(q-1) + \alpha\bigl(\mathbf{p}(q) - {}_i\mathbf{w}(q-1)\bigr)$, for $i \in X(q)$. Learning occurs when the neuron's index $i$ is a member of the set $X(q)$. We will see in Chapter 14 that this can be used to train all neurons in a given neighborhood.
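A sketch of the rule; how $X(q)$ is chosen is up to the surrounding algorithm, so the winner-plus-neighborhood selection below is only a placeholder:

```python
import numpy as np

def kohonen_update(W, p, X, alpha=0.5):
    """Kohonen rule: update row i of W only for i in the set X(q)."""
    W = W.copy()
    for i in X:
        W[i] = W[i] + alpha * (p - W[i])   # move selected prototypes toward p
    return W

W = np.random.rand(4, 3)           # four prototype rows
p = np.array([1.0, 0.0, 0.0])
winner = int(np.argmax(W @ p))     # placeholder competition: most active neuron...
X = {winner}                       # ...and, in Ch. 14, its whole neighborhood
W = kohonen_update(W, p, X)
```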

  21. Outstar (Recall Network)

  22. Outstar Operation. Suppose we want the outstar to recall a certain pattern $\mathbf{a}^*$ (with elements in the range $[-1, 1]$) whenever the input $p = 1$ is presented to the network. Let $\mathbf{W} = \mathbf{a}^*$. Then, when $p = 1$: $\mathbf{a} = \mathrm{satlins}(\mathbf{W}p) = \mathrm{satlins}(\mathbf{a}^*) = \mathbf{a}^*$, and the pattern is correctly recalled. The columns of a weight matrix represent patterns to be recalled.
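A short check of this recall property (a sketch; `satlins` is implemented as a clip to $[-1, 1]$, which matches its definition):

```python
import numpy as np

def satlins(n):
    """Symmetric saturating linear transfer function: clips to [-1, 1]."""
    return np.clip(n, -1.0, 1.0)

a_star = np.array([1.0, -1.0, 1.0])   # pattern to be recalled
W = a_star.reshape(-1, 1)             # set the weight column equal to a*
p = np.array([1.0])
print(satlins(W @ p))                 # recalls a* whenever p = 1
```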

  23. Outstar Rule. For the instar rule we made the weight decay term of the Hebb rule proportional to the output of the network. For the outstar rule we make the weight decay term proportional to the input of the network: $w_{ij}(q) = w_{ij}(q-1) + \alpha\, a_i(q)\, p_j(q) - \gamma\, p_j(q)\, w_{ij}(q-1)$. If we make the decay rate $\gamma$ equal to the learning rate $\alpha$: $w_{ij}(q) = w_{ij}(q-1) + \alpha\bigl(a_i(q) - w_{ij}(q-1)\bigr)\, p_j(q)$. Vector form: $\mathbf{w}_j(q) = \mathbf{w}_j(q-1) + \alpha\bigl(\mathbf{a}(q) - \mathbf{w}_j(q-1)\bigr)\, p_j(q)$, where $\mathbf{w}_j$ is the $j$th column of $\mathbf{W}$.
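A sketch of the column update (`outstar_update` is my naming), showing that learning only happens while the input is present:

```python
import numpy as np

def outstar_update(w_col, a, p_j, alpha=1.0):
    """Outstar rule: w_j(q) = w_j(q-1) + alpha * (a - w_j(q-1)) * p_j."""
    return w_col + alpha * (a - w_col) * p_j

w = np.zeros(3)
a_star = np.array([1.0, -1.0, 1.0])
w = outstar_update(w, a_star, p_j=1.0)      # input present: column moves toward a*
print(w)
print(outstar_update(w, a_star, p_j=0.0))   # p_j = 0: no learning, no forgetting
```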

  24. Example - Pineapple Recall

  25. Definitions

  26. Iteration 1 ($\alpha = 1$)

  27. Convergence
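The numbers on slides 24-27 were lost with the figures, so this sketch uses an assumed pineapple measurement vector $[-1,\ -1,\ 1]$, identity weights on the measurement inputs, and $\alpha = 1$, purely for illustration. It shows the outstar weights converging so that the sight input alone would recall the measurement pattern:

```python
import numpy as np

def satlins(n):
    return np.clip(n, -1.0, 1.0)

# assumed pineapple measurement vector [shape, texture, weight]
a_pine = np.array([-1.0, -1.0, 1.0])
alpha = 1.0
W = np.zeros(3)                  # trainable recall weights on the sight input p
for q in range(1, 4):
    p = 1.0                      # the pineapple is seen on every iteration
    a = satlins(a_pine + W * p)  # measured pattern plus recalled pattern
    W = W + alpha * (a - W) * p  # outstar rule
    print(f"W({q}) = {W}")
# W converges to a_pine after the first step (alpha = 1): the sight of the
# pineapple alone now recalls the measurement pattern
```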
