
Rescorla-Wagner (1972) Theory of Classical Conditioning




  1. Rescorla-Wagner (1972) Theory of Classical Conditioning

  2. Rescorla-Wagner Theory (1972)
  • Organisms only learn when events violate their expectations (like Kamin’s surprise hypothesis)
  • Expectations are built up when ‘significant’ events follow a stimulus complex
  • These expectations are only modified when consequent events disagree with the composite expectation

  3. Rescorla-Wagner Theory
  • These concepts were incorporated into a mathematical formula:
  • The change in the associative strength of a stimulus depends on the existing associative strength of that stimulus and of all other stimuli present
  • If existing associative strength is low, then the potential change is large; if existing associative strength is high, then very little change occurs
  • The speed and asymptotic level of learning are determined by the strengths of the CS and UCS

  4. Rescorla-Wagner Mathematical Formula
  ∆Vcs = c (Vmax – Vall)
  • V = associative strength
  • ∆ = the amount of change
  • c = learning rate parameter
  • Vmax = the maximum amount of associative strength that the UCS can support
  • Vall = total amount of associative strength for all stimuli present
  • Vcs = associative strength to the CS
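A minimal Python sketch of this update rule (the function and variable names are my own, not from the slides):

```python
def delta_v_cs(c, v_max, v_all):
    """Rescorla-Wagner update: change in associative strength of the CS on one trial."""
    return c * (v_max - v_all)

# Example with the values introduced on the next slide: c = .5, Vmax = 100, no prior conditioning
print(delta_v_cs(0.5, 100.0, 0.0))  # 50.0
```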

  5. Before conditioning begins:
  • Vmax = 100 (the number is arbitrary and is based on the strength of the UCS)
  • Vall = 0 (because no conditioning has occurred)
  • Vcs = 0 (no conditioning has occurred yet)
  • c = .5 (c must be a number between 0 and 1.0 and is the result of multiplying the CS intensity by the UCS intensity)

  6. First Conditioning Trial
  Trial 1: ∆Vcs = c (Vmax - Vall) = .5 (100 - 0) = 50

  7. Second Conditioning Trial
  Trial 2: ∆Vcs = c (Vmax - Vall) = .5 (100 - 50) = 25

  8. Third Conditioning Trial
  Trial 3: ∆Vcs = c (Vmax - Vall) = .5 (100 - 75) = 12.5

  9. 4th Conditioning Trial
  Trial 4: ∆Vcs = c (Vmax - Vall) = .5 (100 - 87.5) = 6.25

  10. 5th Conditioning Trial
  Trial 5: ∆Vcs = c (Vmax - Vall) = .5 (100 - 93.75) = 3.125

  11. 6th Conditioning Trial
  Trial 6: ∆Vcs = c (Vmax - Vall) = .5 (100 - 96.88) = 1.56

  12. 7th Conditioning Trial
  Trial 7: ∆Vcs = c (Vmax - Vall) = .5 (100 - 98.44) = .78

  13. 8th Conditioning Trial
  Trial 8: ∆Vcs = c (Vmax - Vall) = .5 (100 - 99.22) = .39
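The eight acquisition trials above can be reproduced with a short loop; this is a sketch using the starting values from slide 5 (c = .5, Vmax = 100, Vall = 0):

```python
c, v_max = 0.5, 100.0
v_all = 0.0  # no conditioning has occurred yet, so Vcs = Vall = 0

for trial in range(1, 9):
    change = c * (v_max - v_all)  # ∆Vcs = c(Vmax - Vall)
    v_all += change               # the CS is the only stimulus present, so Vall = Vcs
    print(f"Trial {trial}: change = {change:.2f}, Vcs = {v_all:.2f}")

# The changes shrink from 50 toward 0, and Vcs ends near 99.61, matching slide 13.
```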

  14. 1st Extinction Trial
  Trial 1: ∆Vcs = c (Vmax - Vall) = .5 (0 - 99.61) = -49.8

  15. 2nd Extinction Trial
  Trial 2: ∆Vcs = c (Vmax - Vall) = .5 (0 - 49.8) = -24.9

  16. Extinction Trials
  Trial 3: ∆Vcs = c (Vmax - Vall) = .5 (0 - 24.9) = -12.45
  Trial 4: ∆Vcs = c (Vmax - Vall) = .5 (0 - 12.45) = -6.23
  Trial 5: ∆Vcs = c (Vmax - Vall) = .5 (0 - 6.23) = -3.11
  Trial 6: ∆Vcs = c (Vmax - Vall) = .5 (0 - 3.11) = -1.56
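Extinction follows from the same rule with Vmax set to 0, because the UCS no longer occurs; a sketch continuing from Vall ≈ 99.61 at the end of acquisition:

```python
c, v_max = 0.5, 0.0   # no UCS, so the maximum supportable associative strength is 0
v_all = 99.61         # associative strength after the eight acquisition trials

for trial in range(1, 7):
    change = c * (v_max - v_all)  # negative: the outcome falls short of the expectation
    v_all += change
    print(f"Extinction trial {trial}: change = {change:.2f}, Vcs = {v_all:.2f}")

# Changes run roughly -49.8, -24.9, -12.45, -6.23, -3.11, -1.56, as on slides 14-16.
```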

  17. Hypothetical Acquisition & Extinction Curves with c=.5 and Vmax = 100

  18. Acquisition & Extinction Curves with c=.5 vs. c=.2 (Vmax = 100)

  19. Theory Handles other Phenomena
  • Overshadowing
  • Whenever there are multiple stimuli or a compound stimulus, then Vall = Vcs1 + Vcs2
  • Trial 1:
    • ∆Vnoise = .2 (100 – 0) = (.2)(100) = 20
    • ∆Vlight = .3 (100 – 0) = (.3)(100) = 30
    • Total Vall = current Vall + ∆Vnoise + ∆Vlight = 0 + 20 + 30 = 50
  • Trial 2:
    • ∆Vnoise = .2 (100 – 50) = (.2)(50) = 10
    • ∆Vlight = .3 (100 – 50) = (.3)(50) = 15
    • Total Vall = current Vall + ∆Vnoise + ∆Vlight = 50 + 10 + 15 = 75
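A sketch of the overshadowing arithmetic on this slide, with separate learning-rate parameters for the noise (.2) and the light (.3); the variable names are mine:

```python
c_noise, c_light, v_max = 0.2, 0.3, 100.0
v_noise = v_light = 0.0

for trial in (1, 2):
    v_all = v_noise + v_light            # compound stimulus: Vall = Vcs1 + Vcs2
    d_noise = c_noise * (v_max - v_all)  # both elements share the same (Vmax - Vall)
    d_light = c_light * (v_max - v_all)
    v_noise += d_noise
    v_light += d_light
    print(f"Trial {trial}: dVnoise = {d_noise}, dVlight = {d_light}, Vall = {v_noise + v_light}")

# Trial 1: dVnoise = 20.0, dVlight = 30.0, Vall = 50.0
# Trial 2: dVnoise = 10.0, dVlight = 15.0, Vall = 75.0
```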

  20. Theory Handles other Phenomena
  • Blocking
  • Clearly, the first 16 trials in Phase 1 will result in most of the Vmax accruing to the first CS, leaving very little Vmax available to the second CS in Phase 2
  • Overexpectation Effect
  • When CSs trained separately (each close to Vmax) are then presented together, you actually get a decrease in associative strength
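A sketch of both effects under the same update rule; the 16 Phase 1 trials come from the slide, while the c value, the five compound trials, and the stimulus names are my own illustrative choices:

```python
def compound_trial(v, c, v_max, present):
    """One trial: every stimulus that is present shares the same prediction error."""
    v_all = sum(v[s] for s in present)
    change = c * (v_max - v_all)
    for s in present:
        v[s] += change

# Blocking: Phase 1 trains CS1 alone for 16 trials, Phase 2 presents CS1 + CS2 together.
v = {"cs1": 0.0, "cs2": 0.0}
for _ in range(16):
    compound_trial(v, 0.5, 100.0, ["cs1"])
for _ in range(5):
    compound_trial(v, 0.5, 100.0, ["cs1", "cs2"])
print(v)  # cs1 holds almost all of Vmax; cs2 has gained almost nothing (blocking)

# Overexpectation: train CS1 and CS2 separately to near Vmax, then compound them.
v = {"cs1": 0.0, "cs2": 0.0}
for _ in range(16):
    compound_trial(v, 0.5, 100.0, ["cs1"])
    compound_trial(v, 0.5, 100.0, ["cs2"])
for _ in range(5):
    compound_trial(v, 0.5, 100.0, ["cs1", "cs2"])
print(v)  # both drop below their separately trained values (overexpectation)
```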

  21. Rescorla-Wagner Model
  • The theory is not perfect:
  • It can’t handle configural learning without a little tweaking
  • It can’t handle latent inhibition
  • But it has been the “best” theory of Classical Conditioning
