

  1. CHAPTER 9 Aversive Control of Behavior: Punishment and Avoidance

  2. Aversive Control of Behavior • Operant behavior is influenced by appetitive stimuli. • Operant behavior can also be influenced by aversive stimuli. • Operant behavior can: • Produce aversive stimuli (punishment) • Remove aversive stimuli (escape) • Prevent aversive stimuli (avoidance). • These three response-consequence relations have characteristic behavioral effects.

  3. Aversive Control of Behavior • Aversive stimuli can also affect operant behavior when that behavior neither produces nor prevents them--when aversive stimuli occur independently of responding. • Most famous example is conditioned emotional response.

  4. Conditioned Emotional Response • Rat must lever press to obtain food. • Rat receives periodic pairings of tone with electric shock. • Rat eventually presses lever at a lower rate when tone is on than when it is off. • Phenomenon is called conditioned suppression or conditioned emotional response (CER; Estes & Skinner, 1941).

  5. Conditioned Emotional Response • Degree of suppression is measured by suppression ratio. • If rate during CS is B and rate in absence of CS is A, then suppression can be assessed with ratio: B/(A+B). • If CS has no effect, then ratio is 0.5. • If CS totally suppresses responding, then ratio is 0.0.
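
A quick way to see how the ratio behaves is to compute it for a few rates. The sketch below is a minimal illustration, not part of the chapter; the example response rates are hypothetical.

```python
def suppression_ratio(cs_rate, pre_cs_rate):
    """B / (A + B), where B is the response rate during the CS
    and A is the response rate in the absence of the CS."""
    return cs_rate / (pre_cs_rate + cs_rate)

# Hypothetical rates in responses per minute.
print(suppression_ratio(20, 20))  # CS has no effect      -> 0.5
print(suppression_ratio(0, 20))   # complete suppression  -> 0.0
print(suppression_ratio(5, 20))   # partial suppression   -> 0.2
```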

  6. Conditioned Emotional Response • With rats, food-reinforced lever presses may be suppressed in only a few trials. • In early trials, responding is suppressed in both presence and absence of CS. • Later, suppression is restricted to CS.

  7. PUNISHMENT • Punishment represents second side of Thorndike’s Law of Effect. • Reward increases likelihood of behavior that produces it. • Punishment decreases likelihood of behavior that produces it.

  8. PUNISHMENT • Is all we learned about positive reinforcement true, in mirror-image form, of punishment? • Perhaps not. • Any operant punishment situation is really a punishment plus reinforcement situation.

  9. PUNISHMENT • For punishment to suppress operant responding, responses must already be occurring with some frequency. • For responses to occur, they must be producing reinforcement. • So, effect of punishment reflects interaction of two contingencies--reinforcement and punishment. • They jointly operate in most situations.

  10. PUNISHMENT • Many factors influence effectiveness of punishment. • All testify to important role punishment plays in control of operant behavior.

  11. PUNISHMENT: EFFECTIVENESS • Punishing the only reinforced response is often not an effective procedure. • If you give the organism an alternative, unpunished route to reinforcement, then effects of punishment are enhanced.

  12. PUNISHMENT: EFFECTIVENESS • As intensity of punishing stimulus increases, degree of suppression increases. • If very intense shock is used, then suppression may be virtually complete.

  13. PUNISHMENT: EFFECTIVENESS • Suppressive effect of intermediate shock intensity depends on animal’s past experience with shock. • If animal has experienced intensities going from mild to intermediate, then there will be little suppression. • If animal has experienced intensities going from severe to intermediate, then there will be substantial suppression.

  14. PUNISHMENT: EFFECTIVENESS • For punishment to be maximally effective, it must immediately follow operant response. • As delay interval between response and punishment increases, amount of suppression decreases.

  15. PUNISHMENT: EFFECTIVENESS • Punishment should be certain and follow each operant response. • When responses are punished intermittently, effectiveness of punishment procedure is reduced.

  16. AVOIDANCE BEHAVIOR • Much of our daily activity avoids aversive stimuli that would otherwise occur if we did not behave appropriately. • Reinforcer for avoidance is a nonevent--absence of something bad. • Poses a great theoretical puzzle. • What sustains avoidance responding? • How can a nonevent be a reinforcer?

  17. AVOIDANCE BEHAVIOR • Two basic avoidance conditioning procedures: • Discrete-Trial Signaled Avoidance (studied in an operant chamber or a shuttle box) • Shock Postponement (defined by a Shock-Shock [S-S] interval and a Response-Shock [R-S] interval; contains no explicit signal for shock)
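
To make the shock-postponement contingency concrete, here is a minimal simulation sketch. The S-S and R-S values, the per-second response probability, and the session length are illustrative assumptions; the chapter does not specify them.

```python
import random

def shock_postponement(ss_interval=5.0, rs_interval=20.0,
                       p_response=0.1, session_length=600.0, dt=1.0):
    """With no responding, shocks arrive every S-S seconds; each response
    postpones the next shock so that it occurs R-S seconds later."""
    t, next_shock = 0.0, ss_interval
    responses = shocks = 0
    while t < session_length:
        if random.random() < p_response:   # simulated animal responds
            responses += 1
            next_shock = t + rs_interval   # response starts the R-S timer
        if t >= next_shock:                # no response in time: shock delivered
            shocks += 1
            next_shock = t + ss_interval   # S-S timer restarts after each shock
        t += dt
    return responses, shocks

print(shock_postponement())
```

Because there is no explicit signal, the only cue available to the animal is the passage of time since its last response.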

  18. AVOIDANCE BEHAVIOR • Both procedures sustain effective avoidance behavior. • Big question is: What maintains avoidance? • We next move to this theoretical matter.

  19. THEORIES OF AVERSIVE CONTROL • Two-factor theory • Operant theory • Cognitive theory • Biological theory

  20. Two-factor theory • Most influential theory of aversive control. • Initially formulated by O. H. Mowrer. • Later elaborated by R. L. Solomon. • Views punishment and avoidance as products of both Pavlovian and operant conditioning.

  21. Two-factor theory • Consider discrete-trial, escape-avoidance procedure: • Tone is presented and followed by shock. • Animal initially learns to escape shock. • Reinforcer for escape responding is shock termination.

  22. Two-factor theory • While escape is occurring, Pavlovian conditioning is also occurring. • On each trial, tone (CS) is paired with shock (US). • After a number of trials, tone should elicit fear just as shock does. • Animal may now make escape response and end fear-provoking CS.

  23. Two-factor theory • But, escape from CS is avoidance of US. • So, two-factor theory suggests that avoidance is not really avoidance at all. • Rather, it is escape from a stimulus that, through pairing with shock, has become fear provoking.

  24. Two-factor theory • Two-factor theory cleverly avoids problem of having a nonevent (absence of shock) maintain avoidance. • Termination of CS, not absence of shock, maintains avoidance. • Because escape is crucial to successful avoidance, two-factor theory holds that both Pavlovian and operant factors influence and maintain avoidance.

  25. Two-factor theory • Two-factor theory nicely explains discrete-trial signaled avoidance. • What about shock-postponement procedure? • Here, there is no obvious external signal for shock. • But, passage of time might become a CS and elicit fear CR.

  26. Two-factor theory • Fear could be conditioned to time after last response. • Just after a response, fear should be low or nonexistent; as time passes without a response, fear should grow. • When fear is sufficiently intense, response should occur, and organism should escape fear; organism should also avoid shock as a happy by-product.

  27. Two-factor theory • Animals do not respond randomly in time on shock-postponement tasks. • Rather, likelihood of response increases as time since last response increases. • Supports two-factor theory of avoidance in shock-postponement tasks.

  28. Two-factor theory • In addition, animals do learn to fear CS in standard discrete-trial procedure. • Animals learn to make response that escapes CS. • Furthermore, CS suppresses lever pressing for food in CER situation. • Finally, CS+ for shock increases animal’s rate of avoidance responding, whereas CS- decreases response rate.

  29. Two-factor theory • Despite supportive results, there have been challenges. • Herrnstein-Hineline procedure poses one such challenge. • On this procedure, rats learn to lever press if consequence of lever pressing is a reduction in overall frequency of shocks--not necessarily to zero.

  30. Two-factor theory • In addition, fear CRs are not reliably observed in avoidance experiments. • Sometimes they are observed; sometimes they are not. • Sometimes when they are observed, they occur at wrong time.

  31. Two-factor theory • More significantly, when shocker is disconnected, animals often continue to make avoidance responses for hundreds of trials. • Shouldn’t fear extinguish long before avoidance response ceases? • Conclusion: case for two-factor theory is mixed.

  32. Operant theory • Just as reinforcement increases rate of responding, perhaps punishment decreases rate of responding. • Key premise of operant theory. • Key method behind operant theory is Herrnstein-Hineline procedure.

  33. Operant theory • One timer delivered shocks at average rate of 6 per minute; other delivered shocks at average rate of 3 per minute. • For both timers, interval between shocks was random; a shock was just as likely 2 seconds after last shock as it was 2 minutes after last shock.

  34. Operant theory • Only one timer at a time ran; animal’s responding determined which. • If no press, 6 shock per min timer ran; if press, 3 shock per min timer ran. • So, pressing shifted from timer that delivered shocks at higher rate to timer that delivered shocks at lower rate. • Each response activated low-rate timer until next shock occurred.
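
A minimal sketch of this two-timer arrangement appears below. The 6- and 3-per-minute rates come from the procedure as described; the per-second response probability and the approximation of the random-interval timers by a constant per-second shock probability are assumptions made for illustration.

```python
import random

def herrnstein_hineline(high_rate=6 / 60, low_rate=3 / 60,
                        p_response=0.05, session_length=3600.0, dt=1.0):
    """Each second, a shock occurs with a probability set by whichever timer
    is active; a press activates the low-rate timer until the next shock."""
    on_low_timer = False
    presses = shocks = 0
    t = 0.0
    while t < session_length:
        if random.random() < p_response:   # simulated lever press
            presses += 1
            on_low_timer = True            # press switches control to the low-rate timer
        rate = low_rate if on_low_timer else high_rate
        if random.random() < rate * dt:    # random-time shock from the active timer
            shocks += 1
            on_low_timer = False           # the next shock reinstates the high-rate timer
        t += dt
    return presses, shocks

print(herrnstein_hineline())
```

The point of the arrangement is that pressing changes only the overall frequency of shocks, not whether any particular shock is avoided.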

  35. Operant theory • Despite such subtle consequences, 19/20 rats learned to respond reliably. • This result was seen as evidence that sufficient condition for avoidance learning is reduction in shock frequency.

  36. Operant theory • But, on average, lever pressing was followed by a longer shock-free time than any other response the rats performed. • Perhaps this feature of the method, rather than shock frequency reduction, was responsible for avoidance learning.

  37. Operant theory • To find out, Hineline (1970) trained rats on a procedure in which presses delayed inevitable shocks. • Pressing within a 2-sec opportunity relocated the shock that would otherwise have occurred 2 sec into the 20-sec trial to 18 sec into the trial.
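
The trial structure can be summarized in a few lines. The sketch and the 80% press probability in the usage line are hypothetical illustrations, not values reported by Hineline.

```python
import random

def shock_time_in_trial(pressed_in_first_2s):
    """One 20-sec trial: a press during the first 2 sec relocates the
    inevitable shock from 2 sec into the trial to 18 sec into the trial."""
    return 18.0 if pressed_in_first_2s else 2.0

# Hypothetical session of 100 trials with an assumed 80% chance of pressing.
shock_times = [shock_time_in_trial(random.random() < 0.8) for _ in range(100)]
print(sum(t == 18.0 for t in shock_times), "of 100 shocks were delayed")
```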

  38. Operant theory • Even though shock occurred once in every 20-sec trial, rats learned to press. • More amazingly, Hineline (1977) found that lever pressing established by shock delay could be maintained even when lever pressing increased number of shocks received.

  39. Operant theory • So, shock delay may reinforce operant behavior to such a degree that it overrides positive contingency between operant responding and shock delivery. • This positive contingency should have punished rats’ lever pressing.

  40. Operant theory • Despite these interesting results, operant theory is not considered a rich account of all known facts of avoidance. • It fails to account for: • Extinction of avoidance behavior. • Effect of CSs on avoidance behavior.

  41. Cognitive theory • Essentially an effort to formalize intuitive account of avoidance. • Based on what an organism knows, expects, and desires. • Theory has cognitive and emotional premises.

  42. Cognitive theory • Cognitive premises: • Animal prefers no shock to shock. • Animal expects no shock if it responds. • Animal expects shock if it doesn’t respond. • Expectancies are strengthened when they are confirmed and weakened when they are disconfirmed. • Probability of avoidance response increases as confirmation of second and third expectancies increases.

  43. Cognitive theory • Emotional premises: • Fear conditioned to CS paired with shock. • Fear extinguished when CS not paired with shock.

  44. Cognitive theory • Advocates of cognitive theory are pleased that intuitions can be put into coherent account of avoidance. • Critics point out empirical shortcomings of cognitive theory. • One concerns effects of superimposed CSs on avoidance responding; by the theory, such CSs should have no influence, because during CS-US training animals learn that responding has no effect--yet, as noted earlier, superimposed CSs do change the rate of avoidance responding.

  45. Biological theory • Final theory is not a complete account. • Rather, it is an approach to avoidance learning that focuses on: • Repertoire of defensive responses with which members of different species are endowed. • Relation between these responses and those required in laboratory.

  46. Biological theory • Each species has set of built-in defensive responses: species-specific defense reactions or SSDRs. • Common SSDRs include freezing, attacking, and fleeing. • When danger develops, organism will make one of its SSDRs. • If this response eliminates danger, then all is well.

  47. Biological theory • If not, then animal will make another SSDR, and another, until one succeeds. • Only when all of animal’s defensive repertoire has been sampled and has been proven to be ineffective will non-SSDRs occur.

  48. Biological theory • What determines speed of avoidance learning is particular response animal must make. • If response resembles an SSDR, then animal will learn quickly. • If response does not resemble an SSDR, then animal will learn slowly or not at all.

  49. Biological theory • For rat, some avoidance responses may be learned in one or two trials; example is jumping out of box. • Other avoidance responses may require hundreds of trials for acquisition; example is familiar lever press. • These acquisition data support key notion of biological theory.

  50. Biological theory • Appreciating interplay between biology and experience is important. • But, biological theory says little about avoidance behavior after acquisition. • Biological theorists, like Robert Bolles, acknowledge this point, but argue that speed of learning is the result most critical to an animal’s survival.
