Pragmatically-guided perceptual learning

Presentation Transcript


  1. Pragmatically-guided perceptual learning Tanya Kraljic, Arty Samuel, Susan Brennan Adaptation Project mini-Conference, May 7, 2007

  2. 1-Minute Background on Speech Perception Part 1: Perceptual constancy [Diagram: Speaker → Listener] • Speech sounds (phonemes) differ depending on: • who is speaking • what the immediate phonetic context is

  3. And Yet… [Diagram: Speaker → Listener, illustrating perceptual constancy] • Speech sounds (phonemes) differ depending on: • who is speaking • what the immediate phonetic context is

  4. 1-Minute Background on Speech Perception Part 2: Solutions? [Diagram: Speaker → Listener] 1. Learn the acoustic invariants as children, then extract those and discard everything else as we’re listening Problem: What acoustic invariants?

  5. 1-Minute Background on Speech Perception Part 2: Solutions [Diagram: Speaker → Listener] 1. Learn the acoustic invariants as children, then extract those and discard everything else as we’re listening Problem: What acoustic invariants? 2. Represent (learn) every variation that is encountered Problem: memory (if every variant is stored separately), ‘catastrophic interference’ (if you keep changing the same representation)

  6. Getting at the Question: How does the perceptual system decide what to learn? General idea in perception: Maybe the system tries to learn invariants of the distal objects that produce the stimuli (in this case, that would mean the speaker) and not of the stimuli themselves (in this case, the acoustic signal) Our hypothesis: Maybe the system tries to learn those aspects of the signal that reflect characteristic properties of the speaker (and therefore are likely to remain stable across contexts and situations)

  7. Getting at the Question: How does the perceptual system decide what to learn? Specifically: How might it determine which variations are characteristic? Our test: two kinds of information the system might use: 1. A ‘first impressions’ heuristic: in the absence of any other information, the properties that are present during the first encounter are assumed to be representative and stable. 2. Pragmatic cues indicating that the variation is incidental (e.g., seeing that the speaker is talking with a pen in her mouth) can override the influence of primacy.

  8. What does Perceptual learning look like? 2-phase Method 1. Exposure Phase (Lexical Decision Task) Purpose: To expose participants to a speaker who pronounces a particular sound ambiguously, midway between /s/ and /S/ (“sh”); the ambiguous sound is written here as “?”. Method: The “?” occurs in the context of words that cause the sound to be perceived as one phoneme or the other (e.g., dino?aur OR impa?ent).

  9. What does Perceptual learning look like? 2-phase Method 1. Exposure Phase (Lexical Decision Task) Purpose: To expose participants to a speaker who pronounces a particular sound ambiguously (the “?” midway between /s/ and /S/). Method: The “?” occurs in the context of words that cause the sound to be perceived as one phoneme or the other (e.g., dino?aur OR impa?ent). * Listeners hear both ‘odd’ versions (e.g., dino?aur) and good versions (e.g., legacy, with a clear /s/) of the phonemes from the same speaker * 2. Test Phase (Category Identification) Purpose: Tests whether perceptual learning has occurred. Method: Participants hear items from a continuum that ranges from /s/ to /S/, with several ambiguous steps in between, and label each sound as S or SH.
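
The test phase yields, for each listener, a labeling curve over the /s/–/S/ continuum; perceptual learning shows up as a change in how the ambiguous steps are labeled after exposure. Below is a minimal sketch of that scoring logic, using hypothetical trial data and step numbering (an illustration of the idea, not the authors’ analysis code):

```python
# Sketch: score category-identification responses along an /s/-/S/ continuum.
# Hypothetical data layout: one (continuum_step, label) pair per trial,
# where step 0 = clearest /s/ and step 5 = clearest /S/.
from collections import defaultdict

def proportion_s_by_step(trials):
    """Return {step: proportion of trials labeled 'S'} for one listener."""
    counts = defaultdict(lambda: [0, 0])          # step -> [n_labeled_S, n_total]
    for step, label in trials:
        counts[step][0] += (label == "S")
        counts[step][1] += 1
    return {step: s / n for step, (s, n) in counts.items()}

# Toy example: after exposure to dino?aur-type words, a listener is expected
# to accept more of the ambiguous steps as /s/ than an unexposed listener.
exposed = [(2, "S"), (2, "S"), (3, "S"), (3, "SH"), (4, "SH")]
control = [(2, "S"), (2, "SH"), (3, "SH"), (3, "SH"), (4, "SH")]
print(proportion_s_by_step(exposed))   # higher 'S' proportions at ambiguous steps
print(proportion_s_by_step(control))
```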

  10. Manipulation: 2X2 *All manipulations are during the Exposure phase* Modality (Audio Only, AudioVisual) X Pronunciation attribute (Characteristic, Incidental) (really X another 2 - Phoneme: ?S or ?SH)

  13. Manipulation: 2X2 *All manipulations are during the Exposure phase* Modality (Audio Only, AudioVisual) X Pronunciation attribute (Characteristic, Incidental) (really X another 2 - Phoneme: ?S or ?SH) Pronunciation attribute varies by modality: Audio-Only modality = Order manipulation (to test the ‘first impressions’ heuristic):

  Order   | 1st half | 2nd half | Attribution    | Prediction
  Odd 1st | dino?aur | legacy   | Characteristic | learning
  Odd 2nd | legacy   | dino?aur | Incidental     | no learning
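
To make the order manipulation concrete, here is a small sketch of how the two exposure lists could be assembled; the word lists (other than dino?aur and legacy, which appear on the slides) are placeholders, not the actual stimuli:

```python
# Sketch of the audio-only order manipulation: the same items, but the 'odd'
# (ambiguous, dino?aur-type) tokens come either in the first or second half.
# Word lists are placeholders, not the experiment's stimulus set.
odd_items   = ["dino?aur", "univer?ity", "rehear?al"]    # ambiguous "?" tokens
clear_items = ["legacy", "democracy", "policy"]          # clearly pronounced /s/

odd_first  = odd_items + clear_items    # 'Characteristic' attribution -> learning predicted
odd_second = clear_items + odd_items    # 'Incidental' attribution -> no learning predicted

print(odd_first)
print(odd_second)
```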

  14. Results: Audio Modality Odd First: Perceptual learning (F(1,62)=5.93, p=.018) Odd Second: No perceptual learning (F(1,62)=.29, p=.59) [Figures: labeling responses along the /s/–/S/ continuum for each condition]
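
For readers who want to see the shape of the statistical comparison, here is a toy version of the kind of F-test reported above, using invented per-listener scores rather than the study’s data:

```python
# Toy illustration of an F-test like the F(1, 62) values reported on this slide.
# The numbers are invented; they only show the form of the comparison
# (mean 'S' responses to ambiguous items, odd-first vs. a comparison group).
from scipy import stats

odd_first  = [0.71, 0.65, 0.80, 0.62, 0.74, 0.69]   # hypothetical per-listener scores
comparison = [0.52, 0.49, 0.58, 0.55, 0.47, 0.50]

f_stat, p_value = stats.f_oneway(odd_first, comparison)
df_error = len(odd_first) + len(comparison) - 2
print(f"F(1, {df_error}) = {f_stat:.2f}, p = {p_value:.3f}")
```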

  15. Manipulation: 2X2 *All manipulations are during the Exposure phase* Modality (Audio Only, AudioVisual) X Pronunciation attribute (Characteristic, Incidental) (really X another 2 - Phoneme: ?S or ?SH) Pronunciation attribute varies by modality: AudioVisual modality = Pragmatic manipulation (can it override the ‘first impressions’ heuristic?):

  Pragmatic        | Order     | Attribution    | Prediction
  No pen in mouth* | odd first | Characteristic | learning
  Pen in mouth     | odd first | Incidental     | no learning

  *The no-pen-in-mouth condition is just an AV version of our Audio, Odd-first condition.

  16. Manipulation: 2X2 Example of manipulation: [Photos: the speaker with no pen in her mouth vs. with a pen in her mouth]

  17. Results: AudioVisual Modality No Pen in Mouth: Perceptual learning (F(1,68)=6.29, p=.015) Pen in Mouth: No perceptual learning (F(1,68)=.04, p>.05) [Figures: labeling responses along the /s/–/S/ continuum for each condition]

  18. Overall results / Conclusions Results: The same acoustic signal is handled differently depending on whether it is assumed to be a characteristic pronunciation or an incidental (perhaps transient) one. Main effect of phoneme (SH vs. S), no interaction with modality, significant interaction with Pronunciation attribute.

  19. Overall results / Conclusions Converging Evidence: Our work on idiolectal/dialectal STR shows learning for the ambiguous “?” when it is speaker-driven, but not when it is contextually driven. Conclusion: Perceptual learning is a powerful mechanism applied conservatively. Pragmatic information plays an immediate role in guiding learning.

  20. Thank you

  21. Design Elaboration
  ?SH: Audio (odd 1st, odd 2nd); AudioVisual
  ?S: Audio (odd 1st, odd 2nd); AudioVisual

  22. Design Elaboration
  ?SH: Audio (odd 1st, odd 2nd); AudioVisual (No Pen, Pen)
  ?S: Audio (odd 1st, odd 2nd); AudioVisual (No Pen, Pen)
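
As a compact way to read the design tree above, the exposure conditions can be enumerated as a nested structure; the condition names follow the slides, but this is an illustration rather than the experiment scripts:

```python
# Enumerate the exposure conditions implied by the design elaboration above:
# Phoneme (?SH vs. ?S) x Modality, with the order manipulation nested under
# Audio and the pen manipulation nested under AudioVisual.
design = {
    "?SH": {"Audio": ["odd 1st", "odd 2nd"], "AudioVisual": ["No Pen", "Pen"]},
    "?S":  {"Audio": ["odd 1st", "odd 2nd"], "AudioVisual": ["No Pen", "Pen"]},
}

conditions = [
    (phoneme, modality, level)
    for phoneme, modalities in design.items()
    for modality, levels in modalities.items()
    for level in levels
]
for condition in conditions:
    print(condition)          # 8 exposure conditions in total
```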
