
Speech Perception 2 DAY 17 – Oct 4, 2013




Presentation Transcript


  1. Speech Perception 2, DAY 17 – Oct 4, 2013
  Brain & Language, LING 4110-4890-5110-7960 / NSCI 4110-4891-6110
  Harry Howard, Tulane University

  2. Course organization
  • The syllabus, these slides, and my recordings are available at http://www.tulane.edu/~howard/LING4110/.
  • If you want to learn more about EEG and neurolinguistics, you are welcome to participate in my lab. This is also a good way to get started on an honors thesis.
  • The grades are posted to Blackboard.

  3. Review
  The quiz was the review.

  4. Linguistic model, Fig. 2.1, p. 37
  [Figure: the levels of the linguistic model, top to bottom: discourse model; sentence level (semantics, syntax, sentence prosody); word level (morphology, word prosody); segmental phonology (perception and production); acoustic phonetics / feature extraction on the input side and articulatory phonetics / speech motor control on the output side; INPUT at the bottom.]

  5. Categorical perception
  The Clinton-Kennedy continuum. Chinchillas do this too!
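
  A quick way to see what "categorical" means here: in identification tasks, labels along a voice-onset-time (VOT) continuum jump abruptly from one category to the other rather than changing gradually. The sketch below is my own illustration, not lecture material; the response proportions and the logistic model are invented for the example. It fits an identification curve and reports the 50% crossover, i.e. the category boundary.

      import numpy as np
      from scipy.optimize import curve_fit

      def logistic(vot, boundary, slope):
          """Probability of a voiceless ('pa'-like) response at a given VOT (ms)."""
          return 1.0 / (1.0 + np.exp(-slope * (vot - boundary)))

      # Hypothetical identification data: proportion of voiceless responses
      # at each step of a 0-60 ms VOT continuum (all values invented).
      vot_steps = np.array([0, 10, 20, 30, 40, 50, 60], dtype=float)
      p_voiceless = np.array([0.02, 0.05, 0.10, 0.55, 0.92, 0.97, 0.99])

      (boundary, slope), _ = curve_fit(logistic, vot_steps, p_voiceless, p0=[30.0, 0.2])
      print(f"category boundary ≈ {boundary:.1f} ms VOT, slope = {slope:.2f}")
      # Near-floor and near-ceiling endpoints with a steep crossover are the
      # classic signature of categorical rather than continuous perception.

  The chinchilla finding is striking precisely because the animals' identification functions show a similarly steep crossover.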

  6. Speech Perception
  Ingram §6

  7. Category boundary shifts
  The shift in VOT is from 'bin' to 'pin':
  • Thus the phonetic feature detectors must compensate for the context. Is that because they know how speech is produced? But Japanese quail do this too.
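
  The compensation can be pictured as the boundary itself moving with context. The toy sketch below is my own; the "slow"/"fast" boundary values are invented purely to show how an identical 25 ms token flips category when the boundary shifts.

      def label(vot_ms, boundary_ms):
          """Classify a stop as voiced ('bin') or voiceless ('pin') by its VOT."""
          return "pin" if vot_ms >= boundary_ms else "bin"

      token = 25.0  # ms VOT, held constant across contexts
      for context, boundary in [("slow speech", 30.0), ("fast speech", 20.0)]:
          print(f"{context}: boundary {boundary:.0f} ms -> '{label(token, boundary)}'")
      # slow speech: boundary 30 ms -> 'bin'
      # fast speech: boundary 20 ms -> 'pin'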

  8. Duplex speech (or perception)
  A and B refer to either ear; B is also called the base.
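
  To make the setup concrete, here is a rough sketch (my construction, not the actual experimental stimuli) of assembling such a dichotic signal: a crude "base" of steady sinusoids in one channel, and an isolated third-formant (F3) transition, heard alone as a chirp, in the other. All frequency values are illustrative.

      import numpy as np
      from scipy.signal import chirp
      from scipy.io import wavfile

      fs = 16000                                                # sampling rate (Hz)
      t = np.linspace(0, 0.05, int(fs * 0.05), endpoint=False)  # 50 ms transition

      # Isolated F3 transition: a falling glide, e.g. 2700 -> 2200 Hz.
      f3_transition = chirp(t, f0=2700, t1=t[-1], f1=2200)

      # Base: steady "formants" (plain sinusoids) with no F3 transition.
      base = sum(np.sin(2 * np.pi * f * t) for f in (500, 1500, 2500)) / 3

      # Dichotic presentation: base (B) in the left channel, transition (A) in the right.
      stereo = np.stack([base, f3_transition], axis=1).astype(np.float32)
      wavfile.write("duplex_sketch.wav", fs, stereo)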

  9. Results
  • Listeners hear a syllable in the ear that gets the base (B), but it is not ambiguous: its identification is determined by which of the nine F3 transitions is presented to the other ear (A).
  • Listeners also hear a non-speech "chirp" in the ear that gets the isolated transition (A).

  10. Implications
  • The fact that the same stimulus is simultaneously part of two quite distinct types of percepts argues that the percepts are produced by separate mechanisms that are both sensitive to the same range of stimuli.
  • Discrimination of the isolated "chirp" and discrimination of the speech percept are quite different, despite the fact that the acoustic event responsible for both is the same.
  • The speech percept exhibits categorical perception; the chirp percept exhibits continuous perception.
  • If the intensity of the isolated transition is lowered below the threshold of hearing, so that listeners cannot reliably tell whether or not it is there on a given trial, it is still capable of disambiguating the speech percept. [HH: hold that thought]

  11. Subsequent research
  • Tried to control for the potential temporal delay of dichotic listening by manipulating the intensity (loudness) of the chirp with respect to the base.
  • Only if the chirp and the base have the same intensity are they perceived as a single speech sound.
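
  "Same intensity" presumably means matched in level. A common operationalization (my assumption, not necessarily the study's procedure) is to equate root-mean-square (RMS) amplitude before presentation:

      import numpy as np

      def match_rms(signal, reference):
          """Rescale `signal` so its RMS amplitude equals that of `reference`."""
          rms = lambda x: np.sqrt(np.mean(np.square(x)))
          return signal * (rms(reference) / rms(signal))

      # e.g., chirp_matched = match_rms(f3_transition, base) before building
      # the stereo stimulus, so neither ear's signal dominates in level.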

  12. Gokcen & Fox (2001)

  13. Discussion
  • Even if the explanation for the latency differences is simply that linguistic and nonlinguistic components must be routed to two different brain areas for processing, and coordinating these two processing sources in order to identify a stimulus takes longer, the data would still be consistent with the contention of separate modules for phonetic and auditory stimuli.
  • We would argue that these data do not support the claim that there is only a single unified cognitive module that processes all auditory information, because the speech-only and duplex stimuli contained identical components and were equal in complexity.

  14. Back to sine-wave speech
  What is this? It is this.
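
  Sine-wave speech replaces each of the first three formants with a single time-varying sinusoid that follows that formant's frequency track, discarding everything else in the signal. The sketch below is my own toy version; real sine-wave speech uses tracks measured from a natural utterance, whereas these are invented to be vaguely [da]-like.

      import numpy as np
      from scipy.io import wavfile

      fs = 16000
      dur = 0.3                                      # 300 ms toy "syllable"
      t = np.linspace(0, dur, int(fs * dur), endpoint=False)

      def tone_from_track(freqs):
          """Synthesize a sinusoid whose instantaneous frequency follows `freqs`."""
          phase = 2 * np.pi * np.cumsum(freqs) / fs  # integrate frequency -> phase
          return np.sin(phase)

      # Invented formant tracks: rising F1, falling F2 and F3.
      f1 = np.linspace(300, 700, t.size)
      f2 = np.linspace(1800, 1200, t.size)
      f3 = np.linspace(2700, 2400, t.size)

      sws = sum(tone_from_track(f) for f in (f1, f2, f3)) / 3
      wavfile.write("sine_wave_speech_sketch.wav", fs, sws.astype(np.float32))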

  15. Dehaene-Lambertz et al. (2005)
  • … used ERP and fMRI to investigate sine-wave [ba]-[da] sounds.
  • For the EEG, the subjects had to be trained to hear the sound as speech.
  • In the MRI, most subjects heard the sound as speech immediately.
  • Switching to the speech mode significantly enhanced activation in the posterior parts of the left superior temporal sulcus.

  16. Summary

  17. NEXT TIME
  P5: Finish Ingram §6; start §7.
  ☞ Go over questions at end of chapter.
