
The Neuroscience of Language



Presentation Transcript


  1. The Neuroscience of Language

  2. What is language? What is it for?
  • Rapid, efficient communication
  • (as such, other kinds of communication might be called language for our purposes and might share underlying neural mechanisms)
  • Two broad but interacting domains:
    • Comprehension
    • Production

  3. Speech comprehension
  • Is an auditory task
    • (but stay tuned for the McGurk Effect!)
  • Is also a selective attention task
    • Auditory scene analysis
  • Is a temporal task
    • We need a way to represent both frequency (pitch) and time when talking about language -> the speech spectrogram

  4. Speech comprehension
  • Is also a selective attention task
    • Auditory scene analysis
    • Which streams of sound constitute speech?
    • Which one stream constitutes the to-be-comprehended speech?
    • Not a trivial problem, because sound waves combine prior to reaching the ear

  5. Speech comprehension
  • Is a temporal task
    • Speech is a time-varying signal
    • It is meaningless to freeze a word in time (as you can with an image)
    • We need a way to consider both frequency (pitch) and time when talking about language -> the speech spectrogram
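The frequency-by-time representation the slide points to can be sketched in a few lines. This is an illustrative example only, using SciPy's `spectrogram` on a synthetic signal (the sampling rate and window parameters are assumptions, not values from the lecture):

```python
# A minimal sketch of computing a spectrogram with SciPy.
# The signal here is a synthetic rising tone standing in for speech.
import numpy as np
from scipy.signal import spectrogram

fs = 16000                      # assumed sampling rate (Hz)
t = np.arange(0, 1.0, 1 / fs)   # 1 second of signal
# Synthetic stand-in for speech: a tone whose pitch rises over time
signal = np.sin(2 * np.pi * (200 + 300 * t) * t)

# Short-time Fourier analysis: each window trades time resolution
# for frequency resolution
freqs, times, Sxx = spectrogram(signal, fs=fs, nperseg=256, noverlap=128)

# Sxx is a 2-D array of power at each (frequency, time) cell --
# exactly the frequency-by-time picture the slide describes
print(Sxx.shape)
```

Plotting `Sxx` (e.g., with `matplotlib`'s `pcolormesh`) on log-power axes gives the familiar speech spectrogram.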

  6. What forms the basis of spoken language?
  • Phonemes
  • Phonemes strung together over time with prosody

  7. What forms the basis of spoken language?
  • Phonemes = smallest perceptual unit of sound
  • Phonemes strung together over time with prosody

  8. What forms the basis of spoken language?
  • Phonemes = smallest perceptual unit of sound
  • Phonemes strung together over time with prosody = the variation of pitch and loudness over the time scale of a whole sentence

  9. What forms the basis of spoken language?
  • Phonemes = smallest perceptual unit of sound
  • Phonemes strung together over time with prosody = the variation of pitch and loudness over the time scale of a whole sentence
  • To visualize these we need slick acoustic analysis software… which I’ve got
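The pitch half of prosody can be illustrated with a simple frame-wise autocorrelation pitch tracker. This is a sketch on a synthetic falling-pitch "sentence", not the lecture's actual analysis software; the frame length, hop, and pitch-range limits are assumed values:

```python
# A minimal sketch of tracking a pitch contour (the pitch component of
# prosody) via frame-wise autocorrelation, on a synthetic signal.
import numpy as np

fs = 16000
t = np.arange(0, 1.0, 1 / fs)
# Synthetic "sentence": pitch falls from 220 Hz to 120 Hz over one second
f0 = 220 - 100 * t
phase = 2 * np.pi * np.cumsum(f0) / fs
signal = np.sin(phase)

frame_len, hop = 1024, 512      # assumed analysis parameters
pitches = []
for start in range(0, len(signal) - frame_len, hop):
    frame = signal[start:start + frame_len]
    # Autocorrelation at non-negative lags
    ac = np.correlate(frame, frame, mode="full")[frame_len - 1:]
    # The first strong peak past lag 0 marks the pitch period;
    # restrict the search to 80-400 Hz (assumed voice range)
    lag_min = fs // 400
    lag = lag_min + np.argmax(ac[lag_min:fs // 80])
    pitches.append(fs / lag)

pitches = np.array(pitches)
# The estimated contour should fall over time, mirroring the synthetic f0
print(pitches[0], pitches[-1])
```

Plotting `pitches` against frame time gives the sentence-scale pitch contour the slide describes; a loudness contour would come from per-frame RMS energy in the same loop.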

  10. What forms the basis of spoken language?
  • The auditory system is inherently tonotopic

  11. Is speech comprehension therefore an image-matching problem?
  • If your brain could just match the picture on the basilar membrane with a lexical object in memory, speech would be comprehended

  12. Problems facing the brain
  • Acoustic-phonetic invariance
    • says that each phoneme should match one and only one pattern in the spectrogram
    • This is not the case! For example, /d/ followed by different vowels:

  13. Problems facing the brain
  • The Segmentation Problem:
    • The stream of acoustic input is not physically segmented into discrete phonemes, words, phrases, etc.
    • Silent gaps don’t always indicate (aren’t perceived as) interruptions in speech

  14. Problems facing the brain
  • The Segmentation Problem:
    • The stream of acoustic input is not physically segmented into discrete phonemes, words, phrases, etc.
    • A continuous speech stream is sometimes perceived as having gaps

  15. How (where) does the brain solve these problems?
  • Note that the brain can’t know whether incoming sound is speech until it has already started processing it!
  • The signal chain goes from non-specific -> specific
  • Neuroimaging has to take the same approach to track down speech-specific regions

  16. Functional Anatomy of Speech Comprehension
  • The low-level auditory pathway is not specialized for speech sounds
  • Both speech and non-speech sounds activate primary auditory cortex (bilateral Heschl’s gyrus) on the top of the superior temporal gyrus

  17. Functional Anatomy of Speech Comprehension
  • Which parts of the auditory pathway are specialized for speech?
  • Binder et al. (2000)
    • fMRI
    • Presented several kinds of stimuli:
      • white noise (non-word-like acoustical properties)
      • pure tones (non-word-like acoustical properties)
      • non-words (word-like acoustical properties but no lexical associations)
      • reversed words (word-like acoustical properties but no lexical associations)
      • real words (word-like acoustical properties and lexical associations)

  18. Functional Anatomy of Speech Comprehension
  • Relative to the “baseline” scanner noise:
    • Widespread auditory cortex activation (bilaterally) for all stimuli
    • Why isn’t this surprising?

  19. Functional Anatomy of Speech Comprehension
  • Statistical contrasts reveal specialization for speech-like sounds
    • superior temporal gyrus
    • somewhat more prominent on the left side
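The logic of such a statistical contrast can be sketched with simulated data. This is an illustrative voxelwise paired t-test, not Binder et al.'s actual preprocessing pipeline, threshold, or correction procedure; all numbers here are made up:

```python
# A minimal sketch of a voxelwise "condition A > condition B" contrast
# on simulated per-subject activation values.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_subjects, n_voxels = 12, 1000

# Simulated activation for two conditions:
# words vs. acoustically matched non-speech (e.g., tones)
tones = rng.normal(0.0, 1.0, size=(n_subjects, n_voxels))
words = tones + rng.normal(0.0, 1.0, size=(n_subjects, n_voxels))
words[:, :50] += 2.0            # pretend the first 50 voxels are speech-selective

# Paired t-test at every voxel: the "words > tones" contrast
t_vals, p_vals = stats.ttest_rel(words, tones, axis=0)

# Keep voxels surviving an (uncorrected, purely illustrative) threshold
speech_selective = np.where((p_vals < 0.001) & (t_vals > 0))[0]
print(len(speech_selective))
```

Subtracting acoustically matched conditions in this way is what lets the contrast isolate speech-specific responses rather than generic auditory activation, which is why all stimuli light up auditory cortex against scanner noise but only the contrasts reveal specialization.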

  20. Functional Anatomy of Speech Comprehension
  • Further, highly sensitive contrasts to identify specialization for words relative to other speech-like sounds revealed only a few small clusters of voxels
  • Brodmann areas:
    • Area 39
    • Areas 20, 21, and 37
    • Areas 46 and 10

  21. Next time we’ll discuss:
  • Speech production
  • Aphasia
  • Lateralization
