Version WS 2007-8. Speech Science IX: How is articulation organized?
Articulatory states vs. articulatory gestures
Speech sound description is based on the positions or states of the articulators, not on their movements.
P.-M. 1.4, 8, pp. 64-78 (Steuerung, 'control')
a) One acoustic event (speech sound) can result from different articulatory configurations
(= “articulatory compensation”)
b) One articulatory configuration can result from different patterns of muscular activity
(= “neuromuscular compensation”)
• Another well-known example of articulatory compensation is the "American /r/".
The tongue may be a) turned back (retroflex) or b) bunched.
• Lip-rounded vowels (like [y]) can be produced with strongly rounded, protruded lips, or with retracted tongue and neutral (or even spread) lips (with or without a lowered larynx).
• Articulatory differences (requiring different commands to muscles) are not only the result of having acquired a particular variant.
• Sounds occur in context, and the gestures are different with every different preceding sound!
• This makes the relationship between one speech sound (in linguistic terms, "a phoneme") and the commands to produce it complicated.
• The motor activity involved in producing speech sounds is much more complex than the (relatively simple) phonetic-phonological categorisation of speech sounds.
• We have to decide whether there is (or can be) any link between linguistic description and production models
…. It would be unfortunate if we had to say that the two had nothing to do with each other!
• The acoustic (perceptual) identity of sounds seems more important than motor equivalence.
• When we learn to articulate, we match our own production to what we hear.
The acoustic patterns from other speakers are our only models …
…… nobody shows us how to move our lips, tongue, velum or larynx …….
• There seems to be an innate ability to imitate people's facial expressions (attributed to so-called "mirror neurons").
This has been systematically observed in very young babies, who mimic their mother's expressions.
• So there may be some visual input as well as the predominant acoustic input to the speech learning process.
But only a small fraction of the articulatory activity is visible/observable.
• Theories of speech production do not always model articulation from a perceptual standpoint.
• Linguistic (phonological) models of sound systems are concerned with the patterns of sound produced, not with the processes that are required to produce them (BHR p. 152 f.)
The IPA system is articulatorily orientated. Distinctive Feature theory (more abstract) can be articulatory or acoustic.
• Continuation of the discussion in Script X.
• James Perkell (2000): A theory of speech motor control and supporting data from speakers with normal hearing and with profound hearing loss. Journal of Phonetics 28, 233-272.
Article for copying in room 4.11 (Phonetics Secretary‘s Office)