Infants’ Discrimination of Speech and Faces: Testing the Predictions of the Intersensory Redundancy Hypothesis
Mariana C. Wehrhahn and Lorraine E. Bahrick
Florida International University
Results & Conclusions
As can be seen in Figure 2, for the rhythm test, significant visual recovery (two-tailed) to the novel stimulus was found in the bimodal condition (n = 11, M = 7.52, SD = 11.42, p = .054) but not in the unimodal condition (n = 14, M = 2.97, SD = 11.35, p = .345). For the face test, the pattern reversed: there was no significant visual recovery in the bimodal condition (n = 11, M = 3.04, SD = 11.14, p = .411), whereas visual recovery was significant in the unimodal condition (n = 14, M = 6.32, SD = 9.88, p = .049).
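The poster does not name the statistical test behind these p-values. Assuming two-tailed one-sample t-tests of mean visual recovery against zero (a common analysis for infant-controlled habituation designs), the reported rhythm-test results can be checked against the summary statistics; the sketch below reproduces the t statistics from the means, standard deviations, and sample sizes given above.

```python
from math import sqrt

def t_statistic(mean, sd, n):
    """One-sample t statistic against a null mean of 0, from summary statistics."""
    return mean / (sd / sqrt(n))

# Rhythm test, bimodal condition (M = 7.52, SD = 11.42, n = 11)
t_bimodal = t_statistic(7.52, 11.42, 11)   # ~2.18, just below t_crit(df=10) = 2.228,
                                           # consistent with the reported p = .054
# Rhythm test, unimodal condition (M = 2.97, SD = 11.35, n = 14)
t_unimodal = t_statistic(2.97, 11.35, 14)  # ~0.98, well below t_crit(df=13) = 2.160,
                                           # consistent with the reported p = .345
```

Note that this is only a consistency check under the one-sample t-test assumption; the actual analysis used by the authors may have differed.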
Findings thus far indicate that 2.5-month-old infants detect a change in the rhythm of speech only when the speech is presented across two sense modalities. Conversely, 2.5-month-old infants discriminate between two faces when the presentation is unimodal but not when it is bimodal. These results show that the predictions of the IRH extend to social events that infants encounter daily, such as the perception of speech and of faces in the context of speech. These early processing biases are likely to influence later perception and learning.
References
Bahrick, L. E., & Lickliter, R. (2000). Intersensory redundancy guides attentional selectivity and perceptual learning in infancy. Developmental Psychology, 36, 190-201.
Bahrick, L. E., & Lickliter, R. (2002). Intersensory redundancy guides early perceptual and cognitive development. In R. Kail (Ed.), Advances in Child Development and Behavior (pp. 153-187). New York: Academic Press.
Bahrick, L. E., Lickliter, R., Vaillant, M., Schuman, M., & Castellanos, I. (2004, May). Infant discrimination of faces in the context of dynamic multimodal events: Predictions of the intersensory redundancy hypothesis.
Abstract
According to the intersensory redundancy hypothesis (IRH), infants’ perception of amodal information (information that can be conveyed by two or more sense modalities) is facilitated in bimodal presentations. Infants have been found to detect changes in rhythm and tempo when these are presented bimodally (audiovisually) but not unimodally (auditorily or visually; Bahrick & Lickliter, 2000, 2002). In contrast, modality-specific information (information that can be perceived through only one sense modality) is better perceived when presented unimodally rather than bimodally. The present study investigates whether the predictions of the IRH apply to the discrimination of rhythms in speech, which relies on amodal information, and the discrimination of faces, which relies on modality-specific information. Twenty-five infants were habituated to one of two rhythms of a nursery rhyme recited by a woman on videotape. They were tested first on whether they could detect a change in the rhythm of speech and then on whether they could detect a change in the face of the speaker. Visual recovery to the novel stimuli was measured. Results supported the IRH, indicating that infants detected the change in the rhythm of speech only when the presentation was bimodal (audiovisual), whereas they detected the change of face only when the presentation was unimodal (visual).
Introduction
According to the intersensory redundancy hypothesis (IRH), when events are presented synchronously across two or more sense modalities, certain properties “pop out” and become highly salient compared with other properties. These “amodal” properties include rhythm, intonation, duration, and tempo (Bahrick & Lickliter, 2000, 2002). The IRH predicts that in early development amodal properties are better perceived when presented across more than one sense modality. Consistent with this, 5-month-old infants detected a change in the complex rhythm of a hammer tapping on a surface when it was presented across two sense modalities (audiovisually) but not when it was presented to only one sense modality (Bahrick & Lickliter, 2000). The IRH further predicts that modality-specific information, that is, information available to only one sense modality (e.g., color, pattern, facial configuration), is better perceived when the presentation is unimodal. Face discrimination is an example of a task that relies on modality-specific information, and a study with 2-month-old infants found that it is facilitated when the face is presented unimodally (Bahrick, Lickliter, Vaillant, Schuman, & Castellanos, 2004). The present study investigates whether the predictions of the IRH apply to the perception of the rhythm of speech (amodal information) and the discrimination of individual faces in the context of speech (modality-specific information). Specifically, we tested whether 2.5-month-old infants can detect a change in the rhythm of a spoken nursery rhyme under bimodal (audiovisual) versus unimodal (visual) presentation, and whether they can discriminate a change in the face of the speaker under each presentation. The bimodal presentation is expected to direct infants’ attention to the rhythm of the speech, facilitating detection of a change in rhythm.
In contrast, under the unimodal visual presentation, infants should not detect a change in the rhythm of speech. Detection of a face change, however, should be facilitated by unimodal rather than bimodal presentation.
Method
Video films were created showing one of two women reciting the nursery rhyme “Mary Had a Little Lamb” in one of two different rhythms. All infants were habituated, using an infant-controlled procedure, to one of the two women reciting the rhyme in one of the two rhythms of speech. Half of the infants were assigned to the bimodal (audiovisual) condition and the other half to the unimodal (visual, silent) condition. Two tests followed habituation: a rhythm-change test and a face-change test. In the rhythm test, test trials differed from habituation trials only in the rhythm of speech; in the face test, test trials differed only in the face of the speaker, as shown in Figure 1. Visual recovery to the novel stimuli was measured. It was predicted that a change in the rhythm of speech would be best discriminated in the bimodal condition (an amodal change), whereas a change in face would be best discriminated in the unimodal condition (a modality-specific change).
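Visual recovery, the dependent measure above, is conventionally computed as looking time on the test trials minus the infant's post-habituation baseline looking. The sketch below illustrates that arithmetic with hypothetical looking times; the function name and the number of trials averaged are assumptions, since the exact trial structure is not specified here.

```python
def visual_recovery(test_looking, posthab_looking):
    """Mean test-trial looking minus mean post-habituation looking (seconds)."""
    baseline = sum(posthab_looking) / len(posthab_looking)
    test = sum(test_looking) / len(test_looking)
    return test - baseline

# Hypothetical looking times (seconds) for one infant:
# the infant looks longer at the novel test stimulus than at the familiar
# habituation stimulus, yielding a positive recovery score.
recovery = visual_recovery(test_looking=[14.0, 11.0],
                           posthab_looking=[6.0, 5.0])  # -> 7.0
```

A positive mean recovery across infants (tested against zero) is what indicates discrimination of the novel stimulus.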
[Figure 2. Mean visual recovery (in seconds) to the rhythm change and the face change, by bimodal and unimodal condition. * p < .05.]
Presented at the First Annual South Florida Undergraduate Psychology Conference at St. Thomas (hosted by Psi Chi, the National Honor Society in Psychology), April 2004, Miami, FL. Requests for reprints should be sent to the first author at [email protected].