INTRODUCTION

• Cochlear implant listeners can achieve scores of 70%-80% in quiet but are particularly challenged by speech in noise. Similarly, normal-hearing non-native listeners recognize speech well in quiet but face the same degree of difficulty as cochlear implant listeners in noise (Cutler et al., 2004). These findings suggest that two mechanisms are responsible for imperfect speech perception: a peripheral mechanism, responsible for the hearing-impaired listeners' difficulty, and a central mechanism, responsible for the non-native listeners' difficulty.

• The poor speech perception in noise in cochlear implant listeners is primarily a result of their inability to encode pitch, due to the limited number of electrodes that can be safely placed in the cochlea and the pitch mismatch that arises from imperfect electrode-to-frequency mapping (Shannon et al., 2001). Normal-hearing listeners using cochlear implant simulations also require higher signal-to-noise ratios to achieve the same scores (Dorman et al., 1998; Fu et al., 1998).

• While hearing-impaired individuals have trouble retrieving the speech signal, non-native listeners have trouble interpreting it. Non-native speech perception is subject to a specific type of interference between the first language and the second language. Results from previous studies of non-native listeners' speech perception are mixed regardless of stimulus or subject type (sentence, word, and phonetic tests).

• Because a significant portion of CI users use another language in everyday listening situations, assessing the linguistic and audiological contributions to cochlear implant speech perception is of clinical significance.

• Non-native listeners were predicted to show a weaker degraded-speech effect on phonemic tests than on sentence tests. Cochlear implant users, on the other hand, have demonstrated no such effect on phonemic vs. sentence tests (Stickney et al., 2005; Bhattacharya and Zeng, 2005). This general pattern was observed in the present study.

METHODS

• PARTICIPANTS: Seven non-native speakers of English, described in Table I below, and six native English speakers. Non-native subjects are bilingual in English and another language but acquired English after adolescence. Preliminary data from bilingual subjects exposed to English and another language from infancy show performance equal to that of monolingual English speakers.

• STIMULI: Two types of stimuli, phonemes and sentences, were used to assess two linguistic functions: acoustic-phonetic identification and segmentation of continuous speech into words. Twenty consonants were tested in /aCa/ context and 11 vowels in /hVd/ context. Subjects heard the consonant or vowel in the given context and then selected the correct response from the set. IEEE sentences were used for the sentence condition.

• DISTORTIONS: Two types of distortion were introduced. The stimuli were either presented in quiet or in speech-spectrum-shaped noise at +20, +10, 0, and –10 dB signal-to-noise ratios (SNR), or processed to simulate a 2-, 4-, 8-, 16-, or 32-channel cochlear implant.

• PROCEDURE: Stimuli were presented binaurally through headphones with subjects seated in an IAC sound booth. For the sentence task, subjects typed their response on a computer keyboard and were encouraged to ask for spellings or to guess if unsure. Responses were collected and automatically scored as the percentage of keywords correct. For the phoneme identification task, subjects heard a phoneme in its carrier context and selected the matching response from the closed set.
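The noise conditions above fix the mixture at a target SNR. As a rough illustration of how that mixing works, here is a minimal sketch in plain Python (the function name `mix_at_snr` is an assumption, and the white noise used in testing is a stand-in; the actual stimuli used speech-spectrum-shaped noise):

```python
import math

def mix_at_snr(speech, noise, snr_db):
    """Return speech + noise, with the noise scaled so that the
    speech-to-noise power ratio equals snr_db (in dB).
    `speech` and `noise` are equal-length lists of samples."""
    def rms(x):
        return math.sqrt(sum(s * s for s in x) / len(x))
    # Target SNR means rms(speech) / rms(scaled noise) = 10**(snr_db / 20)
    gain = rms(speech) / (rms(noise) * 10 ** (snr_db / 20))
    return [s + gain * n for s, n in zip(speech, noise)]
```

At 0 dB SNR the scaled noise has the same RMS level as the speech, and at –10 dB it is roughly three times stronger, which gives a sense of how demanding the hardest listening condition was.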
TABLE I: Non-native listener demographics

FIGURE 1: Cochlear implant, non-native, and native listener scores for IEEE sentences in quiet and at +10, 0, and –10 dB SNR
FIGURE 2: Non-native and native listener scores for IEEE sentences using the cochlear implant simulation (32, 16, 8, 4, and 2 channels)
FIGURE 3: Non-native and native listener scores for consonants in quiet and at +10, 0, and –10 dB SNR
FIGURE 4: Non-native and native listener scores for consonants using the cochlear implant simulation (32, 16, 8, 4, and 2 channels)
FIGURE 5: Non-native and native listener scores for vowels in quiet and at +10, 0, and –10 dB SNR
FIGURE 6: Non-native and native listener scores for vowels using the cochlear implant simulation (32, 16, 8, 4, and 2 channels)

RESULTS

A. Speech recognition for sentences in noise and cochlear implant simulation

• Figure 1 shows that performance increased as a function of signal-to-noise ratio (SNR); non-native and native listener scores are shown for quiet and for the +10, 0, and –10 dB SNR conditions. Scores for native listeners declined 7 percentage points (98% to 91%) from the quiet to the 0 dB SNR condition, while scores for non-native listeners declined 30 percentage points (83% to 53%). The 23-percentage-point gap between these declines indicates that noise affects non-native listeners more adversely than native listeners.

• Figure 2 shows a similar pattern. Performance increased as a function of channel number (2, 4, 8, 16, 32) for both groups. Native listener performance was asymptotic at 8 channels; for non-native listeners, the mean score at the highest channel number (32) was 84%. Scores declined sharply for both groups from 8 to 4 channels: native listeners dropped from 98% (8 channels) to 75% (4 channels), a 23-percentage-point decline, while non-native listeners dropped from 73% (8 channels) to 36% (4 channels), a 37-percentage-point decline, a gap of 14 percentage points between the groups.

B. Speech recognition of vowels and consonants

• Figures 3 through 6 show that non-native and native listeners were affected similarly by degraded speech in the vowel and consonant conditions. Overall, native listener performance was slightly higher than non-native listener performance on both tasks.

SUMMARY

• Compared with normal-hearing native-English listeners, both normal-hearing non-native and cochlear-implant subjects showed significantly degraded performance for sentence recognition in noise.

• Native and non-native subjects showed similar performance in consonant and vowel recognition, but cochlear-implant subjects performed significantly worse on these tasks.

• The present results suggest that two different mechanisms limit sentence recognition in noise: (1) a peripheral mechanism reflecting distorted input or acoustic representation of all types of speech stimuli, as experienced by cochlear-implant subjects, and (2) a central mechanism reflecting linguistic interference or lack of linguistic knowledge at the sentence level, as experienced by non-native, normal-hearing subjects.

• Thus far, non-native listener performance does not show a dependence on demographic variables such as native language, age, years in the US, years speaking English, or formal education in English.

• These results suggest that non-native listeners use a "bottom-up" approach to sentence processing.
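The sentence scores above are the percent-keywords-correct metric described in the Methods, computed automatically from typed responses. A minimal sketch of such a scorer (the function name, normalization rules, and example keyword list are illustrative assumptions, not the study's actual scoring code):

```python
import re

def percent_keywords_correct(response, keywords):
    """Score a typed sentence response as the percentage of target
    keywords it contains, ignoring case and punctuation.
    (Illustrative only; real protocols also handle homophones,
    plurals, and obvious typos.)"""
    words = set(re.findall(r"[a-z']+", response.lower()))
    hits = sum(1 for k in keywords if k.lower() in words)
    return 100.0 * hits / len(keywords)
```

For example, for an IEEE sentence such as "The birch canoe slid on the smooth planks" with keywords birch, canoe, slid, smooth, and planks, the response "the canoe slid" would score 40%.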
• These findings are similar to what Bradlow and Pisoni (1999) observed: relative to native listeners, non-native listeners experienced more difficulty with lexically hard words even when familiarity with these items was controlled, indicating that non-native word identification is compromised when fine phonetic differentiation is required. If non-native listeners have more difficulty at the phonetic level and cannot fill in segments from context or prior knowledge, degraded listening conditions would likely further impinge on their ability to parse and recognize words in sentences.

• Results suggest that sentence tasks will be particularly difficult for CI users whose second language was acquired in adulthood. Anticipating such outcomes allows audiologists to develop clinical protocols that distinguish between phoneme and sentence tasks.

ACKNOWLEDGEMENTS

We are very grateful for the time and dedication our non-native and native listeners have given to this study. This work is supported in part by NIH grant DC002267.

REFERENCES

• Bhattacharya, A., and Zeng, F.G. (2005). Companding to improve cochlear implants' speech processing in noise. Poster presentation, Asilomar, California.
• Bradlow, A.R., and Pisoni, D.B. (1999). Recognition of spoken words by native and non-native listeners: talker-, listener-, and item-related factors. J. Acoust. Soc. Am. Vol. 106(4), 2074-2085.
• Cutler, A., Weber, A., Smits, R., and Cooper, N. (2004). Patterns of English phoneme confusions by native and non-native listeners. J. Acoust. Soc. Am. Vol. 116(6), 3668-3678.
• Dorman, M.F., Loizou, P.C., and Tu, Z. (1998). The recognition of sentences in noise by normal-hearing listeners using simulations of cochlear implant signal processors with 6-20 channels. J. Acoust. Soc. Am. Vol. 104, 3583-3585.
• Fu, Q.J., Shannon, R.V., and Wang, X. (1998). Effects of noise and spectral resolution on vowel and consonant recognition: acoustic and electric hearing. J. Acoust. Soc. Am. Vol. 104, 3586-3596.
• Shannon, R.V., Galvin, J.J., III, and Baskent, D. (2001). Holes in hearing. J. Assoc. Res. Otolaryngol. Vol. 3, 185-199.
• Stickney, G., Zeng, F.G., Litovsky, R., and Assmann, P. (2004). Cochlear implant speech recognition with speech maskers. J. Acoust. Soc. Am.

Linguistic and Audiological Factors in Cochlear Implant Speech Perception. Michelle AuCoin McGuire, Jeff Carroll, and Fan-Gang Zeng.
