
Hearing Aids and Hearing Impairments Part II



Presentation Transcript


  1. Hearing Aids and Hearing Impairments Part II Meena Ramani 02/23/05

  2. Discussion Time!

  3. Summarize • Facts on Hearing Loss: room for improvement, huge market • Hearing Aids: BTE, ITE, ITC, CIC • Cochlea - IHCs and OHCs; OHCs sharpen the traveling wave and provide amplification for soft sounds (40-50 dB SPL) • Presbycusis: hearing loss in aging ears, occurs due to damage to OHCs • Consequences: decreased audibility, decreased frequency resolution, decreased temporal resolution, decreased dynamic range • Implications for HAs: 1) more gain at HFs, 2) less gain at LFs, 3) fast-acting compression, 4) compressive amplification, plus noise removal • Amplification techniques: linear (too much gain) vs. compressive (overshoots/undershoots); multiband vs. singleband

  4. Outline • Temporal Resolution • Frequency Resolution • Noise Reduction Techniques • Conclusion

  5. Temporal Resolution • What is temporal resolution? • What happens to temporal resolution for the HI? • What does poor temporal resolution result in? • Implications for HA design

  6. What is temporal resolution? Speech has a lot of temporal information like the presence or absence of acoustic excitation, the periodicity or aperiodicity of excitation, _______, ________, ______ • Speech Envelope • Slowly varying • Carries information: consonants, voicing, phoneme boundaries, syllable boundaries, stress, etc. • Lip reading and the speech envelope • Modulation Perception • Changing the depth of modulation of the envelope • Noise and reverberation • Gap detection threshold • Psychoacoustic measure • For normals: 2.5 ms • Relationship between gap detection thresholds and SRTs in noise (consonant recognition requires temporal structure)
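A minimal sketch (not from the slides) of how the slowly varying speech envelope mentioned above can be obtained: rectify the waveform and low-pass filter it. The 50 Hz cutoff and the test signal are illustrative assumptions.

```python
# Minimal sketch: extract the slowly varying envelope by full-wave
# rectification and low-pass filtering. Cutoff is an illustrative choice.
import numpy as np
from scipy.signal import butter, filtfilt

def speech_envelope(x, fs, cutoff_hz=50.0):
    """Return the low-pass-filtered magnitude envelope of x."""
    rectified = np.abs(x)                   # full-wave rectification
    b, a = butter(4, cutoff_hz / (fs / 2))  # 4th-order Butterworth low-pass
    return filtfilt(b, a, rectified)        # zero-phase filtering

# Example: a 1 kHz carrier amplitude-modulated at a syllable-like 4 Hz rate
fs = 16000
t = np.arange(0, 1.0, 1 / fs)
x = (1 + 0.8 * np.sin(2 * np.pi * 4 * t)) * np.sin(2 * np.pi * 1000 * t)
env = speech_envelope(x, fs)
```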

  7. Temporal resolution for the HI • Experimental setup for modulation perception: • TMTF - Temporal Modulation Transfer Function • Sinusoidal modulation of broadband noise • Modulation detection threshold as a function of modulation frequency • Comparison with normals: threshold shift / SL • Results: • Poor modulation perception is because of reduced listening bandwidth • Same behavior at equal-SL inputs for normals and HI • Results for gap detection measures: • Normals: GDT reduces as the frequency of the noise bands increases • Same behavior at equal-SL inputs for normals and HI • Signals that are made audible to the HI show the same temporal resolution as for normals
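The TMTF stimulus described above can be generated as in the sketch below: broadband Gaussian noise whose envelope is sinusoidally modulated at frequency fm with modulation index m. The adaptive procedure that varies m to find the detection threshold is omitted; the fm and m values shown are illustrative.

```python
# Minimal sketch of a TMTF stimulus: sinusoidally amplitude-modulated
# broadband Gaussian noise. fm and m below are illustrative values.
import numpy as np

def am_noise(fs, duration_s, fm, m, rng=None):
    rng = np.random.default_rng() if rng is None else rng
    t = np.arange(0, duration_s, 1 / fs)
    noise = rng.standard_normal(t.size)
    return (1 + m * np.sin(2 * np.pi * fm * t)) * noise

stimulus = am_noise(fs=16000, duration_s=0.5, fm=8.0, m=0.3)
```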

  8. Temporal resolution for the HI (contd.) • The difference in loudness level between the envelope maxima and minima is greater for the impaired ear than for the normal ear • This leads one to assume that the impaired ear will perceive modulation-depth changes as more salient / louder • Circles -> equal modulation strength • Does this contradict the TMTF results? • JND vs. perception • Noise is also enhanced <Glasberg>

  9. Temporal resolution for the HI (contd.): Effect of compression on modulation • Use compression to provide loudness correction • Fast-acting / syllabic compression • 3:1 compression • Modulation depth (dB) = 20 log m • AM factor: (1 + m sin(wm t)) • Reduces the modulation depth by ~9.5 dB
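The ~9.5 dB figure follows because ideal fast-acting compression with ratio R divides the envelope's dB fluctuations by R, so for small m the modulation index falls from m to roughly m/R, a reduction of 20 log10(3) ≈ 9.5 dB for 3:1 compression. A small numerical check, written for this transcript rather than taken from the slides:

```python
# Numerical check: ideal instantaneous 3:1 compression of a sinusoidally
# modulated envelope reduces the modulation depth by about 20*log10(3) dB.
import numpy as np

fs, fm, m, ratio = 16000, 8.0, 0.1, 3.0
t = np.arange(0, 1.0, 1 / fs)
env_in = 1 + m * np.sin(2 * np.pi * fm * t)   # AM factor (1 + m sin(wm t))
env_out = env_in ** (1 / ratio)               # dB level divided by the ratio

def mod_index(env):
    """Estimate m from the envelope's peak-to-trough excursion."""
    return (env.max() - env.min()) / (env.max() + env.min())

reduction_db = 20 * np.log10(mod_index(env_in) / mod_index(env_out))
print(f"modulation depth reduction: {reduction_db:.1f} dB")  # ~9.5 dB
```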

  10. Implications for HA design • Syllabic compression can compensate for abnormal sensitivity to AM • This compression also improves the discrimination of envelopes having a DR > 10 dB • But reducing the spectral cues causes low SI in low-SNR conditions

  11. Frequency Resolution • What is Frequency resolution? • What happens to freq. resolution for the HI? • What does poor freq. resolution result in? • Implications for HA design

  12. What is Frequency resolution? If a change in the spectrum of speech causes some change in the shape of the excitation pattern along the basilar membrane, then the change exceeds the listener's frequency resolution; otherwise, the frequency resolution was not fine enough to discriminate the spectral changes.

  13. Frequency resolution for the HI Statement: cochlear damage results in poor freq. resolution. But auditory filter bandwidth also increases with stimulus level… HOW DO YOU MEASURE FREQ. RESOLUTION? • Experimental setup: • Need normals and the HI to be at the same sensation level (SL) • Normals: add broadband background noise to elevate the threshold • Results: • Freq. resolution measured via tuning curves was worse for the HI • More upward spread of masking, since the LF slope of the filter is shallower than for normals • Conclusion: freq. resolution is impaired by both: • A damaged auditory system • The necessity to listen at high stimulus levels

  14. What does poor frequency resolution result in? • Lose spectral cues • Formant peak information is lost • Smooths the internal spectral contrasts • Inability to distinguish between vowels • F1 & F2 frequencies are an important cue for vowel ID • Increase in upward spread of masking • CVR - consonant-vowel recognition • The HI have more problems understanding speech in noise when compared to normals

  15. Implications for HA design Fact: for the HI, we have broader auditory filters • Sharpen spectral contrast • Narrow the BW of spectral peaks • Decrease the level of spectral valleys • Not too much success, since the broad filters overwhelm the sharpening technique • Multiband / wideband design: • Reduction in spectral cues • For multiband, correlate the AGC across bands
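A minimal sketch (my own illustration, not the slides' method) of the contrast-sharpening idea: expand each frame's log-magnitude spectrum about a smoothed version of itself, so peaks are raised and valleys deepened. The smoothing length and the factor alpha are illustrative assumptions.

```python
# Sketch of spectral contrast sharpening: expand deviations of the
# log-magnitude spectrum from its smoothed (coarse) envelope.
import numpy as np

def enhance_contrast(frame, alpha=1.5, smooth_bins=9):
    spec = np.fft.rfft(frame * np.hanning(frame.size))
    log_mag = 20 * np.log10(np.abs(spec) + 1e-12)
    kernel = np.ones(smooth_bins) / smooth_bins
    smoothed = np.convolve(log_mag, kernel, mode="same")  # coarse spectral envelope
    enhanced = smoothed + alpha * (log_mag - smoothed)    # expand deviations from it
    new_mag = 10 ** (enhanced / 20)
    return np.fft.irfft(new_mag * np.exp(1j * np.angle(spec)), n=frame.size)
```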

  16. Noise Reduction HI people have abnormal difficulty understanding speech in noise.

  17. Noise Reduction • The HI need an SNR of 9 dB • Broader auditory filters, reduced suppression • Upward spread of masking • SNR improvement doesn't often correlate with improved SI! • Noise removal algorithms • Single-microphone techniques • Multi-microphone techniques

  18. Single-microphone techniques: General considerations • A single stream has speech + noise • Need to evaluate continuously which frames have speech and which have noise • Improvements in SNR do not relate directly to improvements in SI • Need to evaluate the performance of an algorithm using listening (SI) tests

  19. Single-microphone techniques: Frequency-specific gain reduction • BILL - Bass Increase at Low Levels • For noise reduction: bass decrease at high levels • Reduces the LFs when the average level in that region is high • Theoretically this should help, since for the HI the LFs mask the HFs • But the LFs carry information about consonant features such as nasality, voicing, etc., which would be lost • Cook et al. (1996) showed that if the noise is LF, then high-pass filtering the speech resulted in a significant improvement in SI • Festen et al. (1990): envelope-minima technique - reduce the gain per band so that the envelope minima (noise) are closer to the hearing threshold level • Dynamic-range-based technique: attenuation for a noise band is inversely proportional to the measured DR
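A simplified sketch in the spirit of the envelope-minima idea: treat the envelope minimum in each band as a noise-floor estimate and attenuate bands where that floor sits high relative to the band's peaks. Band edges, percentiles, and the attenuation rule are illustrative assumptions, not the published procedures cited above.

```python
# Sketch of frequency-specific gain reduction driven by per-band envelope
# statistics. All parameters are illustrative assumptions.
import numpy as np
from scipy.signal import butter, sosfilt

def band_gain_reduction(x, fs, band_edges=(100, 500, 1000, 2000, 4000),
                        max_atten_db=12.0):
    out = np.zeros_like(x, dtype=float)
    for lo, hi in zip(band_edges[:-1], band_edges[1:]):
        sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
        band = sosfilt(sos, x)
        env = np.abs(band)
        floor = np.percentile(env, 5)    # envelope minima ~ noise floor
        peak = np.percentile(env, 95)    # envelope maxima ~ speech peaks
        snr_db = 20 * np.log10((peak + 1e-12) / (floor + 1e-12))
        atten_db = np.clip(max_atten_db - snr_db, 0.0, max_atten_db)
        out += band * 10 ** (-atten_db / 20)  # attenuate low-SNR bands more
    return out
```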

  20. Single-microphone techniques: Spectral subtraction • Subtract the spectral magnitude of the noise estimate from the short-term spectral magnitude of the signal • Assumes stationary noise • Uses the noisy signal's phase for the final noise-reduced signal • SNR improves but SI stays the same • Removes noise-like cues required for fricatives
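A minimal sketch of the magnitude spectral subtraction described above: average the noise magnitude over noise-only frames, subtract it from each frame's short-term magnitude, floor at zero, and resynthesize with the noisy phase. Frame length and hop size are illustrative choices.

```python
# Sketch of magnitude spectral subtraction with overlap-add resynthesis.
import numpy as np

def spectral_subtraction(noisy, noise_frames, frame_len=512, hop=256):
    window = np.hanning(frame_len)
    noise_mag = np.mean(
        [np.abs(np.fft.rfft(f * window)) for f in noise_frames], axis=0)

    out = np.zeros(len(noisy))
    for start in range(0, len(noisy) - frame_len, hop):
        frame = noisy[start:start + frame_len] * window
        spec = np.fft.rfft(frame)
        mag = np.maximum(np.abs(spec) - noise_mag, 0.0)   # subtract and floor
        clean = np.fft.irfft(mag * np.exp(1j * np.angle(spec)), n=frame_len)
        out[start:start + frame_len] += clean * window    # overlap-add
    return out
```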

  21. Multiple-microphone techniques • What is array processing? • Omnidirectional microphones: 15 mm separation between any two • Low-frequency roll-off of 6 dB/octave Figure 7.16. Two directional patterns typically associated with hearing aid directional microphones. The angle represents the direction from which the sound approaches the listener, with 0 degrees representing directly in front of the listener. The distance from the origin at a given angle represents the gain applied to sound arriving from that direction, ranging here from 0 to 25 dB. The patterns are a cardioid (left) and a hypercardioid (right). Figure 7.15. A typical configuration for a two-microphone (Mic) directional system. The delay to the back microphone determines the angle of the null in the directional pattern.
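The delay-and-subtract behaviour behind Figures 7.15 and 7.16 can be sketched as follows (my own illustration, with assumed spacing and frequency): the back microphone is delayed by an internal delay T and subtracted from the front one; T = d/c gives a cardioid with its null at 180 degrees, and shorter delays move the null, producing hypercardioid-like patterns.

```python
# Sketch of the directional pattern of a two-microphone delay-and-subtract
# system. delay_ratio = 1 gives a cardioid; delay_ratio = 1/3 puts the null
# near 110 degrees (hypercardioid-like). Parameters are illustrative.
import numpy as np

def directional_pattern(d=0.015, c=343.0, f=1000.0, delay_ratio=1.0):
    """Return (angle_deg, magnitude) for internal delay T = delay_ratio * d/c."""
    theta = np.radians(np.arange(361))
    w = 2 * np.pi * f
    tau_acoustic = (d / c) * np.cos(theta)  # inter-microphone arrival-time difference
    T = delay_ratio * d / c                 # internal delay applied to the back mic
    response = 1 - np.exp(-1j * w * (tau_acoustic + T))
    return np.arange(361), np.abs(response)
```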

  22. Beamforming • Delay and sum • Frequency dependent • Frequency independent
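A minimal sketch of the frequency-independent delay-and-sum idea for a two-microphone endfire pair: delay the front microphone by the acoustic travel time between the microphones so that sound from straight ahead adds coherently. The 15 mm spacing matches the slide; rounding the delay to whole samples is a simplification (a real implementation would use fractional-delay filtering).

```python
# Sketch of delay-and-sum beamforming for a two-microphone endfire array.
import numpy as np

def delay_and_sum(front, back, fs, spacing_m=0.015, c=343.0):
    delay_samples = int(round(spacing_m / c * fs))   # inter-mic travel time
    delayed_front = np.concatenate(
        [np.zeros(delay_samples), front])[:len(front)]
    return 0.5 * (delayed_front + back)              # average the aligned signals
```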

  23. Comparison with noise suppressor • Noise suppressor (NS) is the standard one used on iDEN phones

  24. Noise Cancellation • Use the LMS algorithm • The problem is that some speech leaks into the reference mic and gets canceled along with the noise. Figure 7.17. A typical two-microphone noise cancellation system. Ideally, the primary microphone measures a mixture of the interfering noise and the target speech, and the reference microphone measures only a transformation of the interfering noise.
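A minimal sketch of the two-microphone LMS noise canceller of Figure 7.17: the reference signal is adaptively filtered to predict the noise in the primary signal, and the prediction error is the cleaned output. Filter length and step size are illustrative; mu must be small enough for stability given the reference signal power.

```python
# Sketch of an LMS adaptive noise canceller (sample-by-sample update).
import numpy as np

def lms_cancel(primary, reference, n_taps=32, mu=0.01):
    w = np.zeros(n_taps)                    # adaptive filter weights
    out = np.zeros(len(primary))
    for n in range(n_taps, len(primary)):
        x = reference[n - n_taps:n][::-1]   # most recent reference samples first
        y = np.dot(w, x)                    # noise estimate at the primary mic
        e = primary[n] - y                  # error = cleaned output sample
        w += mu * e * x                     # LMS weight update
        out[n] = e
    return out
```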

  25. Conclusion • Parameter selection and fitting is a very difficult problem • Algorithms can make the sound more audible but not more intelligible • IHCs have been ignored so far, but they could have a role too • It is difficult to get subjective scores from HI populations • No objective method can account for the non-linearities introduced by compression • Wearable HAs are an option for research but are inconvenient

  26. Array fundamentals • Speaker tracking is not possible with a single microphone • Multiple microphones facilitate spatiotemporal filtering • The setup consists of two microphones, with the first microphone taken as the origin • Distance of the wavefront from the microphone is … • The direction of the source is given by …
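The two expressions referred to above were not captured in the transcript. For reference, the standard far-field relations for a two-microphone array are shown below (my reconstruction, assuming the angle θ is measured from broadside, microphone spacing d, and speed of sound c):

```latex
% Extra path length to the second microphone, the resulting inter-microphone
% delay, and the source direction recovered from that delay.
\[
  \Delta = d \sin\theta, \qquad
  \tau = \frac{d \sin\theta}{c}, \qquad
  \theta = \arcsin\!\left(\frac{c\,\tau}{d}\right)
\]
```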
