  1. Voice Biometrics Juan Ortega 10/20/09 NTS490

  2. Speaker recognition and Speech recognition Speaker recognition is the computing task of validating a user's claimed identity using characteristics extracted from their voice. Speaker recognition recognizes who is speaking, whereas speech recognition recognizes what is being said. Voice recognition is a combination of the two: it uses learned aspects of a speaker's voice to determine what is being said.

  3. History Speaker verification has co-evolved with the technologies of speech recognition and speech synthesis (TTS) because of the similar characteristics and challenges associated with each. 1960 – Gunnar Fant, a Swedish professor, published a model describing the physiological components of acoustic speech production, based on the analysis of x-rays of individuals making specified phonic sounds. 1970 – Dr. Joseph Perkell used motion x-rays and included the tongue and jaw to expand on Fant's model. Original speaker recognition systems used the average output of several analog filters to perform matching – often aided by humans.

  4. History cont. 1976 – Texas Instruments built a prototype system that was tested by the U.S. Air Force and The MITRE Corporation. Mid 1980s – The National Institute of Standards and Technology (NIST) developed the NIST Speech Group to study and promote the use of speech processing techniques. Since 1996 – With funding from the NSA, the NIST Speech Group has hosted yearly evaluations, the NIST Speaker Recognition Workshop, to foster the continued advancement of the speaker recognition community.

  5. Differences in Voice The physiological component of voice recognition is related to the physical shape of an individual's vocal tract, which consists of an airway and the soft tissue cavities from which vocal sounds originate. The acoustic patterns of speech come from the physical characteristics of these airways. Motion of the mouth and pronunciation are the behavioral components of this biometric. The source sound is altered as it travels through the vocal tract, which is configured differently based on the position of the tongue, lips, mouth, and pharynx.

  6. Approach Speech samples are waveforms with time on the horizontal axis and loudness on the vertical axis. The speaker recognition system analyzes the frequency content of the speech and compares characteristics such as the quality, duration, intensity, dynamics, and pitch of the signal. Voice Print - Spectrogram
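
The frequency analysis described above can be illustrated with a short sketch. It is not from the slides: a minimal NumPy spectrogram (short-time Fourier transform magnitudes), with frame and hop sizes chosen only for illustration.

```python
import numpy as np

def spectrogram(signal, sample_rate, frame_ms=25, hop_ms=10):
    """Magnitude spectrum of each short, windowed frame of the signal.

    frame_ms and hop_ms are illustrative defaults, not values from the slides.
    """
    frame_len = int(sample_rate * frame_ms / 1000)
    hop_len = int(sample_rate * hop_ms / 1000)
    window = np.hanning(frame_len)
    frames = []
    for start in range(0, len(signal) - frame_len + 1, hop_len):
        frame = signal[start:start + frame_len] * window
        frames.append(np.abs(np.fft.rfft(frame)))  # magnitude spectrum of one frame
    return np.array(frames)  # shape: (num_frames, frame_len // 2 + 1)

# Example: a one-second 440 Hz tone sampled at 16 kHz
sr = 16000
t = np.arange(sr) / sr
print(spectrogram(np.sin(2 * np.pi * 440 * t), sr).shape)
```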

  7. Speech Recognition
  "recognize speech":   r eh k ao g n ay z   s p iy ch
  "wreck a nice beach": r eh k ay n ay s   b iy ch

  8. Speaker authentication and Speaker identification Two major applications of speaker recognition technologies and methodologies exist. Speaker authentication or verification is the task of validating the identity that a speaker claims. Verification is a 1:1 match, where one speaker's voice is matched against one template (called a "voice print" or "voice model"). Speaker identification is the task of determining an unknown speaker's identity. Identification is a 1:N match, where the voice is compared against N templates.
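
The 1:1 versus 1:N distinction can be made concrete with a small sketch. It assumes each voice sample has already been reduced to a fixed-length feature vector (an assumption, since the slides do not specify the representation) and uses cosine similarity with an arbitrary threshold.

```python
import numpy as np

def similarity(sample, template):
    """Cosine similarity between two fixed-length voice feature vectors."""
    return float(np.dot(sample, template) /
                 (np.linalg.norm(sample) * np.linalg.norm(template)))

def verify(sample, claimed_template, threshold=0.8):
    """Verification: 1:1 match, accept or reject the claimed identity."""
    return similarity(sample, claimed_template) >= threshold

def identify(sample, templates):
    """Identification: 1:N match, return the best-scoring enrolled speaker."""
    return max(templates, key=lambda name: similarity(sample, templates[name]))
```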

  9. Text-Dependent, Text-Independent and Text-Prompted Methods Text-dependent methods require the speaker to provide utterances (speak) of key words or sentences, with the same text used for both training and recognition. Text-independent methods are used when predetermined key words cannot be used; human beings recognize speakers irrespective of the content of the utterance. Text-prompted methods prompt each user with a new key sentence every time the system is used.

  10. Normalization and Adaptation Techniques How can a speaker recognition system normalize the variation of likelihood values in speaker verification? To compensate for these variations, two types of normalization techniques have been tried: parameter domain and likelihood domain. Adaptation of the reference model, as well as the verification threshold for each speaker, is indispensable to maintaining high recognition accuracy over a long period.

  11. Normalization and Adaptation Techniques cont. Parameter domain Spectral equalization ("blind equalization") has been confirmed to be effective in reducing linear channel effects and long-term spectral variation. This method is especially effective for text-dependent speaker recognition applications using sufficiently long utterances. Likelihood domain The likelihood ratio is the ratio of the conditional probability of the observed measurements of the utterance given that the claimed identity is correct to the conditional probability of the observed measurements given that the speaker is an impostor. The a posteriori probability method calculates this probability using a set of speakers that includes the claimed speaker.
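
As a concrete example of parameter-domain normalization, the sketch below subtracts the long-term cepstral mean from each utterance. Cepstral mean normalization is one common form of blind equalization; the slides do not name a specific algorithm, so this is an illustrative choice.

```python
import numpy as np

def cepstral_mean_normalization(features):
    """Subtract the per-utterance mean of each cepstral coefficient.

    features: (num_frames, num_coeffs) array. A stationary linear channel
    adds a roughly constant offset in the cepstral domain, so removing the
    long-term mean suppresses that channel effect.
    """
    return features - features.mean(axis=0, keepdims=True)
```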

  12. Enrollment The quality/duration/loudness/pitch features are extracted from the submitted sample. The extracted sample is compared against the claimed identity's model and other-speaker models. The other-speaker models contain the "states" of a variety of individuals, not including the claimed identity. The input voice sample and enrolled models are compared to produce a "likelihood ratio" indicating the likelihood that the input sample came from the claimed speaker.
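
A minimal sketch of the likelihood-ratio decision follows. It stands in a single diagonal-Gaussian model for the claimed speaker and for the other-speakers (background) model; real systems typically use GMMs or HMMs, and the threshold here is a placeholder.

```python
import numpy as np

def gaussian_log_likelihood(frames, mean, var):
    """Average per-frame log-likelihood under a diagonal Gaussian model."""
    diff = frames - mean
    return (-0.5 * (np.log(2 * np.pi * var) + diff ** 2 / var).sum(axis=1)).mean()

def likelihood_ratio_score(frames, claimed_model, background_model):
    """log p(X | claimed speaker) - log p(X | other speakers)."""
    return (gaussian_log_likelihood(frames, *claimed_model)
            - gaussian_log_likelihood(frames, *background_model))

# claimed_model and background_model are (mean, var) pairs over the feature
# dimensions. Accept the identity claim only if the score clears a threshold:
# accept = likelihood_ratio_score(frames, claimed_model, background_model) >= threshold
```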

  13. Updating Models and A Priori Threshold for Speaker Verification How can speaker models be updated to cope with gradual changes in people's voices? It is necessary to build each speaker model from a small amount of data collected in a few sessions, and then to update the model using speech data collected while the system is used. The reference template for each speaker is updated by averaging new utterances with the present template after time registration. These methods have been extended and applied to text-independent and text-prompted speaker verification using HMMs.
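
One simple way to realize the template update described above is a running weighted average, sketched below. It assumes the new utterance's features have already been time-registered to the template and have the same shape; the adaptation weight is illustrative, not from the slides.

```python
import numpy as np

def update_template(current_template, aligned_new_features, weight=0.1):
    """Blend the stored template with features from a newly accepted utterance.

    A small weight adapts slowly, tracking gradual voice changes without
    letting a single noisy session dominate the speaker model.
    """
    return (1.0 - weight) * current_template + weight * aligned_new_features
```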

  14. Underlying variations Hidden Markov Models (HMMs) are stochastic models that provide a statistical representation of the sounds produced by an individual. The HMM represents the underlying variations and temporal changes found in the speech states using quality/duration/intensity dynamics/pitch characteristics. The Gaussian Mixture Model (GMM) is a state-mapping model closely related to the HMM and is often used for text-independent recognition. It uses the speaker's voice to create a number of vector "states" representing the various sound forms. These methods all compare the similarities and differences between the input voice and the stored voice "states" to produce a recognition decision.
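
The GMM idea of representing a speaker as a set of weighted sound "states" can be sketched as follows: scoring one feature frame against an already-trained diagonal-covariance GMM (the training step itself is omitted here).

```python
import numpy as np

def gmm_log_likelihood(frame, weights, means, variances):
    """Log-likelihood of one feature frame under a diagonal-covariance GMM.

    Each mixture component plays the role of one learned "state" of the
    speaker's voice; weights, means, and variances come from training.
    """
    component_log_probs = []
    for w, mu, var in zip(weights, means, variances):
        diff = frame - mu
        component_log_probs.append(
            np.log(w) - 0.5 * np.sum(np.log(2 * np.pi * var) + diff ** 2 / var))
    m = max(component_log_probs)  # log-sum-exp for numerical stability
    return m + np.log(sum(np.exp(p - m) for p in component_log_probs))
```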

  15. Uses • Some companies use voiceprint recognition so people can gain access to information or give authorization without being physically present. • Instead of stepping up to an iris scanner or hand geometry reader, someone can give authorization by making a phone call. • Unfortunately, people can bypass some systems, particularly those that work by phone, with a simple recording of an authorized person's password. That's why some systems use several randomly-chosen voice passwords or use general voiceprints instead of prints for specific words.

  16. Weaknesses • Except for text-prompted systems, speaker recognition systems are susceptible to spoofing attacks through the use of recorded voice. • Text-dependent systems are less suitable for public use. • Noise in the background can be disruptive, although equalizers may be used to mitigate this problem. • Text-independent recognition is still under research, although methods have been proposed that calculate rhythm, speed, modulation, and intonation, based on personality type and parental influence. • Authentication is based on ratios and probability. • Frequent re-enrollment is needed to deal with voice changes. • Someone who is deaf or mute cannot use this type of biometric.

  17. Advantages
  • All you need is software and a microphone.
  • Many methods have been proposed:
    • Text-Dependent: DTW-Based Methods (see the sketch below), HMM-Based Methods
    • Text-Independent: Long-Term-Statistics-Based Methods, VQ-Based Methods, Ergodic-HMM-Based Methods, Speech-Recognition-Based Methods
  • Fast authentication.
  • Authorization can be given remotely, for example over the phone.
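
Of the text-dependent methods listed above, DTW is simple enough to sketch in full. The version below computes a dynamic time warping distance between two feature sequences (lower means a better match); it is a textbook formulation, not taken from the slides.

```python
import numpy as np

def dtw_distance(a, b):
    """Dynamic time warping distance between feature sequences a and b.

    a, b: (num_frames, num_coeffs) arrays. DTW aligns the two sequences,
    compensating for differences in speaking rate before comparing them.
    """
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = np.linalg.norm(a[i - 1] - b[j - 1])   # local frame distance
            cost[i, j] = d + min(cost[i - 1, j],      # skip a frame in b
                                 cost[i, j - 1],      # skip a frame in a
                                 cost[i - 1, j - 1])  # match the frames
    return float(cost[n, m])
```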

  18. YouTube http://www.youtube.com/watch?v=0ec1Gtnlq1k

  19. Resources
  Speaker recognition. Retrieved October 20, from Wikipedia web site: http://en.wikipedia.org/wiki/Speaker_recognition
  Furui, S. (2008). Speaker Recognition. Retrieved October 20, from Scholarpedia web site: http://www.scholarpedia.org/article/Speaker_recognition#DTW-Based_Methods
  The Speaker Recognition Homepage. Retrieved October 20, from speaker-recognition web site: http://www.speaker-recognition.org/
  (2006). Speaker Recognition. Retrieved October 20, from biometrics.gov web site: http://www.biometrics.gov/Documents/SpeakerRec.pdf
  HowStuffWorks "How speech recognition works". Retrieved October 21, from HowStuffWorks web site: http://electronics.howstuffworks.com/gadgets/high-tech-gadgets/speech-recognition.htm/printable
  Wilson, T. HowStuffWorks "Voiceprints". Retrieved October 21, from HowStuffWorks web site: http://science.howstuffworks.com/biometrics3.htm
