CS 188: Artificial Intelligence Spring 2007

Presentation Transcript


  1. CS 188: Artificial Intelligence, Spring 2007 Speech Recognition 03/20/2007 Srini Narayanan – ICSI and UC Berkeley

  2. Announcements • Midterm graded • Median 78 • Mean 75 • 5 scores of 100+ • 25% above 90 • HW 5 up today • BN inference • Due 4/9 (2 weeks + Spring break)

  3. Hidden Markov Models • Hidden Markov models (HMMs) • Underlying Markov chain over states X • You observe outputs (effects) E at each time step • As a Bayes' net: a chain of states X1 X2 X3 X4 X5, each emitting an observation E1 E2 E3 E4 E5 • Several questions you can answer for HMMs: • Last time: filtering to track belief about the current X given evidence • Last time: Viterbi estimation to compute the most likely sequence
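
The Bayes' net sketched on this slide corresponds to the standard HMM factorization over T time steps (written out here for reference):

  P(X_{1:T}, E_{1:T}) = P(X_1)\,P(E_1 \mid X_1) \prod_{t=2}^{T} P(X_t \mid X_{t-1})\,P(E_t \mid X_t)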

  4. Real HMM Examples • Speech recognition HMMs: • Observations are acoustic signals (continuous valued) • States are specific positions in specific words (so, tens of thousands) • Machine translation HMMs: • Observations are words (tens of thousands) • States are translation positions (dozens) • Robot tracking: • Observations are range readings (continuous) • States are positions on a map (continuous)

  5. The Speech Recognition Problem • We want to predict a sentence given an acoustic sequence • The noisy channel approach: • Build a generative model of production (encoding) • To decode, we use Bayes' rule (written out below) • Now, we have to find a sentence maximizing this product
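
The equation on the original slide is an image that does not survive in this transcript; the standard noisy-channel decomposition it refers to, with w ranging over candidate sentences and a the acoustic sequence, is:

  \hat{w} = \arg\max_{w} P(w \mid a) = \arg\max_{w} \frac{P(a \mid w)\, P(w)}{P(a)} = \arg\max_{w} P(a \mid w)\, P(w)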

  6. The noisy channel model • Ignoring the denominator leaves us with two factors: P(Source) and P(Signal|Source)

  7. Speech Recognition Knowledge Sources • Acoustic Model: describes the sounds that make up speech • Lexicon: describes which sequences of speech sounds make up valid words • Language Model: describes the likelihood of various sequences of words being spoken

  8. Digitizing Speech

  9. Speech in an Hour • Speech input is an acoustic waveform • (Waveform of "speech lab", segmented as s p ee ch l a b, with a zoom on the "l" to "a" transition) • Graphs from Simon Arnfield's web tutorial on speech, Sheffield: http://www.psyc.leeds.ac.uk/research/cogn/speech/tutorial/

  10. She just had a baby • What can we learn from a wavefile? • Vowels are voiced (vocal cords vibrate), long, loud • Length in time = length in space in the waveform picture • Voicing: regular peaks in amplitude • When a stop is closed: no peaks, silence • Peaks = voicing: .46 to .58 (vowel [i]), .65 to .74 (vowel [u]), and so on • Silence of stop closure (1.06 to 1.08 for the first [b], 1.26 to 1.28 for the second [b]) • Fricatives show an intense, irregular pattern; see .33 to .46

  11. Spectral Analysis • Frequency gives pitch; amplitude gives volume • Sampling at ~8 kHz for phone, ~16 kHz for mic (kHz = 1000 cycles/sec) • Fourier transform of the wave, displayed as a spectrogram • Darkness indicates energy at each frequency • (Spectrogram of "speech lab": s p ee ch l a b, with time on the x-axis, frequency on the y-axis, and amplitude shown as darkness)

  12. Adding 100 Hz + 1000 Hz Waves

  13. Spectrum • Frequency components (100 and 1000 Hz) on the x-axis • (Plot of amplitude vs. frequency in Hz, with peaks at 100 and 1000)

  14. Back to Spectra • The spectrum represents these frequency components • Computed by the Fourier transform, an algorithm which separates out each frequency component of a wave (a small sketch follows below) • The x-axis shows frequency, the y-axis shows magnitude (in decibels, a log measure of amplitude) • Peaks at 930 Hz, 1860 Hz, and 3020 Hz
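
As a concrete (if toy) illustration of slides 12-14, the sketch below builds the 100 Hz + 1000 Hz wave and recovers its two frequency components with a Fourier transform. This is a minimal NumPy example written for this transcript, not code from the lecture.

```python
import numpy as np

fs = 8000                       # sampling rate in Hz (telephone-quality, as on slide 11)
t = np.arange(0, 0.5, 1 / fs)   # half a second of sample times

# Sum of a 100 Hz and a 1000 Hz sine wave (slide 12)
wave = np.sin(2 * np.pi * 100 * t) + 0.5 * np.sin(2 * np.pi * 1000 * t)

# Magnitude spectrum via the FFT (slides 13-14); rfft keeps the non-negative frequencies
spectrum = np.abs(np.fft.rfft(wave))
freqs = np.fft.rfftfreq(len(wave), 1 / fs)

# The two largest peaks sit at the component frequencies
peak_freqs = np.sort(freqs[np.argsort(spectrum)[-2:]])
print(peak_freqs)               # -> [ 100. 1000.]
```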

  15. Articulation Process • Articulatory facts: • Vocal cord vibrations create harmonics • The mouth is a selective amplifier • Depending on shape of mouth, some harmonics are amplified more than others

  16. Vowel [i] sung at successively higher pitch (seven panels) • Figures from slides by Ratree Wayland

  17. Acoustic Feature Sequence • Time slices are translated into acoustic feature vectors (~39 real numbers per slice) • These are the observations; now we need the hidden states X • (Spectrogram with the observation sequence … e12 e13 e14 e15 e16 … marked along the time axis)
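
A hedged sketch of the "time slices" idea: chop the waveform into short overlapping frames and compute one feature vector per frame. The 25 ms window / 10 ms hop and the log-spectrum features are typical placeholder choices, not values taken from the slide (real systems use ~39-dimensional MFCC-style features).

```python
import numpy as np

def frame_signal(wave, fs, win_ms=25, hop_ms=10):
    """Split a waveform into overlapping frames (one observation per frame)."""
    win = int(fs * win_ms / 1000)
    hop = int(fs * hop_ms / 1000)
    starts = range(0, len(wave) - win + 1, hop)
    return np.stack([wave[s:s + win] for s in starts])

def toy_features(frames):
    """Stand-in for real acoustic features: log magnitude spectrum per frame."""
    return np.log(np.abs(np.fft.rfft(frames, axis=1)) + 1e-10)

fs = 16000
wave = np.random.randn(fs)              # one second of fake audio
obs = toy_features(frame_signal(wave, fs))
print(obs.shape)                        # roughly 100 frames, one feature vector each
```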

  18. State Space • P(E|X) encodes which acoustic vectors are appropriate for each phoneme (each kind of sound) • P(X|X’) encodes how sounds can be strung together • We will have one state for each sound in each word • From some state x, can only: • Stay in the same state (e.g. speaking slowly) • Move to the next position in the word • At the end of the word, move to the start of the next word • We build a little state graph for each word and chain them together to form our state space X
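
A minimal sketch (not the course's code) of the transition structure just described: each sound in each word gets a state that either loops on itself or advances, and the last state of a word jumps to the start of some next word. The two-word lexicon and the probabilities are made-up placeholders.

```python
# Hypothetical lexicon mapping words to their sound sequences
LEXICON = {"nice": ["n", "ay", "s"], "beach": ["b", "iy", "ch"]}

def build_transitions(lexicon, p_stay=0.5):
    """Return a dict: (word, position) -> list of ((word, position), probability)."""
    trans = {}
    words = list(lexicon)
    for word, phones in lexicon.items():
        for pos in range(len(phones)):
            state = (word, pos)
            if pos + 1 < len(phones):
                nxt = [((word, pos + 1), 1 - p_stay)]           # advance within the word
            else:                                               # word end: start a next word
                nxt = [((w, 0), (1 - p_stay) / len(words)) for w in words]
            trans[state] = [(state, p_stay)] + nxt              # self-loop = speaking slowly
    return trans

transitions = build_transitions(LEXICON)
print(transitions[("nice", 2)])   # end of "nice": stay, or start "nice"/"beach"
```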

  19. HMMs for Speech

  20. Phones are not homogeneous!

  21. Each phone has 3 subphones

  22. Resulting HMM word model for “six”

  23. ASR Lexicon: Markov Models

  24. Speech Architecture meets Noisy Channel

  25. Search space with bigrams

  26. Markov Process with Bigrams • Figure from Huang et al., page 618

  27. Decoding • While there are some practical issues, finding the words given the acoustics is an HMM inference problem • Here the state sequence is the sequence of phones • Observations are the acoustic vectors • We want to know which state sequence x1:T is most likely given the evidence e1:T:
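
The formula the slide ends with does not survive in the transcript; in standard notation the query is

  x^*_{1:T} = \arg\max_{x_{1:T}} P(x_{1:T} \mid e_{1:T}) = \arg\max_{x_{1:T}} P(x_{1:T}, e_{1:T}),

where the second equality holds because P(e_{1:T}) is the same for every candidate sequence.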

  28. Viterbi Algorithm • Question: what is the most likely state sequence given the observations? • Slow answer: enumerate all possibilities • Better answer: cached incremental version
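
A minimal sketch of the "cached incremental version" (the Viterbi recursion) for a generic discrete HMM. The dictionary-based interface (start_p, trans_p, emit_p) is a placeholder chosen for this transcript, not the course's notation; real systems work with sums of log probabilities instead of products to avoid underflow.

```python
def viterbi(obs, states, start_p, trans_p, emit_p):
    """Most likely state sequence x_1..x_T given observations e_1..e_T.

    start_p[x], trans_p[x][x2], emit_p[x][e] are plain dicts of probabilities.
    """
    # best[t][x] = probability of the best path that ends in state x at time t
    best = [{x: start_p[x] * emit_p[x][obs[0]] for x in states}]
    back = [{}]
    for t in range(1, len(obs)):
        best.append({})
        back.append({})
        for x in states:
            prev, p = max(((xp, best[t - 1][xp] * trans_p[xp][x]) for xp in states),
                          key=lambda pair: pair[1])
            best[t][x] = p * emit_p[x][obs[t]]
            back[t][x] = prev
    # Trace back from the best final state
    last = max(best[-1], key=best[-1].get)
    path = [last]
    for t in range(len(obs) - 1, 0, -1):
        path.append(back[t][path[-1]])
    return list(reversed(path))
```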

  29. "It's not easy to wreck a nice beach." / "It's not easy to recognize speech."

  30. Continual Progress in Speech Recognition: Increasingly Difficult Tasks, Steadily Declining Error Rates • (Chart of word error rate (%) vs. year, 1988–1998, for read speech with 1000-, 5000-, and 20,000-word vocabularies, broadcast news with unlimited vocabulary, and English and non-English conversational speech, across varied/standard microphones and noisy environments; all results are speaker-independent. Source: NSA/Wayne/Doddington)

  31. Current error rates

  Task                       Vocabulary   Error Rate (%)
  Digits                     11           0.55
  WSJ read speech            5K           3.0
  WSJ read speech            20K          <6.6
  Broadcast news             64,000+      9.9
  Conversational Telephone   64,000+      20.7

  32. Dynamic Bayes Nets • DBN = multiple hidden state variables • Each state is a BN

  33. Structured Probabilistic Inference

  34. Next Class • Next part of the course: machine learning • We’ll start talking about how to learn model parameters (like probabilities) from data • One of the most heavily used technologies in all of AI

  35. Extra Slides

  36. Examples from Ladefoged: pad, bad, spat

  37. Simple Periodic Sound Waves • Y axis: Amplitude = amount of air pressure at that point in time • Zero is normal air pressure, negative is rarefaction • X axis: time. Frequency = number of cycles per second. • Frequency = 1/Period • 20 cycles in .02 seconds = 1000 cycles/second = 1000 Hz

  38. Deriving Schwa • Reminder of basic facts about sound waves • f = c/λ • c = speed of sound (approx. 35,000 cm/sec) • A sound with λ = 10 meters: f = 35 Hz (35,000/1,000) • A sound with λ = 2 centimeters: f = 17,500 Hz (35,000/2)

  39. From Sundberg

  40. Computing the 3 Formants of Schwa • Let the length of the tube be L • F1 = c/λ1 = c/(4L) = 35,000/(4 × 17.5) = 500 Hz • F2 = c/λ2 = c/(4L/3) = 3c/(4L) = 3 × 35,000/(4 × 17.5) = 1500 Hz • F3 = c/λ3 = c/(4L/5) = 5c/(4L) = 5 × 35,000/(4 × 17.5) = 2500 Hz • So we expect a neutral vowel to have 3 resonances at 500, 1500, and 2500 Hz • These vowel resonances are called formants
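
A quick check of the arithmetic above (the odd quarter-wavelength resonances of a tube of length L closed at one end), using the same c = 35,000 cm/sec and L = 17.5 cm as the slides:

```python
c = 35_000.0   # speed of sound in cm/sec (approximation from slide 38)
L = 17.5       # vocal tract length in cm

# A tube closed at one end resonates at wavelengths 4L, 4L/3, 4L/5, ...
formants = [(2 * n - 1) * c / (4 * L) for n in (1, 2, 3)]
print(formants)   # -> [500.0, 1500.0, 2500.0]
```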

  41. HMMs for Continuous Observations? • Before: discrete, finite set of observations • Now: spectral feature vectors are real-valued! • Solution 1: discretization • Solution 2: continuous emissions models • Gaussians • Multivariate Gaussians • Mixtures of Multivariate Gaussians • A state is progressively: • Context independent subphone (~3 per phone) • Context dependent phone (=triphones) • State-tying of CD phone
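
A hedged sketch of "Solution 2": score a real-valued feature vector under a mixture of Gaussians (here with diagonal covariances) for one state. The weights, means, and variances are random placeholders, not trained parameters.

```python
import numpy as np

def gmm_emission_prob(e, weights, means, variances):
    """P(e | state) under a diagonal-covariance Gaussian mixture.

    e: feature vector (d,); weights: (k,); means, variances: (k, d).
    """
    diff = e - means                                              # (k, d)
    log_norm = -0.5 * np.sum(np.log(2 * np.pi * variances), axis=1)
    log_comp = log_norm - 0.5 * np.sum(diff ** 2 / variances, axis=1)
    return float(np.sum(weights * np.exp(log_comp)))

# Toy example: a 2-component mixture over 39-dimensional feature vectors
rng = np.random.default_rng(0)
weights = np.array([0.6, 0.4])
means = rng.normal(size=(2, 39))
variances = np.ones((2, 39))
print(gmm_emission_prob(rng.normal(size=39), weights, means, variances))
```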

  42. Viterbi Decoding
