
Overview


Presentation Transcript


  1. Overview • Recall • What are sound features? • Feature detection and extraction • Features in Sphinx III

  2. Recall: • The speech signal is a ‘slowly’ time-varying signal. • A language contains a number of linguistically distinct speech sounds (phonemes). • The speech signal can be represented as a 3D spectrogram of speech intensity in the different frequency bands over time (a minimal sketch of computing one follows below). • Most SR systems rely heavily on vowel recognition to achieve high performance: vowels are long in duration and spectrally well defined, and are therefore easily recognized.
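
A minimal sketch of the time/frequency/intensity view described above, using SciPy. The signal here is a synthetic placeholder tone, and the frame sizes are typical speech-analysis choices, not values fixed by the slides.

```python
import numpy as np
from scipy.signal import spectrogram

fs = 16000                                   # assumed sample rate
t = np.arange(fs) / fs
x = np.sin(2 * np.pi * 440 * t)              # placeholder signal: a 440 Hz tone

# 25 ms windows with a 10 ms hop, a common analysis frame for speech.
nperseg = int(0.025 * fs)
noverlap = nperseg - int(0.010 * fs)
f, t_frames, Sxx = spectrogram(x, fs=fs, nperseg=nperseg, noverlap=noverlap)

# Sxx[i, j] is the intensity in frequency band f[i] at time t_frames[j]:
# exactly the three axes (time, frequency, intensity) the slide describes.
print(Sxx.shape)                             # (n_frequency_bands, n_frames)
```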

  3. Speech sounds and features Examples: • Vowels (a, u, …) • Diphthongs (e.g. ay as in guy, …) • Semivowels (w, l, r, y) • Nasal Consonants (m, n) • Unvoiced Fricatives (f, s) • Voiced Fricatives (v, th, z) • Voiced Stops (b, d, g) and Unvoiced Stops (p, t, k) • Each class has its own characteristics (features)

  4. ASR Stages 1) Speech analysis: provide an appropriate spectral representation of the characteristics of the time-varying speech signal. 2) Feature detection: convert the spectral measurements to a set of features that describe the broad acoustic properties of the different phonetic units (e.g. nasality, frication, formant locations, voiced/unvoiced classification, ratios of high- and low-frequency energy). 3) Segmentation and labeling: find stable regions, then label each segmented region according to how well the features within that region match those of individual phonetic units. 4) The final output of the recognizer is the word or word sequence that best matches the input speech. (A structural sketch follows below.)
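
Purely to make the four-stage structure concrete, here is a stub pipeline. Every function body is a placeholder standing in for a real component, not an actual recognizer.

```python
def spectral_analysis(samples):
    # Stage 1: spectral representation, e.g. one 13-dim vector per 10 ms frame.
    return [[0.0] * 13 for _ in range(len(samples) // 160)]

def detect_features(spectra):
    # Stage 2: broad acoustic properties per frame (names are illustrative).
    return [{"voiced": True, "nasal": False} for _ in spectra]

def segment_and_label(features):
    # Stage 3: stable regions labelled with the best-matching phonetic units.
    mid = len(features) // 2
    return [("ah", 0, mid), ("n", mid, len(features))]

def decode(segments):
    # Stage 4: word (sequence) that best matches the labelled segments.
    return "on"

samples = [0.0] * 16000                      # one second of placeholder audio
print(decode(segment_and_label(detect_features(spectral_analysis(samples)))))
```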

  5. Feature detection (and extraction) • A speech segment contains certain characteristics: features. • Different segments of speech contain different features, specific to the kind of segment. • The goal is to classify a speech segment into one of several broad speech classes (e.g. via a binary tree: compact/diffuse, acute/grave, long/short, high/low frequency, etc.); a toy sketch follows below. • Ideally, the feature vectors for a given word are the same regardless of the way in which the word has been uttered.
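
A toy sketch of such a binary-tree classification. The measurement names and thresholds here are invented purely for illustration; a real system would derive them from the spectral measurements of stage 1.

```python
def broad_class(seg):
    """seg: dict of hypothetical per-segment measurements."""
    if not seg["voiced"]:
        return "unvoiced fricative/stop"
    if seg["nasal"]:
        return "nasal consonant"
    # acute/grave: is the energy concentrated in high or low frequencies?
    if seg["high_low_energy_ratio"] > 1.0:
        return "acute (high-frequency) sound"
    return "grave (low-frequency) sound"

print(broad_class({"voiced": True, "nasal": False,
                   "high_low_energy_ratio": 0.4}))   # grave (low-frequency) sound
```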

  6. Last week: Mel-Frequency Cepstrum Coefficients • The Fourier Transform extracts the frequency components of a signal in the time domain. • The frequency domain is filtered/sliced into 12 smaller parts, each with its own coefficient (MFCC). • MFCCs use the log-spectrum of the speech signal. The logarithmic nature of the technique is significant, since the human auditory system perceives sound on a logarithmic scale above certain frequencies. (A from-scratch sketch follows below.)
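
A from-scratch sketch of that pipeline for a single frame (window → FFT → mel filterbank → log → DCT). The filter count, FFT size, and frame length are typical choices, not values fixed by the slides.

```python
import numpy as np
from scipy.fft import dct

def hz_to_mel(f):
    return 2595.0 * np.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def mfcc_frame(frame, fs, n_filters=26, n_ceps=12, n_fft=512):
    # Power spectrum of one Hamming-windowed frame.
    spec = np.abs(np.fft.rfft(frame * np.hamming(len(frame)), n_fft)) ** 2
    # Triangular filters spaced evenly on the mel scale.
    mel_pts = np.linspace(hz_to_mel(0.0), hz_to_mel(fs / 2.0), n_filters + 2)
    bins = np.floor((n_fft + 1) * mel_to_hz(mel_pts) / fs).astype(int)
    fbank = np.zeros((n_filters, n_fft // 2 + 1))
    for i in range(1, n_filters + 1):
        l, c, r = bins[i - 1], bins[i], bins[i + 1]
        fbank[i - 1, l:c] = (np.arange(l, c) - l) / max(c - l, 1)
        fbank[i - 1, c:r] = (r - np.arange(c, r)) / max(r - c, 1)
    # Log filterbank energies mirror the ear's logarithmic response.
    log_e = np.log(fbank @ spec + 1e-10)
    # DCT decorrelates; keep coefficients 1..12 (coefficient 0 is overall energy).
    return dct(log_e, norm="ortho")[1:n_ceps + 1]

fs = 16000
frame = np.random.randn(400)          # one 25 ms frame of placeholder samples
print(mfcc_frame(frame, fs).shape)    # (12,) cepstral coefficients
```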

  7. Acoustic Modeling: Feature Extraction • MFCCs are beautiful, because they incorporate knowledge of the nature of speech sounds in the measurement of the features. • They utilize rudimentary models of human perception. • The Fourier Transform maps the time domain to the frequency domain. • The frequency domain is sliced into 12 smaller parts, each with its own MFCC. • Include absolute energy and 12 spectral measurements. • Time derivatives model spectral change. [Block diagram: Input Speech → Fourier Transform → Cepstral Analysis → Perceptual Weighting → Energy + Mel-Spaced Cepstrum; Time Derivative → Delta Energy + Delta Cepstrum; Time Derivative → Delta-Delta Energy + Delta-Delta Cepstrum]

  8. What to do with the MFCCs: • A speech recognizer can be built using the energy values (time domain) and 12 MFCCs (frequency domain), plus the first- and second-order derivatives of those coefficients. Basic MFCC front end:
     13  absolute: energy (1) and MFCCs (12)
     13  delta: first-order derivatives of the 13 absolute coefficients
     13  delta-delta: second-order derivatives of the 13 absolute coefficients
     ------------------------------------------------
     39  total
  • The derivatives are useful because they provide information about the spectral change. • This total of 39 coefficients provides information about the different features in the segment (a sketch of the stacking follows below). • The feature measurements of the segments are stored in so-called ‘feature vectors’, which are used in the next stage of speech recognition (e.g. a Hidden Markov Model).
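
A minimal sketch of stacking those 39-dimensional vectors from the 13 absolute coefficients per frame. Simple frame-to-frame differences (np.gradient) stand in for the derivatives here; real front ends typically use a regression over several neighbouring frames.

```python
import numpy as np

def add_deltas(ceps):
    """ceps: (n_frames, 13) array of energy + 12 MFCCs per frame."""
    delta = np.gradient(ceps, axis=0)         # first-order: spectral change
    delta2 = np.gradient(delta, axis=0)       # second-order
    return np.hstack([ceps, delta, delta2])   # (n_frames, 39) feature vectors

frames = np.random.randn(100, 13)             # placeholder absolute coefficients
print(add_deltas(frames).shape)               # (100, 39)
```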

  9. In Sphinx III: computation of feature vectors • feat_s2mfc2feat • feat_s2mfc2feat_block • The MFC file is read. • Initialization: defining the kind of input-to-feature conversion desired (there are some differences between Sphinx II and Sphinx III). • Feature vectors are computed for the entire segment specified (feat_s2mfc2feat and feat_s2mfc2feat_block). Within the feature vectors, Sphinx stores the streams of features as follows (a slicing sketch follows below): • CEP: C1-C12 • DCEP: D1-D12 • Energy values: C0, D0, DD0 • D2CEP: DD1-DD12
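
A small sketch of splitting one 39-dimensional frame into the four streams listed above. It assumes the frame is ordered [C0…C12, D0…D12, DD0…DD12]; the actual in-memory layout in Sphinx may differ, so treat the index arithmetic as illustrative.

```python
import numpy as np

def split_streams(v):
    """v: one 39-dim frame, assumed ordered [C0..C12, D0..D12, DD0..DD12]."""
    c, d, dd = v[0:13], v[13:26], v[26:39]
    return {
        "CEP":    c[1:13],                          # C1-C12
        "DCEP":   d[1:13],                          # D1-D12
        "ENERGY": np.array([c[0], d[0], dd[0]]),    # C0, D0, DD0
        "D2CEP":  dd[1:13],                         # DD1-DD12
    }

frame = np.arange(39.0)                             # placeholder frame
for name, stream in split_streams(frame).items():
    print(name, stream.shape)
```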

  10. [Diagram: per-segment feature detectors (voicing, round, nasal, glide, frication, burst) produce the feature values a1 … a6] • So, at this point in the speech recognition process, you have stored feature vectors for the entire speech segment you are looking at, providing the necessary information about what kinds of features are in that segment. • The feature stream can now be analyzed using a Hidden Markov Model (HMM); a training/scoring sketch follows below. [Diagram: Input speech → Feature Extraction Modules → Feature Vector; HMMs for the words “one”, “two”, …, “oh” are trained and concatenated]
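
A hedged sketch of this HMM stage using the third-party hmmlearn package: one GaussianHMM per word (“one”, “two”, “oh”, as in the slide’s diagram) is trained on 39-dimensional feature sequences, and recognition picks the best-scoring model. The data here is synthetic and the state count is an arbitrary choice, not a value from the slides.

```python
import numpy as np
from hmmlearn import hmm

def train_word_model(sequences, n_states=5):
    X = np.vstack(sequences)                 # stack all training sequences
    lengths = [len(s) for s in sequences]    # so the HMM knows the boundaries
    model = hmm.GaussianHMM(n_components=n_states, covariance_type="diag")
    model.fit(X, lengths)
    return model

# Placeholder training data: five 80-frame utterances of 39-dim features per word.
words = {w: train_word_model([np.random.randn(80, 39) for _ in range(5)])
         for w in ("one", "two", "oh")}

test = np.random.randn(80, 39)               # one utterance's feature vectors
best = max(words, key=lambda w: words[w].score(test))
print("recognized:", best)
```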
