
Bootstrap TDNN for Classification of Voiced Stop Consonants (B,D,G)




  1. Bootstrap TDNN for Classification of Voiced Stop Consonants (B,D,G) CAIP, Rutgers University, Oct. 13, 2006 Jun Hou, Lawrence Rabiner and Sorin Dusan

  2. Outline • Review TDNN basics • Bootstrap TDNN using categories • Model lattice • Experiments • Discussion and future work

  3. ASAT Paradigm [diagram: the five numbered stages (1–5) of the ASAT paradigm; overall system; prototypes and common platform]

  4. Previous Research • Frame-based method • Used an MLP to detect the 14 Sound Pattern of English features for the 61-phoneme TIMIT alphabet using a single frame of MFCCs • Major problem • Frames capture static properties of speech • Need dynamic information when detecting dynamic features and sounds • Need segment-based methods rather than frame-based methods

  5. Phoneme Hierarchy • Bottom-up approach [hierarchy diagram: Phonemes → Vowels (High/Mid/Low × Front/Mid/Back: IY IH EH AE, ER AX, AA AO UW UH OW), Diphthongs (AW AY EY OY), Semivowels (W L R Y), Consonants → Whisper (H), Nasals (M N NG), Affricates (J CH), Stops → Voiced (B D G) / Unvoiced (P T K), Fricatives → Voiced (V DH Z ZH) / Unvoiced (F TH S SH); classification proceeds in Steps 1–4] • 39 phonemes – 11 vowels, 4 diphthongs, 4 semivowels, 20 consonants • Classify voiced stop consonants /B/, /D/ and /G/ using segment-based methods (this is the hardest classification problem among the consonants)

  6. Voiced Stop Consonants Classification • B, D, and G tokens in the TIMIT training and test sets, without the SA sentences • *CV form of tokens • * – preceding phoneme can be any sound • C – B, D, or G • V – vowel or diphthong • 10 msec windows, 150 msec segments (15 frames) • The beginning of the vowel is at the 10th frame [segment diagram: any preceding phoneme(s) | burst | vowel; 1st frame … 10th frame … 15th frame] • Distribution (in # tokens and percentage) [table not reproduced]
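As a concrete sketch of the segmentation above, the following Python fragment (ours, not from the slides) aligns a 15-frame, 150 msec segment so that the vowel onset lands at the 10th frame; zero-padding of out-of-range frames is an assumption.

```python
FRAMES_PER_SEGMENT = 15   # 150 msec at a 10 msec frame rate
VOWEL_ONSET_POS = 9       # 0-based index of the 10th frame

def extract_segment(frames, vowel_onset):
    """Return 15 frames with the vowel onset at the 10th frame.

    `frames` is a list of per-frame feature vectors; `vowel_onset` is the
    0-based index of the first vowel frame. Frames outside the utterance
    are zero-padded (our assumption).
    """
    dim = len(frames[0])
    start = vowel_onset - VOWEL_ONSET_POS
    segment = []
    for i in range(start, start + FRAMES_PER_SEGMENT):
        if 0 <= i < len(frames):
            segment.append(frames[i])
        else:
            segment.append([0.0] * dim)  # zero-pad outside the utterance
    return segment
```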

  7. Voiced Stop Consonants Classification • Example – /b/ (speech wave, 10 * log(energy), spectrogram) • Short stop gap: “… and become …”, stop gap = 396 samples (24.74 msec) [TIMIT labels: ix n bcl b iy kcl k ah m] • Medium stop gap: “… judged by …”, stop gap = 1100 samples (68.75 msec) [TIMIT labels: jh ah dcl jh dcl d bcl b ay] • Long stop gap: “… that big goose …”, stop gap = 2260 samples (141 msec) [TIMIT labels: dh ae tcl b ih gcl g ux s]

  8. Voiced Stop Consonants Classification • Example – /d/ (speech wave, 10 * log(energy), spectrogram) • Short stop gap: “… scampered across …”, stop gap = 398 samples (24.88 msec) [TIMIT labels: pcl p axr dcl d ix kcl k r] • Medium stop gap: “A doctor …”, stop gap = 800 samples (50 msec) [TIMIT labels: ey dcl d aa kcl t axr] • Long stop gap: “Does …”, stop gap = silence = 1960 samples (122.5 msec) [TIMIT labels: h# d ah z]

  9. Voiced Stop Consonants Classification • Example – /g/ (speech wave, 10 * log(energy), spectrogram) • Short stop gap: “May I get …”, stop gap = 430 samples (26.88 msec) [TIMIT labels: h# m ey ay gcl g ih tcl t] • Medium stop gap: “… a good mechanic …”, stop gap = 960 samples (60 msec) [TIMIT labels: pau q ah gcl g uh dcl m ix] • Long stop gap: “…, give or take …”, stop gap = 3022 samples (188.88 msec) [TIMIT labels: pau g ih v axr tcl t ey kcl k]

  10. Time-Delay Neural Network • Developed by A. Waibel et al. • Effective in classifying dynamic sounds, like voiced stop consonants • Introduces delays into the input of each layer of a regular MLP • The inputs of a unit are multiplied by the un-delayed weights and the delayed weights, then summed and passed through a nonlinear function
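The unit computation described on this slide can be sketched as follows; the tanh nonlinearity and the oldest-first frame ordering are our assumptions for illustration.

```python
import math

def tdnn_unit(inputs, weights, bias, delay):
    """One TDNN unit evaluated at one time step.

    `inputs` is a list of frames (oldest first), each a feature vector;
    `weights[d]` is the weight vector applied to the input delayed by d
    frames (d = 0 is the un-delayed copy). The weighted sums over all
    delays are added, plus a bias, and passed through a nonlinearity
    (tanh here, an assumption).
    """
    total = bias
    for d in range(delay + 1):
        frame = inputs[-1 - d]  # frame delayed by d steps
        total += sum(w * x for w, x in zip(weights[d], frame))
    return math.tanh(total)
```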

  11. Jitter Problems in TDNN Training • Slow convergence • during error back propagation, the weight increments are smeared when averaging over the sets of time-delayed weights • Requires staged batch training • initially trained on a small number of tokens • after convergence, gradually add more tokens to the training set [diagram: staged training-set sizes 3, 6, 9, 24, 99, 249, 780, 3135; hand-selected and balanced up to 99 tokens, unbalanced beyond; bootstrapping begins at 99]

  12. Training Solution • A well-designed bootstrap training method

  13. Bootstrap Training Introduction • A bootstrap sample utilizes tokens from the original dataset, by sampling with replacement • The statistics are calculated on each bootstrap sample • Estimate standard error, etc. on the bootstrap replications [diagram: training dataset X = (x1, x2, …, xn) → bootstrap samples X*1, X*2, …, X*B → bootstrap replications S(X*1), S(X*2), …, S(X*B)]
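A minimal sketch of the standard (token-level) bootstrap summarized above, before the category-level variant introduced on the next slides; function names are ours.

```python
import random

def bootstrap_estimates(data, statistic, n_replications, seed=0):
    """Resample `data` with replacement B times and compute `statistic`
    S(X*b) on each bootstrap sample."""
    rng = random.Random(seed)
    reps = []
    for _ in range(n_replications):
        sample = [rng.choice(data) for _ in data]
        reps.append(statistic(sample))
    return reps

def bootstrap_std_error(reps):
    """Standard error estimated from the bootstrap replications."""
    mean = sum(reps) / len(reps)
    var = sum((r - mean) ** 2 for r in reps) / (len(reps) - 1)
    return var ** 0.5
```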

  14. Bootstrap TDNN • Because of the slow convergence of a TDNN, it is difficult (and time consuming) to repeat the training of a TDNN many times (more than 200 times for normal bootstrap training experiments) • Instead of resampling individual tokens, we build a bootstrap sample by resampling clusters of tokens • Need to find a way to partition the input space into a small number of clusters which we call categories

  15. Bootstrap TDNN – Concept • Begin with a good starting point • use a small set of hand selected tokens to train the initial TDNN • Partition the training set into subsets • use the initial TDNN to partition the training set into several subsets which we call “Categories”, and a subsequent TDNN is trained on each category • Expand each category • iteratively use the TDNN to partition the remaining (unutilized) training data into categories; merge the tokens in the category with the previous training data, and train a new TDNN based on the merged data • Merge final categories • merge the tokens in the categories (in a sequenced manner) and train the final TDNNs based on the union of the categories • Use an n-best list to combine the TDNN scores to give the final segment classification

  16. Bootstrap TDNN – Notation • Double thresholds – a high score threshold (φmax) and a low score threshold (φmin) • Good score and bad score – if, and only if, one phoneme has a score above φmax and the other two phonemes have scores below φmin, the classification score is considered a good score. All other cases are treated as bad scores. • Segmentation rule: • Category 1 — good score and correct classification • Category 2 — good score and incorrect classification • Category 3 — bad score and correct classification • Category 4 — bad score and incorrect classification
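The double-threshold segmentation rule can be written out directly; the dict-based score representation and argmax prediction are our assumptions.

```python
def categorize(scores, correct_label, phi_max, phi_min):
    """Assign a scored token to one of the four bootstrap categories.

    `scores` maps each phoneme ('B', 'D', 'G') to its TDNN output.
    "Good score": exactly one phoneme above phi_max AND the other two
    below phi_min; everything else is a "bad score".
    """
    above = [p for p, s in scores.items() if s > phi_max]
    below = [p for p, s in scores.items() if s < phi_min]
    good = len(above) == 1 and len(below) == 2
    correct = max(scores, key=scores.get) == correct_label
    if good:
        return 1 if correct else 2
    return 3 if correct else 4
```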

  17. Bootstrap TDNN – Procedure • (1) Use the balanced training set of 99 hand-selected tokens (i.e., 33 tokens each of /B/, /D/, and /G/) as the initial training set, and train a single TDNN • (2) The current set of TDNNs (one TDNN initially, four TDNNs at later stages of training) is used to score and segment the complete set of training tokens into 4 Categories (based on the double threshold procedure) • (3) Add selected and balanced (equal number of /B/, /D/ and /G/ tokens) training tokens from the above four categories to the old training data, and train a new set of four updated TDNNs. • (4) Iterate steps (2) and (3) until a stopping criterion is met • stopping criteria: there are no more new tokens to be added; the desired TDNN performance is met. • (5) Merge the tokens from the four training categories in a sequenced manner to obtain a new TDNN. Use a beam search to select the best sequence for merging the data from the four categories
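The control flow of steps (1)–(4) might be sketched as below. `train`, `segment`, and `select` are caller-supplied placeholders for the real TDNN training, double-threshold scoring, and balanced token selection; keeping a single model per round is a simplification (the full method maintains four category TDNNs).

```python
def bootstrap_train(initial_tokens, all_tokens, train, segment, select):
    """Iterate: score the unused tokens into categories, add a balanced
    selection to the training data, retrain. Stops when no new tokens
    are selected (one of the slide's stopping criteria)."""
    used = list(initial_tokens)
    models = [train(used)]
    while True:
        remaining = [t for t in all_tokens if t not in used]
        categories = segment(models, remaining)  # 4 category lists
        new = select(categories)                 # balanced new tokens
        if not new:
            break
        used += new
        models = [train(used)]  # simplified: one model, not four
    return models, used
```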

  18. Bootstrap TDNN – Illustration • Use the 99 hand selected tokens to train a TDNN, and partition the initial input space into 4 categories [diagram: 99 tokens → TDNN → categories I(1), II(1), III(1), IV(1); category sizes shown: 954, 309, 525, 688 and 863, 1081, 429, 612; a circle denotes balanced tokens in a category]

  19. Bootstrap TDNN – Illustration • Merge the 99 tokens and the balanced tokens in one category, and train a TDNN • Use the TDNN to partition the remaining space into 4 categories [diagram: 99 ∪ I(1) → TDNN → I(2), II(2), III(2), IV(2)] • Iterate until the TDNN performance is met, or there are no more new and balanced training tokens to be added to the previous training data [diagram: 99 ∪ I(1) ∪ … ∪ I(n) → TDNN → II(n+1), III(n+1), IV(n+1)]

  20. Merge Categories – Model Lattice • Use a forward lattice to merge the four different category TDNNs • after the initial categories are established, we create a bootstrap sample consisting of some or all of the categories, selected in a sequenced manner • Partial lattice – select category samples without replacement • Full lattice – select category samples with replacement

  21. Partial Lattice [lattice diagram: Steps 1–4 over Categories 1–4] • Starting point: 4 TDNNs, one trained on each of the 4 categories • At each step • select a category without replacement of the previous categories • merge the data in this category with the data from the previous step • build a TDNN on the union of the data • Iterate until all the categories are merged together or the TDNN performance is met • Use a beam search to select the best path(s) for merging categories to obtain the best set of TDNNs
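A beam search over merge orders without replacement, as described above, could look like this; `evaluate` stands in for training a TDNN on the merged data and measuring its performance (higher is better), and all names are ours.

```python
def partial_lattice(categories, evaluate, beam_width=1):
    """Beam search over orders of merging categories without replacement.

    Returns the best (merge sequence, merged data) found once every
    category has been merged in.
    """
    beam = [((), [])]  # (sequence of category indices, merged data)
    best = None
    while True:
        candidates = []
        for seq, data in beam:
            for i, cat in enumerate(categories):
                if i in seq:
                    continue  # without replacement
                candidates.append((seq + (i,), data + cat))
        if not candidates:
            break  # all categories merged on every beam path
        candidates.sort(key=lambda c: evaluate(c[1]), reverse=True)
        beam = candidates[:beam_width]
        best = beam[0]
    return best
```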

  22. Partial Lattice • Cross node beam search – compare TDNNs that are trained on the same categories but in different sequences [diagram: Cat (I) → Net 1, Cat (II) → Net 2; Cat (I) ∪ Cat (II) → Net 12 and Net 21 → Net(1,2)] • If beam width = 1, select the best net between net12 and net21 • Beam search comparison criterion • performance on the complete training set; or • weighted sum of performances on the 99 hand selected tokens, the current (partial) training set, and the complete training set.

  23. Full Lattice [lattice diagram: Steps 0 through n over Categories 1–4, with replacement] • Select categories with replacement • Regular beam search

  24. Experiments • Training set and test set • TIMIT training set and test set without the SA sentences • *CV form tokens, where C denotes /B/, /D/ or /G/, V denotes any vowel or diphthong and * denotes any previous sound • /B/ - 1567; /D/ - 1460; /G/ - 658. 3685 tokens in training set • /B/ - 638; /D/ - 537; /G/ - 243. 1418 tokens in test set. • 13 MFCCs calculated on a 10 msec window with 5 msec window overlap • average adjacent frames resulting in 10 msec frame rate • segment length: 150 msec (15 frames, with the beginning of the succeeding vowel at the 10th frame) • TDNN • inputs: 13 MFCCs, 15 frames → 195 input nodes • 1st hidden layer: 8 nodes, delay D1 = 2; • 2nd hidden layer: 3 nodes, delay D2 = 4; • output layer: 3 nodes, one each for /B/, /D/, and /G/.
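The architecture numbers above can be sanity-checked as follows; the per-layer frame counts follow the usual Waibel-style TDNN layout and are our inference, not stated on the slide.

```python
# Dimensions from the slide.
N_MFCC, N_FRAMES = 13, 15
H1_UNITS, D1 = 8, 2
H2_UNITS, D2 = 3, 4

input_nodes = N_MFCC * N_FRAMES            # 195 input nodes, as stated

# Each hidden-1 unit sees the current frame plus D1 delayed copies
# (inferred from the standard TDNN layout, not stated on the slide).
h1_weights_per_unit = N_MFCC * (D1 + 1)
h1_frames = N_FRAMES - D1                  # valid time positions

h2_weights_per_unit = H1_UNITS * (D2 + 1)
h2_frames = h1_frames - D2
```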

  25. Baseline Results • A single TDNN trained using staged batch training of 3, 6, 9, 24, 99, 249, 780, 3685 tokens. • Train the TDNN on the 99 tokens and test on the same 99 tokens • After staged batch training • performance on the complete training set: 91.8% • performance on the test set: 82.0% • After bootstrap training + lattice decision, only 68% of the training set is needed to achieve comparable results

  26. Results of the 4 Category TDNNs • TDNN performance after stopping criteria are met [table not reproduced]

  27. Results of the 4 Category TDNNs • Number of tokens used in each category [table not reproduced]

  28. Results on Partial Lattice • Use an n-best list method: score all 4 models and choose the highest score for each of /B/, /D/ and /G/; the maximum of the 3 highest scores provides the final classification decision. • performance on the complete training set: 93.1% • performance on the complete test set: 82.1% • 35% error reduction on the complete training set, over the best model trained using data from all 4 categories • 18% error reduction on the complete training set compared with a single TDNN trained using all the 3685 tokens that achieved 91.8% on the training set and 82.0% on the test set
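The n-best combination rule can be sketched as: score all models, keep each phoneme's best score, and take the overall maximum; the dict representation is our assumption.

```python
def nbest_classify(model_scores):
    """Combine the 4 category TDNNs with the n-best rule.

    `model_scores` is a list of dicts, one per TDNN, each mapping
    'B'/'D'/'G' to a score. Take each phoneme's highest score across the
    models, then pick the phoneme with the overall maximum.
    """
    best_per_phone = {}
    for scores in model_scores:
        for phone, s in scores.items():
            if s > best_per_phone.get(phone, float('-inf')):
                best_per_phone[phone] = s
    return max(best_per_phone, key=best_per_phone.get)
```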

  29. Bootstrap - Discussion • Bootstrapping is an effective procedure to guarantee convergence in robust training of TDNNs • The problem with bootstrapping is that the TDNN needs to be trained several times, which is quite time consuming • In order to reduce the total number of training cycles, we use a beam search method to prune the path for merging different categories of data • The results showed that, although trained on a relatively small portion of all the training data (approximately 68%), the TDNN achieved better performance on the complete training set and concomitant improvement on the test set

  30. A Few Issues – Shifting • The difficulty in TDNN training lies in the shift-invariant nature of the TDNN. For the set of voiced stop consonants, the stop regions can appear at any 30 msec window during the 150 msec segment. • The previous vowel affects the BDG articulation and provides information useful for classification • We can make a TDNN converge faster (to a better solution) by appropriately shifting frames for tokens in categories (II), (III), and (IV).

  31. Frame Shifting • Train TDNN on the 99 long stop gap hand selected tokens; test on the same 99 tokens • Shift right 4 frames for tokens in categories (II), (III) and (IV) • Train the TDNN again, and test on the 99 tokens • Number of tokens in each category [table not reproduced]
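A right shift by n frames, as used in this experiment, might be implemented as below; repeating the first frame as left padding is our assumption (the slides do not say how the vacated frames are filled).

```python
def shift_right(segment, n, pad=None):
    """Shift a token's frames right by n positions within its window.

    Moves the burst region later in the fixed-length segment; the first
    `n` positions are filled with `pad` (default: a copy of the first
    frame, an assumption).
    """
    if pad is None:
        pad = segment[0]
    return [pad] * n + segment[:len(segment) - n]
```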

  32. Discussion and Future Work • Examine bootstrapping more closely, to reduce the total number of bootstrap iterations, and to improve model accuracy • Segment length affects classification accuracy – 150 msec can contain more than one short stop consonant → use DTW to map the input frames to a fixed number of frames and then use the aligned tokens to train a TDNN • Investigate TDNN for classification of other phoneme classes, e.g., voiced fricatives, diphthongs, etc. • Use frame-based methods for classification of static sounds (e.g. vowels, unvoiced fricatives); use segment-based methods to recognize dynamic sounds (e.g., voiced stop consonants, diphthongs) • Develop a bottom-up approach to build small but accurate classifiers first, then gradually classify broader classes in the phoneme hierarchy

  33. Thank you!
