
Performance Analysis of Advanced Front Ends on the Aurora Large Vocabulary Evaluation


Presentation Transcript


  1. Performance Analysis of Advanced Front Ends on the Aurora Large Vocabulary Evaluation
  • Author: Naveen Parihar, Inst. for Signal and Info. Processing, Dept. of Electrical and Computer Eng., Mississippi State University
  • Contact Information: Box 9571, Mississippi State University, Mississippi State, Mississippi 39762; Tel: 662-325-8335; Fax: 662-325-2298; Email: parihar@isip.msstate.edu
  • URL: http://www.isip.msstate.edu/publications/books/msstate_theses/2003/advanced_frontends

  2. INTRODUCTION: ABSTRACT
  The primary objective of this thesis was to analyze the performance of two advanced front ends, referred to as the QIO (Qualcomm, ICSI, and OGI) and MFA (Motorola, France Telecom, and Alcatel) front ends, on a speech recognition task based on the Wall Street Journal database. Though the advanced front ends are shown to achieve a significant improvement over an industry-standard baseline front end, this improvement is not operationally significant. Further, we show that the results of this evaluation were not significantly impacted by suboptimal recognition system parameter settings. Without any front end-specific tuning, the MFA front end outperforms the QIO front end by 9.6% relative. With tuning, the relative performance gap increases to 15.8%. Finally, we also show that mismatched microphone and additive noise evaluation conditions resulted in a significant degradation in performance for both front ends.

  3. INTRODUCTION: SPEECH RECOGNITION OVERVIEW
  A noisy communication channel model for speech production and perception: Message Source → Linguistic Channel → Articulatory Channel → Acoustic Channel → Features (observable: message → words → sounds → features)
  Bayesian formulation for speech recognition:
  • P(W|A) = P(A|W) P(W) / P(A)
  • Objective: minimize word error rate by maximizing P(W|A)
  • Approach: maximize P(A|W) (training)
  • P(A|W): acoustic model (hidden Markov models, Gaussian mixtures)
  • P(W): language model (finite state machines, N-grams)
  • P(A): probability of the acoustics (constant over word hypotheses, so ignored during maximization; see the worked decision rule below)
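As a worked restatement of the slide's formulation (the standard Bayes decision rule, using only the quantities listed above):

```latex
\hat{W} = \arg\max_{W} P(W \mid A)
        = \arg\max_{W} \frac{P(A \mid W)\,P(W)}{P(A)}
        = \arg\max_{W} P(A \mid W)\,P(W)
```

The last step holds because P(A) does not depend on the hypothesized word sequence W, which is why it can be ignored during maximization.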

  4. INTRODUCTION: BLOCK DIAGRAM APPROACH
  Core components:
  • Transduction
  • Feature extraction
  • Acoustic modeling (hidden Markov models)
  • Language modeling (statistical N-grams)
  • Search (Viterbi beam)
  • Knowledge sources

  5. INTRODUCTION: AURORA EVALUATION OVERVIEW
  Goals:
  • Client/server applications
  • Evaluate robustness in noisy environments
  • Propose a standard for LVCSR applications
  Setup:
  • WSJ 5K (closed task) with seven (digitally-added) noise conditions
  • Common ASR system
  • Two participants: QIO (Qualcomm, ICSI, OGI) and MFA (Motorola, France Telecom, Alcatel)

  6. INTRODUCTION: MOTIVATION
  ALV Evaluation Results:
  • The Aurora Large Vocabulary (ALV) evaluation goal was at least a 25% relative improvement over the baseline MFCC front end
  • Is the 31% relative improvement (34.5% vs. 50.3% WER) operationally significant? (arithmetic shown below)
  • The baseline was a generic LVCSR system with no front end-specific tuning
  • Would front end-specific tuning change the rankings?
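For reference, the 31% figure follows directly from the two word error rates quoted above:

```latex
\frac{50.3 - 34.5}{50.3} \approx 0.314 \quad \Rightarrow \quad \text{about a 31\% relative reduction in WER.}
```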

  7. EVALUATION PARADIGM: THE AURORA-4 DATABASE
  Acoustic Training:
  • Derived from the 5000-word WSJ0 task
  • TS1 (clean) and TS2 (multi-condition)
  • Clean plus 6 noise conditions
  • Randomly chosen SNR between 10 and 20 dB
  • 2 microphone conditions (Sennheiser and secondary)
  • 2 sample frequencies: 16 kHz and 8 kHz
  • G.712 filtering at 8 kHz and P.341 filtering at 16 kHz
  Development and Evaluation Sets:
  • Derived from the WSJ0 Evaluation and Development sets
  • 14 test sets for each
  • 7 test sets recorded on the Sennheiser microphone; 7 on a secondary microphone
  • Clean plus 6 noise conditions
  • Randomly chosen SNR between 5 and 15 dB
  • G.712 filtering at 8 kHz and P.341 filtering at 16 kHz

  8. EVALUATION PARADIGM: BASELINE LVCSR SYSTEM
  Standard context-dependent, cross-word, HMM-based system:
  • Acoustic models: state-tied, 4-mixture, cross-word triphones
  • Language model: WSJ0 5K bigram
  • Search: Viterbi one-best using lexical trees for N-gram cross-word decoding
  • Lexicon: based on CMUlex
  • Run time: 4 xRT for training and 15 xRT for decoding on an 800 MHz Pentium
  Training flow: Training Data → Monophone Modeling → CD-Triphone Modeling → State-Tying → CD-Triphone Modeling → Mixture Modeling (2, 4 mixtures)

  9. EVALUATION PARADIGM: WI007 ETSI MFCC FRONT END
  Processing chain: Input Speech → Zero-mean and Pre-emphasis → Energy → Fourier Transform Analysis → Cepstral Analysis
  • Zero-mean debiasing
  • 10 ms frame duration
  • 25 ms Hamming window
  • Absolute energy
  • 12 cepstral coefficients
  • First and second derivatives
  (A generic MFCC sketch with these parameters follows below.)
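The sketch below is a minimal, generic MFCC pipeline using the parameters listed on this slide (10 ms frame shift, 25 ms Hamming window, 12 cepstral coefficients plus absolute energy). It is not the bit-exact ETSI WI007 algorithm; the standard's debiasing filter, mel filterbank design, and quantization details differ, and the filterbank size used here (23 channels) is an assumption.

```python
# Generic MFCC sketch (NOT the exact ETSI WI007 algorithm).
import numpy as np

def mel(f):
    return 2595.0 * np.log10(1.0 + f / 700.0)

def inv_mel(m):
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def mfcc(signal, fs=8000, n_filt=23, n_ceps=12):
    # 25 ms Hamming window, 10 ms frame shift (as on the slide)
    frame_len, frame_shift = int(0.025 * fs), int(0.010 * fs)
    signal = np.asarray(signal, dtype=float)
    signal = signal - np.mean(signal)                                # crude zero-mean debiasing
    signal = np.append(signal[0], signal[1:] - 0.97 * signal[:-1])   # pre-emphasis
    n_frames = 1 + (len(signal) - frame_len) // frame_shift          # assumes len(signal) >= frame_len
    window = np.hamming(frame_len)
    nfft = 256 if fs == 8000 else 512

    # Triangular mel filterbank spanning 0 .. fs/2
    edges = inv_mel(np.linspace(mel(0.0), mel(fs / 2.0), n_filt + 2))
    bins = np.floor((nfft + 1) * edges / fs).astype(int)
    fbank = np.zeros((n_filt, nfft // 2 + 1))
    for i in range(n_filt):
        lo, ctr, hi = bins[i], bins[i + 1], bins[i + 2]
        fbank[i, lo:ctr] = (np.arange(lo, ctr) - lo) / max(ctr - lo, 1)
        fbank[i, ctr:hi] = (hi - np.arange(ctr, hi)) / max(hi - ctr, 1)

    feats = []
    for t in range(n_frames):
        frame = signal[t * frame_shift : t * frame_shift + frame_len] * window
        power = np.abs(np.fft.rfft(frame, nfft)) ** 2
        log_e = np.log(np.sum(power) + 1e-10)                        # absolute (log) energy
        log_mel = np.log(fbank @ power + 1e-10)
        j = np.arange(n_filt)
        ceps = np.array([np.sum(log_mel * np.cos(np.pi * k * (j + 0.5) / n_filt))
                         for k in range(1, n_ceps + 1)])             # DCT-II, keep c1..c12
        feats.append(np.concatenate([ceps, [log_e]]))
    return np.vstack(feats)  # first/second derivatives would be appended downstream
```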

  10. FRONT END PROPOSALS: QIO FRONT END
  Processing chain: Input Speech → Fourier Transform → Mel-scale Filter Bank → RASTA → DCT → Mean/Variance Normalization (with an MLP-based VAD in parallel)
  • 10 ms frame duration
  • 25 ms analysis window
  • 15 RASTA-like filtered cepstral coefficients
  • MLP-based VAD
  • Mean and variance normalization (see the sketch below)
  • First and second derivatives
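A minimal sketch of the mean/variance normalization step, assuming per-utterance statistics; the QIO proposal's actual implementation (for example, on-line or recursive estimates) may differ.

```python
# Per-utterance cepstral mean/variance normalization (an assumption; the QIO
# front end may compute the statistics differently).
import numpy as np

def mean_variance_normalize(features, eps=1e-8):
    """features: (n_frames, n_dims) array of cepstral features."""
    mu = features.mean(axis=0)
    sigma = features.std(axis=0)
    return (features - mu) / (sigma + eps)
```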

  11. FRONT END PROPOSALS: MFA FRONT END
  Processing chain: Input Speech → Noise Reduction (with VADNest) → Waveform Processing → Cepstral Analysis → Blind Equalization → Feature Processing (with VAD)
  • 10 ms frame duration
  • 25 ms analysis window
  • Mel-warped Wiener filter based noise reduction (see the sketch below)
  • Energy-based VADNest
  • Waveform processing to enhance SNR
  • Weighted log-energy
  • 12 cepstral coefficients
  • Blind equalization (cepstral domain)
  • VAD based on acceleration of various energy-based measures
  • First and second derivatives
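The core idea behind the noise-reduction stage is the Wiener gain rule; the sketch below shows a generic single-channel version. The MFA front end applies a mel-warped variant with its own noise estimation and smoothing, which is not reproduced here.

```python
# Generic single-channel Wiener gain (a simplified stand-in for the MFA
# mel-warped Wiener filter noise reduction).
import numpy as np

def wiener_gain(noisy_power, noise_power, gain_floor=0.01):
    """Per-bin gain for one frame, given noisy-speech and noise power spectra."""
    # Crude a-priori SNR estimate from the a-posteriori SNR (noisy / noise - 1)
    snr = np.maximum(noisy_power / (noise_power + 1e-10) - 1.0, 0.0)
    gain = snr / (1.0 + snr)                     # Wiener rule: S / (S + N)
    return np.maximum(gain, gain_floor)          # spectral floor limits musical noise

# Enhanced spectrum: wiener_gain(...) * noisy_spectrum, which then feeds cepstral analysis.
```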

  12. EXPERIMENTAL RESULTS: FRONT END-SPECIFIC TUNING
  • Pruning beams (word, phone, and state) were opened during the tuning process to eliminate search errors.
  Tuning parameters:
  • State-tying thresholds: address the sparsity of training data by sharing state distributions among phonetically similar states
  • Language model scale: controls the influence of the language model relative to the acoustic models (more relevant for WSJ)
  • Word insertion penalty: balances insertions and deletions (always a concern in noisy environments)
  (A sketch of how the last two parameters enter the decoder's path score follows below.)
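A minimal sketch of how a language model scale and word insertion penalty are typically combined with the acoustic score in a log-domain decoder; the exact form and sign conventions used by the ISIP decoder are an assumption here.

```python
# Typical log-domain combination of decoder tuning parameters (assumed form,
# not necessarily the ISIP decoder's exact equation).
def path_score(acoustic_logprob, lm_logprob, n_words,
               lm_scale, word_insertion_penalty):
    # Larger lm_scale weights the language model more heavily relative to the
    # acoustic model; the per-word insertion penalty (typically negative in the
    # log domain) discourages spurious short-word insertions.
    return acoustic_logprob + lm_scale * lm_logprob + word_insertion_penalty * n_words
```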

  13. EXPERIMENTAL RESULTS: FRONT END-SPECIFIC TUNING - QIO
  • Parameter tuning on clean data recorded on the Sennheiser microphone (corresponds to Training Set 1 and Devtest Set 1 of the Aurora-4 database)
  • 8 kHz sampling frequency
  • 7.5% relative improvement

  14. EXPERIMENTAL RESULTS: FRONT END-SPECIFIC TUNING - MFA
  • Parameter tuning on clean data recorded on the Sennheiser microphone (corresponds to Training Set 1 and Devtest Set 1 of the Aurora-4 database)
  • 8 kHz sampling frequency
  • 9.4% relative improvement
  • Ranking is still the same (14.9% vs. 12.5%)!

  15. EXPERIMENTAL RESULTS: COMPARISON OF TUNING
  • Same ranking: the relative performance gap increased from 9.6% to 15.8%
  • On TS1, the MFA front end is significantly better on all 14 test sets (MAPSSWE, p = 0.1%)
  • On TS2, the MFA front end is significantly better only on test sets 5 and 14

  16. EXPERIMENTAL RESULTS: MICROPHONE VARIATION
  [Bar chart: WER (%), 0-40, for the ETSI, MFA, and QIO front ends on Sennheiser vs. secondary microphone data]
  • Train on the Sennheiser microphone; evaluate on the secondary microphone
  • Matched conditions result in optimal performance
  • Significant degradation for all front ends under mismatched conditions
  • Both QIO and MFA provide improved robustness relative to the MFCC baseline

  17. EXPERIMENTAL RESULTS: ADDITIVE NOISE
  [Two bar charts: WER (%) for the ETSI, MFA, and QIO front ends across test sets TS2-TS7]
  • Performance degrades on noise conditions when systems are trained only on clean data
  • Both QIO and MFA deliver improved performance
  • Exposing systems to noise and microphone variations (TS2) improves performance

  18. SUMMARY AND CONCLUSIONS: WHAT HAVE WE LEARNED?
  • Both the QIO and MFA front ends achieved the ALV evaluation goal of improving performance by at least 25% relative over the ETSI baseline
  • WER is still high (~35%), while human benchmarks report error rates near 1%; the improvement in performance is therefore not operationally significant
  • Front end-specific parameter tuning did not result in a significant change in overall performance (MFA still outperforms QIO)
  • Both the QIO and MFA front ends handle convolutional and additive noise better than the ETSI baseline

  19. SUMMARY AND CONCLUSIONS: FUTURE WORK
  • The contribution of each of the advanced noise-robust algorithms to the overall improvement in performance can be calibrated in isolation
  • Recognition system parameter tuning can be performed on multi-condition training and testing data that are representative of the various noise types, microphone types, etc.
  • The improvements from the advanced noise-robust algorithms need to be verified with a recognition system that uses more state-of-the-art techniques, such as speaker normalization, speaker and channel adaptation, and discriminative training

  20. SUMMARY AND CONCLUSIONS: ACKNOWLEDGEMENTS
  • I would like to thank Dr. Joe Picone for his mentoring and guidance throughout my graduate program
  • I wish to acknowledge David Pearce of Motorola Labs, Motorola Ltd., United Kingdom, and Guenter Hirsch of Niederrhein University of Applied Sciences, Germany, for their invaluable collaboration and direction on some portions of this thesis
  • I would also like to thank Jon Hamaker for answering my queries on the ISIP recognition software, and Ram Sundaram for introducing me to the art of running a recognition experiment
  • I would like to thank Dr. Georgious Lazarou and Dr. Jeff Jonkman for being on my committee
  • Finally, I would like to thank my co-workers (former and current) at the Institute for Signal and Information Processing (ISIP) for all their help

  21. APPENDIX: BRIEF BIBLIOGRAPHY
  • N. Parihar, J. Picone, D. Pearce, and H.G. Hirsch, “Performance Analysis of the Aurora Large Vocabulary Baseline System,” submitted to ICASSP 2004, Montreal, Canada, May 2004.
  • N. Parihar and J. Picone, “An Analysis of the Aurora Large Vocabulary Evaluation,” Proceedings of Eurospeech 2003, Geneva, Switzerland, September 2003.
  • N. Parihar, J. Picone, D. Pearce, and H.G. Hirsch, “Performance Analysis of the Aurora Large Vocabulary Baseline System,” submitted to Eurospeech 2003, Geneva, Switzerland, September 2003.
  • N. Parihar and J. Picone, “DSR Front End LVCSR Evaluation - AU/384/02,” Aurora Working Group, European Telecommunications Standards Institute, December 6, 2002.
  • D. Pearce, “Overview of Evaluation Criteria for Advanced Distributed Speech Recognition,” ETSI STQ-Aurora DSR Working Group, October 2001.
  • G. Hirsch, “Experimental Framework for the Performance Evaluation of Speech Recognition Front-ends in a Large Vocabulary Task,” ETSI STQ-Aurora DSR Working Group, December 2002.
  • “ETSI ES 201 108 v1.1.2 Distributed Speech Recognition; Front-end Feature Extraction Algorithm; Compression Algorithm,” ETSI, April 2000.

  22. APPENDIX: AVAILABLE RESOURCES
  • Aurora Project Website: recognition toolkit, multi-CPU scripts, database definitions, publications, and a performance summary of the baseline MFCC front end
  • Speech Recognition Toolkits: compare front ends to standard approaches using a state-of-the-art ASR toolkit
  • ETSI DSR Website: reports and front end standards

  23. APPENDIX: PROGRAM OF STUDY

  24. APPENDIX: PUBLICATIONS
  • N. Parihar, J. Picone, D. Pearce, and H.G. Hirsch, “Performance Analysis of the Aurora Large Vocabulary Baseline System,” submitted to ICASSP 2004, Montreal, Canada, May 2004.
  • N. Parihar and J. Picone, “An Analysis of the Aurora Large Vocabulary Evaluation,” Proceedings of Eurospeech 2003, Geneva, Switzerland, September 2003.
  • F. Zheng, J. Hamaker, F. Goodman, B. George, N. Parihar, and J. Picone, “The ISIP 2001 NRL Evaluation for Recognition of Speech in Noisy Environments,” Speech In Noisy Environments (SPINE) Workshop, Orlando, Florida, USA, November 2001.
  • B. Jelinek, F. Zheng, N. Parihar, J. Hamaker, and J. Picone, “Generalized Hierarchical Search in the ISIP ASR System,” Proceedings of the Thirty-Fifth Asilomar Conference on Signals, Systems, and Computers, vol. 2, pp. 1553-1556, Pacific Grove, California, USA, November 2001.
