
GPU-Accelerated HMM for Speech Recognition

Presentation Transcript


  1. HUCAA 2014 GPU-Accelerated HMM for Speech Recognition Leiming Yu, Yash Ukidave and David Kaeli, ECE, Northeastern University

  2. Outline • Background & Motivation • HMM • GPGPU • Results • Future Work

  3. Background • Translate Speech to Text • Speaker Dependent / Speaker Independent • Applications: * Natural Language Processing * Home Automation * In-car Voice Control * Speaker Verification * Automated Banking * Personal Intelligent Assistants (Apple Siri, Samsung S Voice) * etc. [http://www.kecl.ntt.co.jp]

  4. DTW Dynamic Time Warping: a template-based approach to measuring similarity between two temporal sequences which may vary in time or speed. [opticalengineering.spiedigitallibrary.org]

  5. DTW Dynamic Time Warping DTW Pros: 1) Handles timing variation 2) Recognizes speech at reasonable cost DTW Cons: 1) Template selection 2) Endpoint detection (VAD, acoustic noise) 3) Words with weak fricatives, close to the acoustic background
  For i := 1 to n
      For j := 1 to m
          cost := D(s[i], t[j])
          DTW[i, j] := cost + minimum(DTW[i-1, j], DTW[i, j-1], DTW[i-1, j-1])
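A minimal compilable version of the recurrence above, as C++ host code. The absolute-difference local cost and the function name dtw_distance are illustrative assumptions, not taken from the presentation.

    #include <algorithm>
    #include <cmath>
    #include <cstdio>
    #include <vector>

    // Classic O(n*m) DTW with a full cost matrix; row 0 / column 0 act as boundary values.
    float dtw_distance(const std::vector<float>& s, const std::vector<float>& t) {
        const size_t n = s.size(), m = t.size();
        const float INF = 1e30f;
        std::vector<float> D((n + 1) * (m + 1), INF);
        D[0] = 0.0f;
        for (size_t i = 1; i <= n; ++i) {
            for (size_t j = 1; j <= m; ++j) {
                float cost = std::fabs(s[i - 1] - t[j - 1]);             // local distance D(s[i], t[j])
                float best = std::min({D[(i - 1) * (m + 1) + j],         // from DTW[i-1, j]
                                       D[i * (m + 1) + (j - 1)],         // from DTW[i, j-1]
                                       D[(i - 1) * (m + 1) + (j - 1)]}); // from DTW[i-1, j-1]
                D[i * (m + 1) + j] = cost + best;
            }
        }
        return D[n * (m + 1) + m];
    }

    int main() {
        std::vector<float> s = {1, 2, 3, 4, 3, 2};
        std::vector<float> t = {1, 1, 2, 3, 4, 2};
        std::printf("DTW distance: %f\n", dtw_distance(s, t));
        return 0;
    }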

  6. Neural Networks Algorithms that mimic the brain. Simplified interpretation: * takes a set of input features * passes them through a set of hidden layers * produces the posterior probabilities as the output

  7. Neural Networks Multi-class example: Pedestrian / Car / Bike / Parking Meter. Notation: a_j^(l), the "activation" of unit j in layer l; Theta^(l), the matrix of weights controlling the function mapping from layer l to layer l+1. [Machine Learning, Coursera]

  8. Neural Networks Equation Example
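The equation this slide showed is not preserved in the transcript. Below is a hedged sketch of the layer-to-layer mapping described by the notation on slide 7, a^(l+1) = g(Theta^(l) * a^(l)); the sigmoid activation and function names are assumptions.

    #include <cmath>
    #include <vector>

    static float sigmoid(float z) { return 1.0f / (1.0f + std::exp(-z)); }

    // One layer of the forward pass: activations of layer l -> activations of layer l+1.
    // Theta[j] holds the weights of unit j, with Theta[j][0] acting on the bias unit a_0 = 1.
    std::vector<float> forward_layer(const std::vector<std::vector<float>>& Theta,
                                     const std::vector<float>& a) {
        std::vector<float> out(Theta.size());
        for (size_t j = 0; j < Theta.size(); ++j) {
            float z = Theta[j][0];                   // bias term
            for (size_t k = 0; k < a.size(); ++k)
                z += Theta[j][k + 1] * a[k];
            out[j] = sigmoid(z);                     // "activation" of unit j in layer l+1
        }
        return out;
    }

Chaining such layers and normalizing the final activations yields the posterior probabilities mentioned on slide 6.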

  9. Neural Networks Example Hint: * effective at recognizing short-time units such as individual phones and isolated words * not ideal for continuous recognition tasks, largely due to the poor ability to model temporal dependencies.

  10. Hidden Markov Model In a Hidden Markov Model, * the states are hidden * the outputs, which depend on the states, are visible. x: states y: possible observations a: state transition probabilities b: output probabilities [wikipedia]
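A minimal container for the notation above; the struct and field names are illustrative, not from the presentation.

    #include <vector>

    // Discrete-observation HMM with N hidden states and M observation symbols.
    struct DiscreteHMM {
        int N;                  // number of hidden states x
        int M;                  // number of observation symbols y
        std::vector<float> pi;  // pi[i]     : initial probability of state i
        std::vector<float> A;   // A[i*N + j]: transition probability a_ij = P(x_t = j | x_{t-1} = i)
        std::vector<float> B;   // B[i*M + k]: output probability b_i(k) = P(y_t = k | x_t = i)
    };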

  11. Hidden Markov Model The temporal transition of the hidden states fits well with the nature of phoneme transitions. Hint: * Handles the temporal variability of speech well * Gaussian mixture models (GMMs), controlled by the hidden variables, determine how well an HMM can represent the acoustic input. * Can be hybridized with NNs to leverage the strengths of each modeling technique

  12. Motivation • Parallel Architecture: multi-core CPU to many-core GPU (graphics + general purpose) • Massive Parallelism in Speech Recognition Systems: Neural Networks, HMMs, etc., are both computation and memory intensive • GPGPU Evolvement: * Dynamic Parallelism * Concurrent Kernel Execution * Hyper-Q * Device Partitioning * Virtual Memory Addressing * GPU-GPU Data Transfer, etc. • Previous works • Our goal is to use modern GPU features to accelerate Speech Recognition

  13. Outline • Background & Motivation • HMM • GPGPU • Results • Future Work

  14. Hidden Markov Model Markov chains and processes are named after Andrey Andreyevich Markov (1856-1922), a Russian mathematician whose doctoral advisor was Pafnuty Chebyshev. In 1966, Leonard Baum described the underlying mathematical theory. In 1989, Lawrence Rabiner wrote the paper with the most comprehensive description of it.

  15. Hidden Markov Model HMM Stages * causal transition probabilities between states * each observation depends only on the current state, not on its predecessors

  16. Hidden Markov Model • Forward • Backward • Expectation-Maximization

  17. HMM-Forward
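The forward-algorithm equations on this slide are not preserved in the transcript. What follows is a hedged CUDA sketch of the standard recursion alpha_t(j) = [ sum_i alpha_{t-1}(i) * a_ij ] * b_j(o_t), parallelized so that thread j computes alpha_t(j) at each time step; kernel and buffer names are illustrative, not the authors' code.

    __global__ void forward_step(const float* A,          // N x N transition matrix, row-major
                                 const float* B,          // N x M emission matrix
                                 const float* alpha_prev, // alpha_{t-1}, length N
                                 float*       alpha_cur,  // alpha_t, length N (output)
                                 int N, int M, int obs_t) // obs_t: observation symbol at time t
    {
        int j = blockIdx.x * blockDim.x + threadIdx.x;
        if (j >= N) return;
        float sum = 0.0f;
        for (int i = 0; i < N; ++i)
            sum += alpha_prev[i] * A[i * N + j];   // sum_i alpha_{t-1}(i) * a_ij
        alpha_cur[j] = sum * B[j * M + obs_t];     // * b_j(o_t)
    }

    // Host side: set alpha_1(j) = pi_j * b_j(o_1), then launch one forward_step per
    // time step, swapping the alpha_prev / alpha_cur buffers between steps.

In practice the alphas are rescaled (or kept in log space) at every step to avoid floating-point underflow on long observation sequences.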

  18. Hidden Markov Model • Forward • Backward • Expectation-Maximization

  19. HMM-Backward (trellis diagram of states i and j across times t-1, t, t+1, t+2)

  20. HMM-EM Variable Definitions: * Initial Probability * Transition Prob. * Observation Prob. * Forward Variable * Backward Variable Other Variables During Estimation: * epsilon, the estimated state transition probability matrix * gamma, the estimated probability of being in a particular state at time t * Multivariate Normal Probability Density Function Update the observation probabilities from Gaussian Mixture Models

  21. HMM-EM
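The update equations on this slide are not in the transcript. Below is a hedged host-side sketch of the two per-frame quantities defined on slide 20, gamma_t(i) = alpha_t(i) * beta_t(i) / Z and xi_t(i,j) = alpha_t(i) * a_ij * b_j(o_{t+1}) * beta_{t+1}(j) / Z_t, from which the transition and observation probabilities are re-estimated. Function and array names are illustrative, and alpha/beta are assumed to be already computed and consistently scaled.

    #include <vector>

    void estimate_gamma_xi(const std::vector<float>& alpha,  // T x N forward variables
                           const std::vector<float>& beta,   // T x N backward variables
                           const std::vector<float>& A,      // N x N transition matrix
                           const std::vector<float>& B,      // N x M observation matrix
                           const std::vector<int>&   obs,    // observation sequence, length T
                           int N, int M, int T,
                           std::vector<float>& gamma,        // T x N (output)
                           std::vector<float>& xi)           // (T-1) x N x N (output)
    {
        gamma.assign((size_t)T * N, 0.0f);
        xi.assign((size_t)(T - 1) * N * N, 0.0f);
        for (int t = 0; t < T - 1; ++t) {
            float z = 0.0f;
            for (int i = 0; i < N; ++i)
                for (int j = 0; j < N; ++j) {
                    float v = alpha[t * N + i] * A[i * N + j] *
                              B[j * M + obs[t + 1]] * beta[(t + 1) * N + j];
                    xi[((size_t)t * N + i) * N + j] = v;
                    z += v;
                }
            for (int i = 0; i < N; ++i)
                for (int j = 0; j < N; ++j) {
                    float v = xi[((size_t)t * N + i) * N + j] / z;   // normalize per frame
                    xi[((size_t)t * N + i) * N + j] = v;
                    gamma[t * N + i] += v;                           // gamma_t(i) = sum_j xi_t(i,j)
                }
        }
        // Last frame: gamma_{T-1}(i) = alpha_{T-1}(i) * beta_{T-1}(i) / Z
        float zT = 0.0f;
        for (int i = 0; i < N; ++i) zT += alpha[(T - 1) * N + i] * beta[(T - 1) * N + i];
        for (int i = 0; i < N; ++i)
            gamma[(T - 1) * N + i] = alpha[(T - 1) * N + i] * beta[(T - 1) * N + i] / zT;
    }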

  22. Outline • Background & Motivation • HMM • GPGPU • Results • Future Work

  23. GPGPU Programming Model
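The slide's figure is not preserved; here is a hedged minimal CUDA example of the grid/block/thread hierarchy such a programming-model diagram typically shows. The kernel name and buffer are illustrative.

    #include <cuda_runtime.h>

    __global__ void scale(float* x, float s, int n) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;   // global thread index
        if (i < n) x[i] *= s;                            // one element per thread
    }

    int main() {
        const int n = 1 << 20;
        float* d_x = nullptr;
        cudaMalloc(&d_x, n * sizeof(float));

        int threads = 256;                               // threads per block
        int blocks  = (n + threads - 1) / threads;       // blocks per grid
        scale<<<blocks, threads>>>(d_x, 2.0f, n);        // launch the grid
        cudaDeviceSynchronize();

        cudaFree(d_x);
        return 0;
    }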

  24. GPGPU GPU Hierarchical Memory System • Visibility • Performance Penalty [http://www.biomedcentral.com]

  25. GPGPU • Visibility • Performance Penalty [www.math-cs.gordon.edu]
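A hedged illustration of the visibility levels these two memory-hierarchy slides refer to: registers are per-thread, __shared__ memory is per-block, and global memory is visible to the whole grid but carries the largest performance penalty. The block-wise sum kernel is illustrative, not code from the presentation.

    // Assumes a launch with 256 threads per block (a power of two) so the static
    // shared array matches blockDim.x.
    __global__ void block_sum(const float* in, float* out, int n) {
        __shared__ float tile[256];                      // per-block shared memory
        int i   = blockIdx.x * blockDim.x + threadIdx.x;
        float v = (i < n) ? in[i] : 0.0f;                // per-thread register
        tile[threadIdx.x] = v;
        __syncthreads();

        // Tree reduction within the block, entirely in fast on-chip shared memory.
        for (int stride = blockDim.x / 2; stride > 0; stride >>= 1) {
            if (threadIdx.x < stride)
                tile[threadIdx.x] += tile[threadIdx.x + stride];
            __syncthreads();
        }
        if (threadIdx.x == 0)
            out[blockIdx.x] = tile[0];                   // one global-memory write per block
    }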

  26. GPGPU GPU-powered Eco-System 1) Programming Models * CUDA * OpenCL * OpenACC, etc. 2) High Performance Libraries * cuBLAS * Thrust * MAGMA (CUDA/OpenCL/Intel Xeon Phi) * Armadillo (C++ Linear Algebra Library), drop-in libraries, etc. 3) Tuning/Profiling Tools * Nvidia: nvprof / nvvp * AMD: CodeXL 4) Consortium Standards * Heterogeneous System Architecture (HSA) Foundation
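Since the slide lists cuBLAS among the high-performance libraries, here is a hedged sketch of offloading a dense matrix multiply to a single cuBLAS call; the matrix sizes, pointer names, and the wrapper function are illustrative.

    #include <cublas_v2.h>
    #include <cuda_runtime.h>

    // C (m x n) = A (m x k) * B (k x n), all three already resident on the device.
    // cuBLAS uses column-major storage, hence the leading dimensions m, k, m.
    void gemm_on_gpu(const float* d_A, const float* d_B, float* d_C,
                     int m, int n, int k) {
        cublasHandle_t handle;
        cublasCreate(&handle);

        const float alpha = 1.0f, beta = 0.0f;
        cublasSgemm(handle, CUBLAS_OP_N, CUBLAS_OP_N,
                    m, n, k,
                    &alpha, d_A, m,
                            d_B, k,
                    &beta,  d_C, m);

        cublasDestroy(handle);
    }

Link with -lcublas; in a recognizer, calls like this can batch per-frame likelihood computations instead of hand-written loops.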

  27. Outline • Background & Motivation • HMM • GPGPU • Results • Future Work

  28. Results Platform Specs

  29. Results Mitigate Data Transfer Latency Pinned Memory Size current process limit: ulimit -l (in KB) hardware limit: ulimit -H -l increase the limit: ulimit -S -l 16384
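A hedged sketch of the pinned-memory usage these ulimit settings support: page-locked host buffers enable asynchronous DMA transfers that can overlap with kernel execution. Buffer names and the wrapper function are illustrative.

    #include <cuda_runtime.h>

    void transfer_with_pinned_memory(size_t bytes) {
        float *h_buf = nullptr, *d_buf = nullptr;

        cudaMallocHost(&h_buf, bytes);        // pinned (page-locked) host allocation
        cudaMalloc(&d_buf, bytes);

        cudaStream_t stream;
        cudaStreamCreate(&stream);

        // cudaMemcpyAsync is only truly asynchronous when the host buffer is pinned.
        cudaMemcpyAsync(d_buf, h_buf, bytes, cudaMemcpyHostToDevice, stream);
        // ... enqueue kernels on the same stream so they overlap with later copies ...
        cudaStreamSynchronize(stream);

        cudaStreamDestroy(stream);
        cudaFree(d_buf);
        cudaFreeHost(h_buf);                  // pinned allocations count toward the ulimit -l limit
    }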

  30. Results

  31. Results A Practice to Efficiently Utilize the Memory System

  32. Results

  33. Results Hyper-Q Feature

  34. Results Running Multiple Word Recognition Tasks
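A hedged sketch of how multiple word-recognition tasks can be issued concurrently: each task gets its own CUDA stream, and on Kepler-class GPUs Hyper-Q provides independent hardware work queues so the streams are not serialized. Kernel, function, and parameter names are illustrative, not the authors' implementation.

    #include <cuda_runtime.h>

    // Placeholder per-task scoring kernel (a real one would run the HMM forward pass).
    __global__ void score_word(const float* features, float* likelihood, int n_frames) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n_frames) likelihood[i] = features[i];
    }

    // d_features[t] / d_scores[t]: device buffers belonging to task t.
    void launch_tasks(int n_tasks, float** d_features, float** d_scores, int n_frames) {
        cudaStream_t* streams = new cudaStream_t[n_tasks];
        for (int t = 0; t < n_tasks; ++t)
            cudaStreamCreate(&streams[t]);

        // One stream per task; Hyper-Q keeps the independent streams from being
        // funneled through a single hardware work queue.
        for (int t = 0; t < n_tasks; ++t)
            score_word<<<32, 256, 0, streams[t]>>>(d_features[t], d_scores[t], n_frames);

        for (int t = 0; t < n_tasks; ++t) {
            cudaStreamSynchronize(streams[t]);
            cudaStreamDestroy(streams[t]);
        }
        delete[] streams;
    }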

  35. Results

  36. Outline • Background & Motivation • HMM • GPGPU • Results • Future Work

  37. Future Work • Integrate with Parallel Feature Extraction • Power Efficiency Implementation and Analysis • Embedded System Development, Jetson TK1, etc. • Improve generality: language models (LMs) • Improve robustness: front-end noise cancellation • Go with the trend!

  38. Questions?
