
Applications of Dirichlet Process Mixtures to Speaker Adaptation

Amir Harati and Joseph Picone, Institute for Signal and Information Processing, Temple University; Marc Sobel, Department of Statistics, Temple University. www.isip.piconepress.com


Presentation Transcript


1. Applications of Dirichlet Process Mixtures to Speaker Adaptation. Amir Harati and Joseph Picone (Institute for Signal and Information Processing, Temple University), Marc Sobel (Department of Statistics, Temple University). www.isip.piconepress.com

Introduction
• A major challenge in machine learning is generalization: can systems successfully recognize previously unseen data?
• Controlling complexity is key to good generalization.
• Data-driven modeling must be capable of preserving important modalities in the data.
• The complexity of the model should be adapted to the available data.

Application to Speaker Adaptation
• Adapting speaker-independent models requires balancing complexity (e.g., parameter counts) against the amount of adaptation data.
• Classical solutions to speaker adaptation include Maximum Likelihood Linear Regression (MLLR) and regression trees.
• Our goal is to replace the regression tree with a DPM and to achieve better performance at a comparable or reduced level of complexity.

Pilot Experiment: Monophones
• Experiments use the Resource Management (RM) corpus: 12 different speakers with 600 training utterances.
• Models use a single Gaussian mixture.
• Word error rate (WER) can be reduced by more than 10%.
• The DPM preserves 6 clusters in the data, while the regression tree finds only 2 clusters.

Observations
• 10% reduction in WER over MLLR.
• DPMs produce meaningful interpretations of the acoustic distances between data (e.g., broad phonetic classes).
• Decreases in performance may be related to the tree construction approach.

Extensions to This Work
• Generalize the assignment of a distribution to one cluster using soft-tying.
• Expand statistical models to multiple mixtures.
• The nonparametric Bayesian framework provides two important features: the number of clusters is not known a priori and can grow as new data is obtained, and it generalizes parameter sharing and model (and state) tying.

Figure 1. Model Complexity as a Function of Available Data: (a) 20, (b) 200, (c) 2000 data points.
Figure 3. Regression Tree vs. AVDP.

Dirichlet Processes
• A Dirichlet process $G \sim \mathrm{DP}(\alpha, G_0)$ is a random probability measure over $\Theta$ such that for any finite measurable partition $(A_1, \ldots, A_K)$ of $\Theta$: $(G(A_1), \ldots, G(A_K)) \sim \mathrm{Dir}(\alpha G_0(A_1), \ldots, \alpha G_0(A_K))$.
• $G_0$ is the base distribution and acts like the mean of the DP; $\alpha$ is the concentration parameter and is proportional to the inverse of the variance.
• Example: the Chinese restaurant process (a minimal sampling sketch follows this section).
• Dirichlet Process Mixture (DPM) models place a prior on the number of clusters.
• Hierarchical Dirichlet Processes (HDP) extend this by allowing parameters (atoms) to be shared across groups.
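The Chinese restaurant process can be simulated directly. The short Python sketch below is not from the poster; it is a minimal illustration (the function name `sample_crp` and parameters such as `alpha` and `n_customers` are invented here) of how the number of occupied tables, i.e., clusters, grows with the amount of data, which is the behavior summarized in Figure 1.

```python
import random
from collections import Counter

def sample_crp(n_customers, alpha, seed=0):
    """Sample table assignments from a Chinese restaurant process.

    Each new customer joins an existing table with probability proportional
    to its current occupancy, or opens a new table with probability
    proportional to the concentration parameter alpha.
    """
    rng = random.Random(seed)
    assignments = []          # table index chosen by each customer
    counts = Counter()        # occupancy of each table
    for _ in range(n_customers):
        tables = list(counts.keys())
        weights = [counts[t] for t in tables] + [alpha]   # existing tables + a new one
        choice = rng.choices(tables + [len(tables)], weights=weights)[0]
        assignments.append(choice)
        counts[choice] += 1
    return assignments, counts

if __name__ == "__main__":
    # The expected number of occupied tables grows roughly as alpha * log(n),
    # so model complexity adapts to the amount of available data.
    for n in (20, 200, 2000):
        _, counts = sample_crp(n, alpha=1.0)
        print(f"{n:5d} customers -> {len(counts)} clusters")
```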
Training Algorithm
1. Train a speaker-independent (SI) model; collect the mixture components and their frequencies.
2. Generate samples for each component and cluster them using a DPM model.
3. Construct a tree structure over the result using a bottom-up approach; merge leaf nodes based on a Euclidean distance.
4. Assign clusters to each component using a majority-vote scheme.
5. Compute the transformation matrix for each cluster using a maximum likelihood approach.
(Simplified sketches of the clustering and transform-estimation steps appear at the end of this transcript.)

Inference Algorithms
• Three different variational algorithms are compared:
• Accelerated variational Dirichlet process mixture (AVDP).
• Collapsed variational stick-breaking (CSB).
• Collapsed Dirichlet priors (CDP).

Pilot Experiment: Triphones
• AVDP is better for moderate amounts of data, while CDP and CSB are better for larger amounts of data.

Future Work
• Use HDP-HMM to discover new acoustic units.
• Use HDP-HMM to relax the standard HMM left-to-right topology.
• Integrate HDP-HMM into the training loop so that training and testing operate under matched conditions.

Key References
• E. Sudderth, "Graphical Models for Visual Object Recognition and Tracking," Ph.D. dissertation, MIT, May 2006.
• J. Paisley, "Machine Learning with Dirichlet and Beta Process Priors: Theory and Applications," Ph.D. dissertation, Duke University, May 2010.
• P. Liang, et al., "Probabilistic Grammars and Hierarchical Dirichlet Processes," The Handbook of Applied Bayesian Analysis, Oxford University Press, 2010.

Figure 2. Mapping Speaker Independent Models to Speaker Dependent Models.
Figure 4. The Number of Discovered Clusters.
Figure 5. Regression Tree vs. Several DPM Inference Algorithms.
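The clustering stage of the training algorithm (steps 2 through 4) can be prototyped with standard tools. The sketch below is not the authors' implementation: it uses scikit-learn's `BayesianGaussianMixture` with a Dirichlet-process prior as a stand-in for the AVDP/CSB/CDP variational inference described on the poster, and for brevity it clusters the speaker-independent component means directly rather than samples generated from each component. All variable names (`si_means`, `max_clusters`, etc.) are illustrative.

```python
import numpy as np
from sklearn.mixture import BayesianGaussianMixture

def cluster_si_components(si_means, max_clusters=20, alpha=1.0, seed=0):
    """Group speaker-independent Gaussian components into regression classes.

    si_means : (n_components, dim) array of SI mixture-component means.
    Returns one cluster label per SI component.

    A truncated Dirichlet-process mixture is fit to the component means;
    components assigned to the same cluster will later share one
    MLLR-style transform.
    """
    dpm = BayesianGaussianMixture(
        n_components=max_clusters,                        # truncation level
        weight_concentration_prior_type="dirichlet_process",
        weight_concentration_prior=alpha,
        covariance_type="full",
        random_state=seed,
    )
    return dpm.fit_predict(si_means)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Toy stand-in for SI component means: three broad "phonetic classes".
    si_means = np.vstack([
        rng.normal(loc=c, scale=0.3, size=(30, 4)) for c in (-2.0, 0.0, 2.0)
    ])
    labels = cluster_si_components(si_means)
    print("discovered clusters:", len(np.unique(labels)))
```

Components that land in the same cluster would then share one transform; the bottom-up merge of leaf nodes (step 3) and the majority vote over per-component samples (step 4) can be layered on top of these labels.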

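Step 5 computes a transformation matrix for each cluster by maximum likelihood. A full MLLR estimate accumulates Gaussian occupation statistics from the adaptation data; the sketch below is a deliberately simplified stand-in (it assumes identity covariances and one observation mean per component) that fits an affine transform mapping the speaker-independent means of a cluster toward the adaptation-data means by weighted least squares. All names (`si_means`, `adapt_means`, `occupancies`) are illustrative.

```python
import numpy as np

def estimate_affine_transform(si_means, adapt_means, occupancies):
    """Fit mu_adapted ~= A @ mu_si + b for one regression cluster.

    si_means    : (n, dim) SI means of the components in the cluster.
    adapt_means : (n, dim) per-component means of the adaptation frames.
    occupancies : (n,) frame counts used as weights.

    With identity covariances this weighted least-squares solution matches
    the maximum-likelihood mean transform; real MLLR additionally weights
    the statistics by the component covariances.
    """
    n, dim = si_means.shape
    ext = np.hstack([np.ones((n, 1)), si_means])      # extended means [1, mu]
    w = np.sqrt(np.asarray(occupancies, dtype=float))[:, None]
    # Solve the weighted system ext @ W ~= adapt_means for W = [b; A^T].
    W, *_ = np.linalg.lstsq(w * ext, w * adapt_means, rcond=None)
    b, A = W[0], W[1:].T
    return A, b

def apply_transform(si_means, A, b):
    """Adapt every SI mean in the cluster with the shared transform."""
    return si_means @ A.T + b

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    mu_si = rng.normal(size=(10, 4))
    true_A, true_b = np.eye(4) * 1.1, np.full(4, 0.5)
    mu_sd = mu_si @ true_A.T + true_b + 0.01 * rng.normal(size=(10, 4))
    A, b = estimate_affine_transform(mu_si, mu_sd, occupancies=np.ones(10))
    print(np.allclose(apply_transform(mu_si, A, b), mu_sd, atol=0.05))
```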