A New Bigram-PLSA Language Model for Speech Recognition

A New Bigram-PLSA Language Model for Speech Recognition. Mohammad Bahrani and Hossein Sameti, Department of Computer Engineering, Sharif University of Technology. EURASIP 2010. Presenter: 郝柏翰.

Presentation Transcript


  1. A New Bigram-PLSA Language Model for Speech Recognition Mohammad Bahrani and Hossein Sameti Department of Computer Engineering, Sharif University of Technology EURASIP 2010 Presenter: 郝柏翰

  2. Outline • Introduction • Review of the PLSA Model • Combining Bigram and PLSA Models • Experiments • Conclusion

  3. Review of the PLSA Model • Bag-of-words assumption • Conditional independence of words and documents given the latent topic
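In the PLSA aspect model, each occurrence of word w in document d is generated from a latent topic z, so that P(w|d) = Σ_z P(w|z)P(z|d); the bag-of-words assumption makes word order irrelevant, and w and d are conditionally independent given z. The following is a minimal EM fit of this standard aspect model (an illustrative sketch, not the paper's code; all names are ours):

```python
import numpy as np

rng = np.random.default_rng(0)

def plsa_em(counts, K, iters=50):
    """Fit the PLSA aspect model P(w|d) = sum_z P(w|z) P(z|d) with EM.

    counts: (N_docs, V) term-document count matrix.
    Returns p_w_z of shape (V, K) and p_z_d of shape (K, N_docs).
    """
    N, V = counts.shape
    # Random normalized initialization of the two conditional distributions.
    p_w_z = rng.random((V, K)); p_w_z /= p_w_z.sum(axis=0, keepdims=True)
    p_z_d = rng.random((K, N)); p_z_d /= p_z_d.sum(axis=0, keepdims=True)
    for _ in range(iters):
        # E-step: topic posterior P(z | d, w) ∝ P(w|z) P(z|d), shape (N, V, K).
        post = p_w_z[None, :, :] * p_z_d.T[:, None, :]
        post /= post.sum(axis=2, keepdims=True) + 1e-12
        # M-step: expected counts n(d, w, z), then renormalize each factor.
        nz = counts[:, :, None] * post
        p_w_z = nz.sum(axis=0)                              # (V, K)
        p_w_z /= p_w_z.sum(axis=0, keepdims=True) + 1e-12
        p_z_d = nz.sum(axis=1).T                            # (K, N)
        p_z_d /= p_z_d.sum(axis=0, keepdims=True) + 1e-12
    return p_w_z, p_z_d
```

Each EM iteration is guaranteed not to decrease the training-data likelihood; the bigram extensions on the following slides keep this same E-step/M-step structure but enlarge the conditioning.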

  4. Combining Bigram and PLSA Models • Nie et al.’s Bigram-PLSA Model • Proposed Bigram-PLSA Model: we relax the assumption of independence between the latent topics and the context words and achieve a general form of the aspect model that considers the word history in the word-document modeling.
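The two combinations differ only in how the topic probability is conditioned. Written out (our reconstruction of the slide's formulas, with z_k the latent topics, w_{i-1} the history word, and d the document):

```latex
% Nie et al.: topics depend on the document only
P(w_i \mid w_{i-1}, d) = \sum_{k=1}^{K} P(w_i \mid w_{i-1}, z_k)\, P(z_k \mid d)

% Proposed: topics also depend on the context word w_{i-1}
P(w_i \mid w_{i-1}, d) = \sum_{k=1}^{K} P(w_i \mid w_{i-1}, z_k)\, P(z_k \mid w_{i-1}, d)
```

Nie et al.'s model is recovered as the special case where P(z_k | w_{i-1}, d) does not actually depend on w_{i-1}, which is why the proposed model is the more general aspect model.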

  5. Parameter Estimation Using the EM Algorithm • E-step
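The E-step computes, for each observed triple (d, w_{i-1}, w_i), the posterior probability of each topic under the current parameters (a reconstruction in the notation introduced above; the normalization runs over topics):

```latex
P(z_k \mid d, w_{i-1}, w_i)
  = \frac{P(w_i \mid w_{i-1}, z_k)\, P(z_k \mid w_{i-1}, d)}
         {\sum_{l=1}^{K} P(w_i \mid w_{i-1}, z_l)\, P(z_l \mid w_{i-1}, d)}
```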

  6. Parameter Estimation Using the EM Algorithm • M-step: let Θ denote the set of model parameters and apply Bayes’ rule.
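The M-step re-estimates both factors from the expected counts, with n(d, w_{i-1}, w_i) the number of times the bigram (w_{i-1}, w_i) occurs in document d (again our reconstruction, term-by-term consistent with the E-step posterior above):

```latex
P(w_i \mid w_{i-1}, z_k) =
  \frac{\sum_{d} n(d, w_{i-1}, w_i)\, P(z_k \mid d, w_{i-1}, w_i)}
       {\sum_{w_j} \sum_{d} n(d, w_{i-1}, w_j)\, P(z_k \mid d, w_{i-1}, w_j)}

P(z_k \mid w_{i-1}, d) =
  \frac{\sum_{w_i} n(d, w_{i-1}, w_i)\, P(z_k \mid d, w_{i-1}, w_i)}
       {\sum_{w_i} n(d, w_{i-1}, w_i)}
```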

  7. Parameter Estimation Using the EM Algorithm • Using Jensen’s inequality

  8. Jensen’s inequality
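Jensen's inequality is what turns the intractable log-of-sum in the likelihood into a tractable lower bound. For any distribution q_k over topics:

```latex
\log \sum_{k=1}^{K} P(w_i \mid w_{i-1}, z_k)\, P(z_k \mid w_{i-1}, d)
  \;\ge\; \sum_{k=1}^{K} q_k \log \frac{P(w_i \mid w_{i-1}, z_k)\, P(z_k \mid w_{i-1}, d)}{q_k}
```

The bound is tight when q_k equals the topic posterior from the E-step, which is exactly why EM alternates between computing that posterior and maximizing the resulting lower bound.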

  9. Parameter Estimation Using the EM Algorithm • appropriate Lagrange multipliers
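The Lagrange multipliers enforce that each conditional distribution sums to one. With Q(Θ) the expected complete-data log-likelihood over parameters Θ, the constrained objective takes the form (multiplier names λ, μ are ours):

```latex
H = Q(\Theta)
  + \sum_{k,\, w_{i-1}} \lambda_{k, w_{i-1}} \Bigl( 1 - \sum_{w_i} P(w_i \mid w_{i-1}, z_k) \Bigr)
  + \sum_{w_{i-1},\, d} \mu_{w_{i-1}, d} \Bigl( 1 - \sum_{k} P(z_k \mid w_{i-1}, d) \Bigr)
```

Setting the partial derivatives of H with respect to each probability to zero and eliminating the multipliers yields the normalized M-step re-estimation formulas.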

  10. Comparison with Nie et al.’s Bigram-PLSA Model • The difference between our model and Nie et al.’s model is in the definition of the topic probability. • We relax the assumption of independence between the latent topics and the context words and achieve a general form of the aspect model that considers the word history in the word-document modeling. • Because the topic probability is additionally conditioned on the context word, our proposed model has more free parameters than Nie et al.’s model.
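With N documents, a vocabulary of M words, and K topics, the free-parameter counts follow directly from the two factorizations: each distribution over K topics has K−1 free values and each distribution over M words has M−1. This is an illustrative derivation under those standard conventions, not the slide's own formula:

```python
def free_params_nie(N, M, K):
    """Nie et al.: P(z_k|d) contributes N*(K-1) free values;
    the shared emission term P(w_i|w_{i-1}, z_k) contributes K*M*(M-1)."""
    return N * (K - 1) + K * M * (M - 1)

def free_params_proposed(N, M, K):
    """Proposed model: the topic term P(z_k|w_{i-1}, d) is conditioned on
    N*M (document, context-word) pairs, so it contributes N*M*(K-1);
    the emission term is unchanged."""
    return N * M * (K - 1) + K * M * (M - 1)

# Difference: the proposed model adds N*(M-1)*(K-1) parameters over Nie et al.'s.
```

For realistic vocabularies the shared emission term K·M·(M−1) dominates both counts, so the relative growth from the richer topic term is modest.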

  11. Experiments

  12. Experiments
