
A PLSA-based Language Model for Conversational Telephone Speech, David Mrva and Philip C. Woodland


Presentation Transcript


  1. A PLSA-based Language Model for Conversational Telephone Speech, David Mrva and Philip C. Woodland. Presented 2004/12/08 by 邱炫盛

  2. Outline • Language Model • PLSA Model • Experimental Results • Conclusion

  3. Language Model • The task of a language model is to calculate the probability of a word sequence • n-gram model: the range of dependencies is limited to the previous n-1 words • Longer-range information is ignored
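For reference, the standard n-gram approximation behind these points can be written as follows (this equation is not on the original slide):

\[
P(w_1,\dots,w_T) \;\approx\; \prod_{i=1}^{T} P(w_i \mid w_{i-n+1},\dots,w_{i-1})
\]

so any dependency reaching further back than the previous n-1 words is discarded.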

  4. Language Model (cont.) • Topic-based language models • Latent Semantic Analysis (LSA) based language model • PLSA-based language model

  5. PLSA Model • PLSA is a general machine learning technique for modeling the co-occurrences of events • Here the events are co-occurrences of words and documents • The hidden variable is called an aspect (topic) • The PLSA model in this paper is a mixture of unigram distributions

  6. PLSA Model (cont.) [Figure: graphical model representations. Unigram model: nodes d → w with parameters P(d) and P(w|d). Aspect model: nodes d → t → w with parameters P(d), P(t|d) and P(w|t).]

  7. PLSA Model (cont.) [Figure: for a document di, each word wj is generated from a mixture over aspects, P(wj|di) = ∑k P(wj|zk) P(zk|di).]

  8. PLSA Model (cont.) • M: number of words in the vocabulary • N: number of documents in the training collection • K: number of aspects (topics)
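A brief sketch of the aspect model and its training criterion in this notation, following Hofmann's standard PLSA formulation (an assumption, not quoted from the slides):

\[
P(d_i, w_j) = P(d_i) \sum_{k=1}^{K} P(w_j \mid z_k)\, P(z_k \mid d_i)
\]

The parameters are trained by maximising the log-likelihood of the training collection,

\[
\mathcal{L} = \sum_{i=1}^{N} \sum_{j=1}^{M} n(d_i, w_j) \log P(d_i, w_j),
\]

where n(d_i, w_j) is the count of word w_j in document d_i.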

  9. PLSA Model (cont.)

  10. PLSA Model (cont.)

  11. PLSA Model (cont.) • Words and documents are conditionally independent given the aspect

  12. PLSA Model (cont.)

  13. PLSA Model (cont.)

  14. PLSA Model (cont.)
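The standard EM updates for the aspect model, again a sketch under the assumption of Hofmann's formulation:

E-step:
\[
P(z_k \mid d_i, w_j) = \frac{P(w_j \mid z_k)\, P(z_k \mid d_i)}{\sum_{l=1}^{K} P(w_j \mid z_l)\, P(z_l \mid d_i)}
\]

M-step:
\[
P(w_j \mid z_k) \propto \sum_{i=1}^{N} n(d_i, w_j)\, P(z_k \mid d_i, w_j), \qquad
P(z_k \mid d_i) \propto \sum_{j=1}^{M} n(d_i, w_j)\, P(z_k \mid d_i, w_j)
\]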

  15. PLSA Model (cont.) • Using PLSA in a language model: the P(zk|di) are used as mixture weights when calculating the word probability • On the test set, the history hi is used in place of di, and these weights are re-estimated from it
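In equation form (a sketch consistent with the description above, using the earlier notation):

\[
P(w_j \mid h_i) = \sum_{k=1}^{K} P(w_j \mid z_k)\, P(z_k \mid h_i)
\]

Only the mixture weights P(zk|hi) are re-estimated on the test history ("folding in"); the topic-conditional word distributions P(wj|zk) keep their training values.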

  16. PLSA Model (cont.)

  17. PLSA Model (cont.) • Accounts for the whole document history of a word irrespective of the document length • Has no means of representing word order, because it is a mixture of unigram distributions • Therefore combine the n-gram with PLSA • When PLSA is used in decoding, a Viterbi-based decoder is not suitable; a two-pass decoder is used instead: • First pass: n-gram, which also outputs confidence scores • Second pass: PLSA, re-scoring the lattices

  18. PLSA Model (cont.) • During re-scoring, the PLSA history comprises all segments of a document except the current segment • The PLSA history is fixed for all words in a given segment • This "history" is therefore referred to as a "context" (ctx): it contains both past and future words
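A minimal code sketch of the re-scoring pass described on slides 17 and 18. The interpolation weight lam, the n-gram callback ngram_prob, and the PLSA parameter array p_w_given_z are illustrative assumptions, not the authors' implementation (in particular, the confidence-score weighting of the context is omitted):

import numpy as np

def fold_in(ctx_word_ids, p_w_given_z, iters=20):
    """Re-estimate the topic weights P(z|ctx) for a fixed context by EM,
    keeping the topic-conditional word distributions P(w|z) fixed."""
    K = p_w_given_z.shape[0]
    p_z = np.full(K, 1.0 / K)                      # uniform initialisation
    for _ in range(iters):
        counts = np.zeros(K)
        for w in ctx_word_ids:
            post = p_z * p_w_given_z[:, w]         # E-step: unnormalised P(z|ctx,w)
            post /= post.sum()
            counts += post
        p_z = counts / len(ctx_word_ids)           # M-step: updated P(z|ctx)
    return p_z

def rescore_segment(word_ids, ctx_word_ids, p_w_given_z, ngram_prob, lam=0.5):
    """Combine n-gram and PLSA probabilities for each word of one segment.
    The PLSA context (all other segments of the document) is fixed for the
    whole segment, so P(z|ctx) is estimated once."""
    p_z_ctx = fold_in(ctx_word_ids, p_w_given_z)
    logprob = 0.0
    for i, w in enumerate(word_ids):
        p_plsa = float(p_z_ctx @ p_w_given_z[:, w])    # sum_k P(w|z_k) P(z_k|ctx)
        p = lam * ngram_prob(word_ids[:i], w) + (1.0 - lam) * p_plsa
        logprob += np.log(p)
    return logprob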

  19. Experimental Results • Two test sets • NIST's Hub5 speech-to-text evaluation 2002 (eval02) • Switchboard I and II • 62k words, 19k from Switchboard I • NIST's Rich Transcription Spring 2003 CTS speech-to-text evaluation (eval03) • Switchboard II phase 5 and Fisher • 74k words, 36k from Fisher

  20. Experimental Results (cont.)

  21. Experimental Results (cont.) • The perplexity reduction is greater if the PLSA training text is related to the test set • The perplexity with the reference context (ref.ctx, b=10) is lower than with the recognised context (rec.ctx, b=10) • b=10 is the best value • Using confidence scores makes the PLSA model less sensitive to b

  22. Experimental Results (cont.)

  23. Experimental Results (cont.) • Baseline: an n-gram trained on 20M words of Fisher transcripts; the number of classes was increased to 500 • PLSA: 750 aspects, 100 EM iterations • The test data were separated into eval03dev and eval03tst • The interpolation weights of the word and class-based n-grams were set to minimize perplexity • A slight improvement was obtained when side-based documents were used

  24. Experimental Results (cont.) • b=100 is the best value • The PLSA model needs much more data to estimate the topic of a Fisher conversation than of a Switchboard I conversation • Having a long context is very important

  25. Experimental Results (cont.)

  25. Conclusion • PLSA with the suggested modifications reduces the perplexity of the language model • Future work: • Re-score lattices to calculate WERs • Combine the semantics-oriented model with a syntax-based language model
