Presentation Transcript


  1. N-best list reranking using higher level phonetic, lexical, syntactic and semantic knowledge sources. Mithun Balakrishna, Dan Moldovan and Ellis K. Cave. Presenter: Hsuan-Sheng Chiu

  2. M. Balakrishna, D. Moldovan, and E. K. Cave, “N-best list reranking using higher level phonetic, lexical, syntactic and semantic knowledge sources”, ICASSP 2006 • Substantial improvements can be gained by applying a strong postprocessing mechanism such as reranking, even at a small n-best depth

  3. Proposed architecture • Reduce LVCSR WER by using these knowledge sources in tandem so that they complement each other

  4. Features • Score of hypothesis

  5. Features (cont.) • Phonetic features • SVM Phoneme Class Posterior Probability
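
A minimal sketch of how phoneme-class posterior probabilities could be obtained from an SVM, assuming MFCC-style frame features and broad phoneme-class labels are available; the feature dimensions, class inventory, and the scoring rule at the end are illustrative assumptions, not the paper's exact setup:

import numpy as np
from sklearn.svm import SVC

# Hypothetical training data: one acoustic frame per row, one phoneme-class label per frame.
X_train = np.random.rand(1000, 39)        # 39-dim MFCC + deltas (assumed)
y_train = np.random.randint(0, 8, 1000)   # 8 broad phoneme classes (assumed)

# probability=True enables Platt scaling, so the SVM outputs class posteriors.
svm = SVC(kernel="rbf", probability=True)
svm.fit(X_train, y_train)

# Posterior P(class | frame) for each frame covered by a hypothesis.
X_hyp = np.random.rand(120, 39)
posteriors = svm.predict_proba(X_hyp)     # shape: (frames, classes), columns follow svm.classes_

# One possible per-hypothesis phonetic score: mean log-posterior of the classes
# given by the LVCSR phone alignment (an assumption for illustration; indexing by
# label assumes all classes 0..7 appear in training).
lvcsr_classes = np.random.randint(0, 8, 120)
phonetic_score = np.log(posteriors[np.arange(120), lvcsr_classes] + 1e-10).mean()
print(phonetic_score)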

  6. Features (cont.) • LVCSR-SVM Phoneme Classification Accuracy Probability for a hypothesis W

  7. Features (cont.) • Lexical Features • Use n-best list word boundary information (avoiding string alignment) to find dominant words and score each hypothesis based on the presence of these dominant words • Syntactic Features • Use an immediate-head parser, since n-best reranking does not impose a left-to-right processing constraint • Semantic Features • Use the semantic parser ASSERT to extract statistical semantic knowledge
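
A small sketch of one way the lexical feature could work, assuming each hypothesis carries (word, start, end) time boundaries; the time-binning and thresholds are illustrative assumptions, not the paper's definition of dominant words:

from collections import Counter

def dominant_words(nbest, time_bin=0.05, min_frac=0.5):
    """Words that occur at roughly the same start time in at least min_frac of the
    hypotheses; nbest is a list of hypotheses, each a list of (word, start, end)."""
    counts = Counter((w, round(s / time_bin)) for hyp in nbest for (w, s, e) in hyp)
    threshold = min_frac * len(nbest)
    return {key for key, c in counts.items() if c >= threshold}

def lexical_score(hyp, dominant, time_bin=0.05):
    # Fraction of the hypothesis' words that are dominant across the n-best list.
    hits = sum(1 for (w, s, e) in hyp if (w, round(s / time_bin)) in dominant)
    return hits / max(len(hyp), 1)

nbest = [
    [("the", 0.00, 0.12), ("cat", 0.12, 0.45), ("sat", 0.45, 0.80)],
    [("the", 0.00, 0.12), ("cap", 0.12, 0.45), ("sat", 0.45, 0.80)],
    [("the", 0.00, 0.12), ("cat", 0.12, 0.45), ("sad", 0.45, 0.80)],
]
dom = dominant_words(nbest)
print([lexical_score(h, dom) for h in nbest])   # -> [1.0, 0.67, 0.67] roughly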

  8. Experimental results • The reranking score is a simple linear weighted combination of the individual scores from each knowledge source • The proposed reranking mechanism achieves its best WER improvement at the 15-best depth, with a 2.9% absolute WER reduction • This is not very surprising, since nearly 80% of the total WER improvement available to the oracle lies within the 20-best hypotheses
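
A minimal sketch of the linear weighted combination described on this slide; the feature names, weights, and scores are made-up placeholders, and a real system would tune the weights on held-out data:

def rerank(nbest_features, weights):
    """nbest_features: one dict of feature name -> score per hypothesis.
    weights: dict of feature name -> weight. Returns hypothesis indices, best first."""
    def combined(feats):
        return sum(weights.get(name, 0.0) * value for name, value in feats.items())
    return sorted(range(len(nbest_features)),
                  key=lambda i: combined(nbest_features[i]),
                  reverse=True)

# Toy 3-best list with made-up knowledge-source scores.
nbest = [
    {"lvcsr": -120.4, "phonetic": 0.61, "lexical": 0.80, "syntactic": -35.1, "semantic": 0.40},
    {"lvcsr": -119.8, "phonetic": 0.55, "lexical": 0.70, "syntactic": -37.9, "semantic": 0.30},
    {"lvcsr": -121.0, "phonetic": 0.66, "lexical": 0.85, "syntactic": -34.0, "semantic": 0.45},
]
weights = {"lvcsr": 1.0, "phonetic": 2.0, "lexical": 1.5, "syntactic": 0.1, "semantic": 1.0}
print(rerank(nbest, weights))   # -> [0, 1, 2] with these toy numbers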

  9. Efficient estimation of language model statistics of spontaneous speech via statistical transformation model. Yuya Akita, Tatsuya Kawahara

  10. Efficient estimation of language model statistics of spontaneous speech via statistical transformation model • Estimate LM statistics of spontaneous speech from a document-style text database • Machine translation model (P(X|Y): translation model) • Transformation model => applied to counts of n-word sequences
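
A minimal sketch of the idea, under assumed data structures (the n-grams, probabilities, and fallback behaviour below are illustrative, not the paper's model): document-style n-gram counts are converted into spontaneous-style counts by weighting them with transformation probabilities.

from collections import defaultdict

def transform_counts(doc_ngram_counts, trans_prob):
    """doc_ngram_counts: written n-gram (tuple of words) -> count.
    trans_prob: written n-gram -> list of (spoken n-gram, probability)."""
    spoken_counts = defaultdict(float)
    for written, count in doc_ngram_counts.items():
        # If no transformation is known for this n-gram, keep it unchanged (assumption).
        for spoken, p in trans_prob.get(written, [(written, 1.0)]):
            spoken_counts[spoken] += count * p
    return spoken_counts

# Toy example: a written bigram is sometimes realised with a filler inserted.
doc_counts = {("I", "think"): 100}
trans = {("I", "think"): [(("I", "uh", "think"), 0.3), (("I", "think"), 0.7)]}
print(dict(transform_counts(doc_counts, trans)))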

  11. SMT-based transformation

  12. Three characteristics of spontaneous speech • Insertion of fillers • Fillers must be removed from transcripts for documentation • Deletion of postpositional particles • Particles indicating the nominative case are often omitted, while possessive-case particles are rarely dropped • Substitution of colloquial expressions • Colloquial expressions must always be corrected in document-style text
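
The three phenomena can be illustrated with toy rewrite rules from document-style to spontaneous-style text; the fillers, particles, probabilities, and romanised examples below are illustrative assumptions, and the paper models these phenomena statistically rather than with hand-written rules:

import random

FILLERS = ["eeto", "ano"]               # filler insertion
DROPPABLE_PARTICLES = {"ga", "wa"}      # particles often omitted in speech (assumed set)
COLLOQUIAL = {"keredomo": "kedo"}       # colloquial substitution (assumed pair)

def to_spontaneous(tokens, filler_prob=0.1, drop_prob=0.5):
    out = []
    for tok in tokens:
        if random.random() < filler_prob:
            out.append(random.choice(FILLERS))   # insert a filler
        if tok in DROPPABLE_PARTICLES and random.random() < drop_prob:
            continue                             # delete the particle
        out.append(COLLOQUIAL.get(tok, tok))     # substitute a colloquial form if one exists
    return out

print(to_spontaneous(["watashi", "ga", "hanashimasu", "keredomo"]))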

  13. Transformation probability • Back-off scheme for POS-based model
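
A minimal sketch of a back-off scheme in the spirit of this slide, with assumed data structures: use the word-based transformation probability when the word pattern has been observed, otherwise back off to a POS-based estimate (the back-off weight and tag set are illustrative assumptions).

def transformation_prob(written, spoken, word_model, pos_model, pos_of, backoff_weight=0.4):
    """word_model: (written word, spoken word) -> probability.
    pos_model: (written POS, spoken POS) -> probability.
    pos_of: word -> POS tag. All structures here are illustrative."""
    if (written, spoken) in word_model:
        return word_model[(written, spoken)]              # word-based model
    pos_pair = (pos_of.get(written, "UNK"), pos_of.get(spoken, "UNK"))
    return backoff_weight * pos_model.get(pos_pair, 0.0)  # back off to POS-based model

word_model = {("keredomo", "kedo"): 0.6}
pos_model = {("PARTICLE", "PARTICLE"): 0.2}
pos_of = {"keredomo": "PARTICLE", "kedo": "PARTICLE", "ga": "PARTICLE", "wa": "PARTICLE"}
print(transformation_prob("keredomo", "kedo", word_model, pos_model, pos_of))  # 0.6 (word-based)
print(transformation_prob("ga", "wa", word_model, pos_model, pos_of))          # 0.08 (POS back-off)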

  14. Experimental setup • Document-style text (for baseline model) • National Congress of Japan • 71M words • Training data for transformation model • 666K words • Test data • 63K words • Comparison corpus • Corpus of Spontaneous Japanese (CSJ) • 2.9M words

  15. Experimental results
