  1. A word graph algorithm for large vocabulary continuous speech recognition Stefan Ortmanns, Hermann Ney, Xavier Aubert Bang-Xuan Huang, Department of Computer Science & Information Engineering, National Taiwan Normal University

  2. Outline • Introduction • Review of the integrated method • Word graph method • Experimental results

  3. Introduction • Why is LVCSR difficult? • This paper describes a method for constructing a word graph (or lattice) for large vocabulary, continuous speech recognition. • The advantage of a word graph is that the final search at the word level can be carried out with a more complex language model. • The difficulty in efficiently constructing a good word graph is the following: the start time of a word depends, in general, on the predecessor words. • Integrated method: word-conditioned lexical tree search algorithm (bigram, trigram)

  4. Review of the integrated method (1/4)

  5. Review of the integrated method (2/4) • where σ_v^max(t, s) is the optimum predecessor state for the hypothesis (t, s) and predecessor word v. • q(x_t, s | σ) is the product of the transition and emission probabilities of the hidden Markov models used for the context-dependent or context-independent phonemes. • At word boundaries, we have to find the best predecessor word v for each word w. For this purpose, we define a word boundary score (reconstructed below), where the state S_w denotes the terminal state of word w in the lexical tree.
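The equations this slide refers to were embedded as images. Below is a reconstruction in the notation of the Ortmanns, Ney, and Aubert paper (a hedged sketch, not a verbatim copy of the slide): Q_v(t, s) is the score of the best partial path ending at time t in state s of the lexical tree copy for predecessor word v, and B_v(t, s) is the corresponding start time of that tree copy. The within-word recursion and the word-boundary definition then read:

    Q_v(t, s) = \max_{\sigma} \{ q(x_t, s \mid \sigma) \cdot Q_v(t-1, \sigma) \}
    B_v(t, s) = B_v(t-1, \sigma_v^{\max}(t, s))
    H(w; t)   = \max_{v} \{ p(w \mid v) \cdot Q_v(t, S_w) \}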

  6. Review of the integrated method (3/4) • To propagate the path hypotheses into the lexical tree hypotheses, or to start them up if they do not exist yet, we have to pass on the score and the time index before processing the hypotheses for time frame t (see the reconstruction below). • Garbage collection • Pruning techniques and language model look-ahead
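As on the previous slide, the start-up equations were an image; a reconstruction consistent with the notation above, with s = 0 denoting the virtual root state of the tree copy:

    Q_v(t-1, s = 0) = H(v; t-1)
    B_v(t-1, s = 0) = t-1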

  7. Review of the integrated method (4/4) • Extension to trigram language models

  8. Word graph method • The fundamental problem of word graph construction: hypothesizing a word w and its ending time t, how can we find a limited number of "most likely" predecessor words v? [Figure: a predecessor word v followed by word w, ending at time t]

  9. Word-Pair Approximation: for each path hypothesis, the position of the word boundary between the last two words is independent of the other words of this path hypothesis.
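In the notation reconstructed above, this means the boundary time of word w ending at time t can be read directly off the back pointer of the v-conditioned tree copy, independently of the words spoken before v (a hedged formalization, not taken verbatim from the slide):

    \tau(v, w; t) = B_v(t, S_w)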

  10. Word graph generation algorithm
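The algorithm itself appeared as a figure on this slide. The following is a minimal Python sketch of the bookkeeping it implies under the word-pair approximation; the function name, tuple layout, and the higher-is-better score convention are assumptions for illustration, not the paper's interface:

    from collections import defaultdict

    def build_word_graph(word_end_hyps):
        # word_end_hyps: iterable of (t, w, v, tau, score), where t is the
        # ending time of word w, v the predecessor word conditioning the
        # tree copy, tau = B_v(t, S_w) the boundary time from the traceback,
        # and score the word score h(v, w; tau, t).
        best = {}
        for (t, w, v, tau, score) in word_end_hyps:
            # Word-pair approximation: tau depends only on (v, w, t), so it
            # suffices to keep, per word arc (tau, t, w), the best score
            # over all predecessors v.
            key = (tau, t, w)
            if key not in best or score > best[key]:
                best[key] = score
        # Graph nodes are time frames; each arc carries a word and its score.
        arcs = defaultdict(list)
        for (tau, t, w), score in best.items():
            arcs[tau].append((t, w, score))
        return arcs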

  11. • Word-conditioned search – a virtual tree copy is explored for each active LM history – more complicated HMM state manipulation (dependent on the LM complexity) • Time-conditioned search – a virtual tree copy is entered at each time frame by the word-end hypotheses ending at that time – more complicated LM-level recombination

  12. Garbage collection • For large-vocabulary recognition, it is essential to keep the storage costs as low as possible. • To reduce the memory requirements, back pointers and traceback arrays are used; the traceback arrays record the decisions about the best predecessor word for each word start-up. • To remove obsolete hypothesis entries from the traceback arrays, we apply a garbage collection or purging method. • In principle, this garbage collection process can be performed every time frame, but to reduce the overhead it is sufficient to perform it at regular time intervals, say every 50th time frame. (A sketch follows below.)
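The paper does not spell the purging routine out in code; this is a minimal mark-and-compact sketch under the assumption that the traceback is an append-only array whose entries point backwards in time. All names (Hyp, purge_traceback) are hypothetical:

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class Hyp:
        back_idx: Optional[int]  # index into the traceback array; None at sentence start

    def purge_traceback(traceback, active_hyps):
        # traceback: list of (word, time, back_idx) tuples, appended in time
        # order, so every back pointer refers to an earlier index.
        # Mark phase: follow back pointers from every live hypothesis.
        live = set()
        for hyp in active_hyps:
            idx = hyp.back_idx
            while idx is not None and idx not in live:
                live.add(idx)
                idx = traceback[idx][2]
        # Compact phase: copy live entries and remap old indices to new ones.
        remap, compacted = {}, []
        for old_idx in sorted(live):  # ancestors first, since pointers go backwards
            word, time, back = traceback[old_idx]
            remap[old_idx] = len(compacted)
            compacted.append((word, time, remap.get(back)))
        for hyp in active_hyps:
            hyp.back_idx = remap.get(hyp.back_idx)
        return compacted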

  13. Pruning techniques and language model look-ahead
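This slide's details were also an image. As a hypothetical illustration of the two techniques named in the heading, the sketch below applies beam pruning to tree-state hypotheses whose scores are anticipated with an LM look-ahead term (the best LM probability over all words still reachable below a state). Negative-log scores are assumed, so smaller is better:

    def prune_states(hyps, lm_lookahead, beam_width=10.0):
        # hyps: dict mapping (v, s) -> negative-log score Q_v(t, s)
        # lm_lookahead: dict mapping (v, s) -> best negative-log LM probability
        # over the words reachable below state s (the look-ahead pi_v(s))
        anticipated = {key: q + lm_lookahead.get(key, 0.0) for key, q in hyps.items()}
        best = min(anticipated.values())
        # Keep only hypotheses whose anticipated score lies inside the beam.
        return {key: hyps[key] for key, a in anticipated.items() if a <= best + beam_width}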
